Karl Popper

[Image: Sir Karl Popper, bust in the Arkadenhof of the University of Vienna]
Sir Karl Raimund Popper (28 July 1902 – 17 September 1994) was an Austrian-British philosopher and professor. (Karl Popper (1902–94), advocated by Andrew Marr, BBC In Our Time – Greatest Philosopher, retrieved Jan 2015; Adams, I.; Dyson, R.W., Fifty Major Political Thinkers, Routledge, 2007, p. 196: "He became a British citizen in 1945".) He is generally regarded as one of the greatest philosophers of science of the 20th century. (Shea, B., "Popper, Karl: Philosophy of Science", in Internet Encyclopedia of Philosophy, James Fieser and Bradley Dowden (eds.). Retrieved 10 Feb 2016.)
Popper is known for his rejection of the classical inductivist views on the scientific method, in favour of empirical falsification: A theory in the empirical sciences can never be proven, but it can be falsified, meaning that it can and should be scrutinized by decisive experiments. Popper is also known for his opposition to the classical justificationist account of knowledge, which he replaced with critical rationalism, namely "the first non-justificational philosophy of criticism in the history of philosophy."William W. Bartley: Rationality versus the Theory of Rationality, In Mario Bunge: The Critical Approach to Science and Philosophy (The Free Press of Glencoe, 1964), section IX.
In political discourse, he is known for his vigorous defence of liberal democracy and the principles of social criticism that he came to believe made a flourishing open society possible. His political philosophy embraces ideas from all major democratic political ideologies and attempts to reconcile them: socialism/social democracy, libertarianism/classical liberalism and conservatism.
Personal life
Family and training
Karl Popper was born in Vienna (then in Austria-Hungary) in 1902 to upper-middle-class parents. All of Karl Popper's grandparents were Jewish, but the Popper family converted to Lutheranism before Karl was born (Magee, Bryan, The Story of Philosophy, New York: DK Publishing, 2001, p. 221, ISBN 0-7894-3511-X), and so he received a Lutheran baptism. They understood this as part of their cultural assimilation, not as an expression of devout belief (Karl Popper: Kritischer Rationalismus und Verteidigung der offenen Gesellschaft, in Josef Rattner, Gerhard Danzer (eds.): Europäisches Österreich: Literatur- und geistesgeschichtliche Essays über den Zeitraum 1800–1980, p. 293). Karl's father, Simon Siegmund Carl Popper, was a lawyer from Bohemia and a doctor of law at the University of Vienna; his mother, Jenny Schiff, was of Silesian and Hungarian descent. Karl Popper's uncle was the Austrian philosopher Josef Popper-Lynkeus. After establishing themselves in Vienna, the Poppers made a rapid social climb in Viennese society: Simon Siegmund Carl became a partner in the law firm of Vienna's liberal Burgomaster Grübl and, after Grübl's death in 1898, took over the business. Malachi Hacohen records that Grübl's first name was Raimund, from which Karl received his middle name (Malachi Haim Hacohen, Karl Popper – The Formative Years, 1902–1945: Politics and Philosophy in Interwar Vienna, Cambridge: Cambridge University Press, 2001, pp. 10 & 23, ISBN 0-521-47053-6); Popper himself, in his autobiography, erroneously recalls that Grübl's first name was Carl (Karl R. Popper, [1976] 2002, Unended Quest: An Intellectual Autobiography, p. 6). His father was a bibliophile who had 12,000–14,000 volumes in his personal library (Raphael, F., The Great Philosophers, London: Phoenix, p. 447, ISBN 0-7538-1136-7) and took an interest in philosophy, the classics, and social and political issues. Popper inherited both the library and the disposition from him (Manfred Lube: Karl R. Popper – Die Bibliothek des Philosophen als Spiegel seines Lebens, Imprimatur. Ein Jahrbuch für Bücherfreunde, Neue Folge Band 18 (2003), S. 207–38, ISBN 3-447-04723-2). Later, he would describe the atmosphere of his upbringing as having been "decidedly bookish."
Popper left school at the age of 16 and attended lectures in mathematics, physics, philosophy, psychology and the history of music as a guest student at the University of Vienna. In 1919, Popper became attracted by Marxism and subsequently joined the Association of Socialist School Students. He also became a member of the Social Democratic Workers' Party of Austria, which was at that time a party that fully adopted the Marxist ideology. After the street battle in the Hörlgasse on 15 June 1919, when police shot eight of his unarmed party comrades, he became disillusioned by what he saw to be the "pseudo-scientific" historical materialism of Marx, abandoned the ideology, and remained a supporter of social liberalism throughout his life.
He worked in street construction for a short time, but was unable to cope with the heavy labour. Continuing to attend university as a guest student, he started an apprenticeship as a cabinetmaker, which he completed as a journeyman. At that time he dreamed of starting a daycare facility for children, for which he assumed the ability to make furniture might be useful. After that he did voluntary service in one of psychoanalyst Alfred Adler's clinics for children. In 1922, he took his matura by way of second-chance education and finally joined the university as an ordinary student. He completed his examination as an elementary teacher in 1924 and started working at an after-school care club for socially endangered children. In 1925, he went to the newly founded Pädagogisches Institut and continued studying philosophy and psychology. Around that time he started courting Josefine Anna Henninger, who later became his wife.
In 1928, he earned a doctorate in psychology, under the supervision of Karl Bühler. His dissertation was entitled "Die Methodenfrage der Denkpsychologie" (The question of method in cognitive psychology). In 1929, he obtained the authorisation to teach mathematics and physics in secondary school, which he started doing. He married his colleague Josefine Anna Henninger (1906–1985) in 1930. Fearing the rise of Nazism and the threat of the Anschluss, he started to use the evenings and the nights to write his first book Die beiden Grundprobleme der Erkenntnistheorie (The Two Fundamental Problems of the Theory of Knowledge). He needed to publish one to get some academic position in a country that was safe for people of Jewish descent. However, he ended up not publishing the two-volume work, but a condensed version of it with some new material, Logik der Forschung (The Logic of Scientific Discovery), in 1934. Here, he criticised psychologism, naturalism, inductionism, and logical positivism, and put forth his theory of potential falsifiability as the criterion demarcating science from non-science. In 1935 and 1936, he took unpaid leave to go to the United Kingdom for a study visit.A. C. Ewing was responsible for Karl Popper's 1936 invitation to Cambridge (Edmonds and Eidinow 2001, p. 67).
Academic life
In 1937, Popper finally managed to get a position that allowed him to emigrate to New Zealand, where he became lecturer in philosophy at Canterbury University College of the University of New Zealand in Christchurch. It was here that he wrote his influential work The Open Society and its Enemies. In Dunedin he met the Professor of Physiology John Carew Eccles and formed a lifelong friendship with him. In 1946, after the Second World War, he moved to the United Kingdom to become reader in logic and scientific method at the London School of Economics. Three years later, in 1949, he was appointed professor of logic and scientific method at the University of London. Popper was president of the Aristotelian Society from 1958 to 1959. He retired from academic life in 1969, though he remained intellectually active for the rest of his life. In 1985, he returned to Austria so that his wife could have her relatives around her during the last months of her life; she died in November that year. After the Ludwig Boltzmann Gesellschaft failed to establish him as the director of a newly founded branch researching the philosophy of science, he went back again to the United Kingdom in 1986, settling in Kenley, Surrey.
[Image: Sir Karl Popper's gravesite in Lainzer cemetery, Vienna, Austria]
Popper died of "complications of cancer, pneumonia and kidney failure" in Kenley at the age of 92 on 17 September 1994. He had been working continuously on his philosophy until two weeks before, when he suddenly fell terminally ill. After cremation, his ashes were taken to Vienna and buried at Lainzer cemetery adjacent to the ORF Centre, where his wife Josefine Anna Popper (called ‘Hennie’) had already been buried. Popper's estate is managed by his secretary and personal assistant Melitta Mew and her husband Raymond. Popper's manuscripts went to the Hoover Institution at Stanford University, partly during his lifetime and partly as supplementary material after his death. Klagenfurt University possesses Popper's library, including his precious bibliophilia, as well as hard copies of the original Hoover material and microfilms of the supplementary material. The remaining parts of the estate were mostly transferred to The Karl Popper Charitable Trust. In October 2008 Klagenfurt University acquired the copyrights from the estate.
Popper and his wife chose not to have children because of the circumstances of war in the early years of their marriage. Popper commented that this "was perhaps a cowardly but in a way a right decision".
Honours and awards
[Image: Sir Karl Popper with Prof. Cyril Höschl. Popper received an honorary doctorate from Charles University in Prague (May 1994)]
Popper won many awards and honours in his field, including the Lippincott Award of the American Political Science Association, the Sonning Prize, the Otto Hahn Peace Medal of the United Nations Association of Germany in Berlin and fellowships in the Royal Society, British Academy, London School of Economics, King's College London, Darwin College, Cambridge, Austrian Academy of Sciences and Charles University, Prague. Austria awarded him the Grand Decoration of Honour in Gold for Services to the Republic of Austria in 1986, and the Federal Republic of Germany its Grand Cross with Star and Sash of the Order of Merit, and the peace class of the Order Pour le Mérite. He received the Humanist Laureate Award from the International Academy of Humanism. He was knighted by Queen Elizabeth II in 1965, and was elected a Fellow of the Royal Society in 1976. He was invested with the Insignia of a Companion of Honour in 1982.
Other awards and recognition for Popper included the City of Vienna Prize for the Humanities (1965), Karl Renner Prize (1978), Austrian Decoration for Science and Art (1980), Dr. Leopold Lucas Prize of the University of Tübingen (1980), Ring of Honour of the City of Vienna (1983) and the Premio Internazionale of the Italian Federico Nietzsche Society (1988). In 1992, he was awarded the Kyoto Prize in Arts and Philosophy for "symbolising the open spirit of the 20th century" and for his "enormous influence on the formation of the modern intellectual climate".
Philosophy
Background to Popper's ideas
Karl Popper's rejection of Marxism during his teenage years left a profound mark on his thought. He had at one point joined a socialist association, and for a few months in 1919 considered himself a communist. During this time he became familiar with the Marxist view of economics, class conflict, and history. Although he quickly became disillusioned with the views expounded by Marxism, his flirtation with the ideology led him to distance himself from those who believed that spilling blood for the sake of a revolution was necessary. He came to realise that when it came to sacrificing human lives, one was to think and act with extreme prudence.
The failure of democratic parties to prevent fascism from taking over Austrian politics in the 1920s and 1930s traumatised Popper. He suffered from the direct consequences of this failure, since events after the Anschluss, the annexation of Austria by the German Reich in 1938, forced him into permanent exile. His most important works in the field of social science—The Poverty of Historicism (1944) and The Open Society and Its Enemies (1945)—were inspired by his reflection on the events of his time and represented, in a sense, a reaction to the prevalent totalitarian ideologies that then dominated Central European politics. His books defended democratic liberalism as a social and political philosophy. They also represented extensive critiques of the philosophical presuppositions underpinning all forms of totalitarianism.
Popper puzzled over the stark contrast between the non-scientific character of Freud and Adler's theories in the field of psychology and the revolution set off by Einstein's theory of relativity in physics in the early 20th century. Popper thought that Einstein's theory, as a theory properly grounded in scientific thought and method, was highly "risky", in the sense that it was possible to deduce consequences from it which were, in the light of the then-dominant Newtonian physics, highly improbable (e.g., that light is deflected towards solid bodies—confirmed by Eddington's experiments in 1919), and which would, if they turned out to be false, falsify the whole theory. In contrast, nothing could, even in principle, falsify psychoanalytic theories. He thus came to the conclusion that psychoanalytic theories had more in common with primitive myths than with genuine science.
This led Popper to conclude that what were regarded as the remarkable strengths of psychoanalytical theories were actually their weaknesses. Psychoanalytical theories were crafted in a way that made them able to refute any criticism and to give an explanation for every possible form of human behaviour. The nature of such theories made it impossible for any criticism or experiment—even in principle—to show them to be false. This realisation had an important consequence when Popper later tackled the problem of demarcation in the philosophy of science, as it led him to posit that the strength of a scientific theory lies in its both being susceptible to falsification, and not actually being falsified by criticism made of it. He considered that if a theory cannot, in principle, be falsified by criticism, it is not a scientific theory.One of the severest critics of Popper's so-called demarcation thesis was Adolf Grünbaum, 'Is Falsifiability the Touchstone of Scientific Rationality?' (1976), and 'The Degeneration of Popper's Theory of Demarcation' (1989), both in his Collected Works (edited by Thomas Kupka), vol. I, New York: Oxford University Press 2013, ch. 1 (pp. 9–42) & ch. 2 (pp. 43–61)
Philosophy of science
Falsifiability/problem of demarcation
Popper coined the term "critical rationalism" to describe his philosophy. Concerning the method of science, the term indicates his rejection of classical empiricism, and the classical observationalist-inductivist account of science that had grown out of it. Popper argued strongly against the latter, holding that scientific theories are abstract in nature, and can be tested only indirectly, by reference to their implications. He also held that scientific theory, and human knowledge generally, is irreducibly conjectural or hypothetical, and is generated by the creative imagination to solve problems that have arisen in specific historico-cultural settings.
Logically, no number of positive outcomes at the level of experimental testing can confirm a scientific theory, but a single counterexample is logically decisive: it shows the theory, from which the implication is derived, to be false. To say that a given statement (e.g., the statement of a law of some scientific theory)—call it "T"—is "falsifiable" does not mean that "T" is false. Rather, it means that, if "T" is false, then (in principle), "T" could be shown to be false, by observation or by experiment. Popper's account of the logical asymmetry between verification and falsifiability lies at the heart of his philosophy of science. It also inspired him to take falsifiability as his criterion of demarcation between what is, and is not, genuinely scientific: a theory should be considered scientific if, and only if, it is falsifiable. This led him to attack the claims of both psychoanalysis and contemporary Marxism to scientific status, on the basis that their theories are not falsifiable.
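This asymmetry can be stated schematically. The rendering below is illustrative rather than Popper's own notation, with T standing for a theory and O for one of its observable consequences:

\[
(T \rightarrow O) \land \neg O \;\vdash\; \neg T \qquad \text{(modus tollens: one false consequence refutes } T\text{)}
\]
\[
(T \rightarrow O) \land O \;\nvdash\; T \qquad \text{(affirming the consequent: a true consequence does not establish } T\text{)}
\]

However many consequences of T are observed to hold, the second schema never becomes a valid inference, whereas a single observed counterexample suffices for the first; this is the asymmetry Popper exploits.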
Popper also wrote extensively against the famous Copenhagen interpretation of quantum mechanics. He strongly disagreed with Niels Bohr's instrumentalism and supported Albert Einstein's realist approach to scientific theories about the universe. Popper's falsifiability resembles Charles Peirce's nineteenth-century fallibilism. In Of Clouds and Clocks (1966), Popper remarked that he wished he had known of Peirce's work earlier.
In All Life is Problem Solving, Popper sought to explain the apparent progress of scientific knowledge—that is, how it is that our understanding of the universe seems to improve over time. This problem arises from his position that the truth content of our theories, even the best of them, cannot be verified by scientific testing, but can only be falsified. Again, in this context the word "falsified" does not refer to something being "fake"; rather, that something can be (i.e., is capable of being) shown to be false by observation or experiment. Some things simply do not lend themselves to being shown to be false, and therefore, are not falsifiable. If so, then how is it that the growth of science appears to result in a growth in knowledge? In Popper's view, the advance of scientific knowledge is an evolutionary process characterised by his formula:

P1 → TT → EE → P2

In response to a given problem situation (P1), a number of competing conjectures, or tentative theories (TT), are systematically subjected to the most rigorous attempts at falsification possible. This process, error elimination (EE), performs a similar function for science that natural selection performs for biological evolution. Theories that better survive the process of refutation are not more true, but rather, more "fit"—in other words, more applicable to the problem situation at hand (P1). Consequently, just as a species' biological fitness does not ensure continued survival, neither does rigorous testing protect a scientific theory from refutation in the future. Yet, as it appears that the engine of biological evolution has, over many generations, produced adaptive traits equipped to deal with more and more complex problems of survival, likewise, the evolution of theories through the scientific method may, in Popper's view, reflect a certain type of progress: toward more and more interesting problems (P2). For Popper, it is in the interplay between the tentative theories (conjectures) and error elimination (refutation) that scientific knowledge advances toward greater and greater problems; in a process very much akin to the interplay between genetic variation and natural selection.
Falsification/problem of induction
Among his contributions to philosophy is his claim to have solved the philosophical problem of induction. He states that while there is no way to prove that the sun will rise, it is possible to formulate the theory that every day the sun will rise; if it does not rise on some particular day, the theory will be falsified and will have to be replaced by a different one. Until that day, there is no need to reject the assumption that the theory is true. Nor is it rational according to Popper to make instead the more complex assumption that the sun will rise until a given day, but will stop doing so the day after, or similar statements with additional conditions.
Such a theory would be true with higher probability, because it cannot be attacked so easily: to falsify the first one, it is sufficient to find that the sun has stopped rising; to falsify the second one, one additionally needs the assumption that the given day has not yet been reached. Popper held that it is the least likely, or most easily falsifiable, or simplest theory (attributes which he identified as all the same thing) that explains known facts that one should rationally prefer. His opposition to positivism, which held that it is the theory most likely to be true that one should prefer, here becomes very apparent. It is impossible, Popper argues, to ensure a theory to be true; it is more important that its falsity can be detected as easily as possible.
Popper and David Hume agreed that there is often a psychological belief that the sun will rise tomorrow, but both denied that there is logical justification for the supposition that it will, simply because it always has in the past. Popper writes, "I approached the problem of induction through Hume. Hume, I felt, was perfectly right in pointing out that induction cannot be logically justified." (Conjectures and Refutations, p. 55)
Rationality
Popper held that rationality is not restricted to the realm of empirical or scientific theories, but that it is merely a special case of the general method of criticism, the method of finding and eliminating contradictions in knowledge without ad hoc measures. According to this view, rational discussion about metaphysical ideas, about moral values and even about purposes is possible. Popper's student W.W. Bartley III tried to radicalise this idea and made the controversial claim that not only can criticism go beyond empirical knowledge, but that everything can be rationally criticised.
To Popper, who was an anti-justificationist, traditional philosophy is misled by the false principle of sufficient reason. He thinks that no assumption can ever be or needs ever to be justified, so a lack of justification is not a justification for doubt. Instead, theories should be tested and scrutinised. It is not the goal to bless theories with claims of certainty or justification, but to eliminate errors in them. He writes, "there are no such things as good positive reasons; nor do we need such things [...] But [philosophers] obviously cannot quite bring [themselves] to believe that this is my opinion, let alone that it is right" (The Philosophy of Karl Popper, p. 1043)
Philosophy of arithmetic
Popper's principle of falsifiability runs into prima facie difficulties when the epistemological status of mathematics is considered. It is difficult to conceive how simple statements of arithmetic, such as "2 + 2 = 4", could ever be shown to be false. If they are not open to falsification, they cannot be scientific. If they are not scientific, it needs to be explained how they can be informative about real-world objects and events.
Popper's solution (Popper, Karl Raimund (1946), Aristotelian Society Supplementary Volume XX) was an original contribution to the philosophy of mathematics. His idea was that a number statement such as "2 apples + 2 apples = 4 apples" can be taken in two senses. In one sense it is irrefutable and logically true; in the second sense it is factually true and falsifiable. Concisely, the pure mathematics "2 + 2 = 4" is always true, but, when the formula is applied to real-world apples, it is open to falsification. (Gregory, Frank Hutson (1996), Arithmetic and Reality: A Development of Popper's Ideas, City University of Hong Kong; republished in Philosophy of Mathematics Education Journal No. 26 (December 2011).)
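The two senses can be set side by side schematically; this is an illustrative contrast, not Popper's own formulation:

\[
\underbrace{2 + 2 = 4}_{\text{pure arithmetic: logically true, not open to refutation}}
\qquad\qquad
\underbrace{2\ \text{apples} + 2\ \text{apples} = 4\ \text{apples}}_{\text{claim about counting physical objects: falsifiable}}
\]

Read in the second way, the statement would be refuted if putting two pairs of apples together reliably yielded some other count; read in the first way, no observation bears on it at all.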
Political philosophy
In The Open Society and Its Enemies and The Poverty of Historicism, Popper developed a critique of historicism and a defence of the "Open Society". Popper considered historicism to be the theory that history develops inexorably and necessarily according to knowable general laws towards a determinate end. He argued that this view is the principal theoretical presupposition underpinning most forms of authoritarianism and totalitarianism. He argued that historicism is founded upon mistaken assumptions regarding the nature of scientific law and prediction. Since the growth of human knowledge is a causal factor in the evolution of human history, and since "no society can predict, scientifically, its own future states of knowledge",The Poverty of Historicism, p. 21 it follows, he argued, that there can be no predictive science of human history. For Popper, metaphysical and historical indeterminism go hand in hand.
In his early years Popper was impressed by Marxism, whether of Communists or socialists. An event that happened in 1919 had a profound effect on him: during a riot instigated by the Communists, the police shot several unarmed people, including some of Popper's friends, when they tried to free party comrades from prison. The riot had, in fact, been part of a plan by which leaders of the Communist party with connections to Béla Kun tried to take power by a coup; Popper did not know about this at the time. However, he knew that the riot's instigators were swayed by the Marxist doctrine that the class struggle would cost far more lives than the inevitable revolution would if it were brought about as quickly as possible, and so they had no scruples about putting the rioters' lives at risk in pursuit of their goal of becoming the future leaders of the working class. This was the start of his later criticism of historicism. Popper began to reject Marxist historicism, which he associated with questionable means, and later socialism, which he associated with placing equality before freedom (to the possible disadvantage of equality). (Popper, Karl R. ([1976] 2002). Unended Quest: An Intellectual Autobiography, pp. 32–37.)
In 1947, Popper co-founded the Mont Pelerin Society, with Friedrich Hayek, Milton Friedman, Ludwig von Mises and others, although he did not fully agree with the think tank's charter and ideology. Specifically, he unsuccessfully recommended that socialists should be invited to participate, and that emphasis should be put on a hierarchy of humanitarian values rather than advocacy of a free market as envisioned by classical liberalism."Popper argued that some socialists ought to be invited to participate", "Well I do believe that in a way one has to have a free market, but I also believe that to make a godhead out of the principle of the free market is nonsense ... [the free market] is not of a fundamental importance. Humanitarianism, that is of fundamental importance" Daniel Stedman Jones: Masters of the Universe: Hayek, Friedman, and the Birth of Neoliberal Politics, pp. 40 ff.
The paradox of tolerance
Although Popper was an advocate of toleration, he said that intolerance should not be tolerated, for if tolerance allowed intolerance to succeed completely, tolerance would be threatened. In The Open Society and Its Enemies, he argued:
Metaphysics
Truth
As early as 1934, Popper wrote of the search for truth as "one of the strongest motives for scientific discovery." Still, he describes in Objective Knowledge (1972) early concerns about the much-criticised notion of truth as correspondence. Then came the semantic theory of truth formulated by the logician Alfred Tarski and published in 1933. Popper writes of learning in 1935 of the consequences of Tarski's theory, to his intense joy. The theory met critical objections to truth as correspondence and thereby rehabilitated it. The theory also seemed, in Popper's eyes, to support metaphysical realism and the regulative idea of a search for truth.
According to this theory, the conditions for the truth of a sentence as well as the sentences themselves are part of a metalanguage. So, for example, the sentence "Snow is white" is true if and only if snow is white. Although many philosophers have interpreted, and continue to interpret, Tarski's theory as a deflationary theory, Popper refers to it as a theory in which "is true" is replaced with "corresponds to the facts". He bases this interpretation on the fact that examples such as the one described above refer to two things: assertions and the facts to which they refer. He identifies Tarski's formulation of the truth conditions of sentences as the introduction of a "metalinguistic predicate" and distinguishes the following cases:
"John called" is true.
"It is true that John called."
The first case belongs to the metalanguage whereas the second is more likely to belong to the object language. Hence, "it is true that" possesses the logical status of a redundancy. "Is true", on the other hand, is a predicate necessary for making general observations such as "John was telling the truth about Phillip."
Upon this basis, along with that of the logical content of assertions (where logical content is inversely proportional to probability), Popper went on to develop his important notion of verisimilitude or "truthlikeness". The intuitive idea behind verisimilitude is that the assertions or hypotheses of scientific theories can be objectively measured with respect to the amount of truth and falsity that they imply. And, in this way, one theory can be evaluated as more or less true than another on a quantitative basis which, Popper emphasises forcefully, has nothing to do with "subjective probabilities" or other merely "epistemic" considerations.
The simplest mathematical formulation that Popper gives of this concept can be found in the tenth chapter of Conjectures and Refutations. Here he defines it as:

Vs(a) = CTv(a) − CTf(a)

where Vs(a) is the verisimilitude of a, CTv(a) is a measure of the content of the truth of a, and CTf(a) is a measure of the content of the falsity of a.
Popper's original attempt to define not just verisimilitude, but an actual measure of it, turned out to be inadequate. However, it inspired a wealth of new attempts.
Cosmological pluralism
Knowledge, for Popper, was objective, both in the sense that it is objectively true (or truthlike), and also in the sense that knowledge has an ontological status (i.e., knowledge as object) independent of the knowing subject (Objective Knowledge: An Evolutionary Approach, 1972). He proposed three worlds:Karl Popper, Three Worlds, The Tanner Lecture on Human Values, The University of Michigan, 1978. World One, being the physical world, or physical states; World Two, being the world of mind, or mental states, ideas, and perceptions; and World Three, being the body of human knowledge expressed in its manifold forms, or the products of the second world made manifest in the materials of the first world (i.e., books, papers, paintings, symphonies, and all the products of the human mind). World Three, he argued, was the product of individual human beings in exactly the same sense that an animal path is the product of individual animals, and that, as such, has an existence and evolution independent of any individual knowing subjects. The influence of World Three, in his view, on the individual human mind (World Two) is at least as strong as the influence of World One. In other words, the knowledge held by a given individual mind owes at least as much to the total accumulated wealth of human knowledge, made manifest, as to the world of direct experience. As such, the growth of human knowledge could be said to be a function of the independent evolution of World Three. Many contemporary philosophers, such as Daniel Dennett, have not embraced Popper's Three World conjecture, due mostly, it seems, to its resemblance to mind-body dualism.
Origin and evolution of life
The creation–evolution controversy in the United States raises the issue of whether creationistic ideas may be legitimately called science and whether evolution itself may be legitimately called science. In the debate, both sides and even courts in their decisions have frequently invoked Popper's criterion of falsifiability (see Daubert standard). In this context, passages written by Popper are frequently quoted in which he speaks about such issues himself. For example, he famously stated "Darwinism is not a testable scientific theory, but a metaphysical research program—a possible framework for testable scientific theories." He continued:
He also noted that theism, presented as explaining adaptation, "was worse than an open admission of failure, for it created the impression that an ultimate explanation had been reached".
Popper later said:
In 1974, regarding DNA and the origin of life he said:
He explained that the difficulty of testing had led some people to describe natural selection as a tautology, and that he too had in the past described the theory as "almost tautological", and had tried to explain how the theory could be untestable (as is a tautology) and yet of great scientific interest:
Popper summarized his new view as follows:
These frequently quoted passages are only a very small part of what Popper wrote on the issue of evolution, however, and give the wrong impression that he mainly discussed questions of its falsifiability. Popper did not invent this criterion in order to legislate the justifiable use of words like "science". In fact, Popper says at the beginning of The Logic of Scientific Discovery that it is not his aim to define science, and that science can in fact be defined quite arbitrarily.
Popper had his own sophisticated views on evolutionNiemann, Hans-Joachim: Karl Popper and the Two New Secrets of Life: Including Karl Popper's Medawar Lecture 1986 and Three Related Texts. Tubingen: Mohr Siebeck, 2014. ISBN 978-3161532078. that go much beyond what the frequently-quoted passages say.For a secondary source see H. Keuth: The philosophy of Karl Popper, section 15.3 "World 3 and emergent evolution". See also John Watkins: Popper and Darwinism. The Power of Argumentation (Ed Enrique Suárez Iñiguez). Primary sources are, especially Objective Knowledge: An evolutionary approach, section "Evolution and the Tree of Knowledge", and Evolutionary epistemology (Eds. G. Radnitzsky, W.W. Bartley), section "Natural selection and the emergence of mind", In search of a better world, section "Knowledge and the shaping of rationality: the search for a better world", p. 16, Knowledge and the Body-Mind Problem: In Defence of Interaction, section "World 3 and emergent evolution", A world of propensities, section "Towards an evolutionary theory of knowledge", The Self and Its Brain: An Argument for Interactionism (with John C. Eccles), sections "The biological approach to human knowledge and intelligence" and "The biological function of conscious and intelligent activity". In effect, Popper agreed with some of the points of both creationists and naturalists, but also disagreed with both views on crucial aspects. Popper understood the universe as a creative entity that invents new things, including life, but without the necessity of something like a god, especially not one who is pulling strings from behind the curtain. He said that evolution must, as the creationists say, work in a goal-directed wayD. W. Miller: Karl Popper, a scientific memoir. Out of Error, p. 33 but disagreed with their view that it must necessarily be the hand of god that imposes these goals onto the stage of life.
Instead, he formulated the spearhead model of evolution, a version of genetic pluralism. According to this model, living organisms themselves have goals, and act according to these goals, each guided by a central control. In its most sophisticated form, this is the brain of humans, but controls also exist in much less sophisticated ways for species of lower complexity, such as the amoeba. This control organ plays a special role in evolution—it is the "spearhead of evolution". The goals bring the purpose into the world. Mutations in the genes that determine the structure of the control may then cause drastic changes in behaviour, preferences and goals, without having an impact on the organism's phenotype. Popper postulates that such purely behavioural changes are less likely to be lethal for the organism compared to drastic changes of the phenotype.K. Popper: Objective Knowledge, section "Evolution and the Tree of Knowledge", subsection "Addendum. The Hopeful Behavioural Monster" (p. 281)
Popper contrasts his views with the notion of the "hopeful monster" that has large phenotype mutations and calls it the "hopeful behavioural monster". After behaviour has changed radically, small but quick changes of the phenotype follow to make the organism fitter to its changed goals. This way it looks as if the phenotype were changing guided by some invisible hand, while it is merely natural selection working in combination with the new behaviour. For example, according to this hypothesis, the eating habits of the giraffe must have changed before its elongated neck evolved. Popper contrasted this view as "evolution from within" or "active Darwinism" (the organism actively trying to discover new ways of life and being on a quest for conquering new ecological niches),Michel Ter Hark: Popper, Otto Selz and the Rise Of Evolutionary Epistemology, pp. 184 ff with the naturalistic "evolution from without" (which has the picture of a hostile environment only trying to kill the mostly passive organism, or perhaps segregate some of its groups).
Popper was a key figure encouraging patent lawyer Günter Wächtershäuser to publish his Iron–sulfur world theory on abiogenesis and his criticism of "soup" theory.
About the creation-evolution controversy, Popper wrote that he considered it "a somewhat sensational clash between a brilliant scientific hypothesis concerning the history of the various species of animals and plants on earth, and an older metaphysical theory which, incidentally, happened to be part of an established religious belief" with a footnote to the effect that "[he] agree[s] with Professor C.E. Raven when, in his Science, Religion, and the Future, 1943, he calls this conflict "a storm in a Victorian tea-cup"; though the force of this remark is perhaps a little impaired by the attention he pays to the vapours still emerging from the cup—to the Great Systems of Evolutionist Philosophy, produced by Bergson, Whitehead, Smuts, and others."Karl R. Popper, The Poverty of Historicism, p. 97
Free will
Popper and John Eccles speculated on the problem of free will for many years, generally agreeing on an interactionist dualist theory of mind. However, although Popper was a body-mind dualist, he did not think that the mind is a substance separate from the body: he thought that mental or psychological properties or aspects of people are distinct from physical ones.Popper, K. R. "Of Clouds and Clocks," in his Objective Knowledge, corrected edition, pp. 206–55, Oxford, Oxford University Press (1973), p. 231 footnote 43, & p. 252; also Popper, K. R. "Natural Selection and the Emergence of Mind", 1977.
When he gave the second Arthur Holly Compton Memorial Lecture in 1965, Popper revisited the idea of quantum indeterminacy as a source of human freedom. Eccles had suggested that "critically poised neurons" might be influenced by the mind to assist in a decision. Popper criticised Compton's idea of amplified quantum events affecting the decision. He wrote:
Popper called not for something between chance and necessity but for a combination of randomness and control to explain freedom, though not yet explicitly in two stages with random chance before the controlled decision, saying, "freedom is not just chance but, rather, the result of a subtle interplay between something almost random or haphazard, and something like a restrictive or selective control."ibid, p. 232
Then in his 1977 book with John Eccles, The Self and its Brain, Popper finally formulates the two-stage model in a temporal sequence. And he compares free will to Darwinian evolution and natural selection:
Religion and God
In an interviewEdward Zerin: Karl Popper On God: The Lost Interview. Skeptic 6:2 (1998) that Popper gave in 1969 with the condition that it should be kept secret until after his death, he summarised his position on God as follows: "I don't know whether God exists or not. ... Some forms of atheism are arrogant and ignorant and should be rejected, but agnosticism—to admit that we don't know and to search—is all right. ... When I look at what I call the gift of life, I feel a gratitude which is in tune with some religious ideas of God. However, the moment I even speak of it, I am embarrassed that I may do something wrong to God in talking about God." He objected to organised religion, saying "it tends to use the name of God in vain", noting the danger of fanaticism because of religious conflicts: "The whole thing goes back to myths which, though they may have a kernel of truth, are untrue. Why then should the Jewish myth be true and the Indian and Egyptian myths not be true?" In a letter unrelated to the interview, he stressed his tolerant attitude: "Although I am not for religion, I do think that we should show respect for anybody who believes honestly."Popper archives fasc. 297.11See also Karl Popper: On freedom. All life is problem solving (1999), chapter 7, pp. 81 ff
Influence
[Image: Sir Karl Popper in 1990]
Popper played a vital role in establishing the philosophy of science as a vigorous, autonomous discipline within philosophy, through his own prolific and influential works and through his influence on his contemporaries and students. In 1946 Popper founded the Department of Philosophy, Logic and Scientific Method at the London School of Economics, where he lectured and influenced both Imre Lakatos and Paul Feyerabend, two of the foremost philosophers of science of the next generation. (Lakatos significantly modified Popper's position (site on Lakatos/Popper by John Kadvany, PhD) and Feyerabend repudiated it entirely, but the work of both is deeply influenced by Popper and engaged with many of the problems that Popper set.)
While there is some dispute as to the matter of influence, Popper had a long-standing and close friendship with economist Friedrich Hayek, who was also brought to the London School of Economics from Vienna. Each found support and similarities in the other's work, citing each other often, though not without qualification. In a letter to Hayek in 1944, Popper stated, "I think I have learnt more from you than from any other living thinker, except perhaps Alfred Tarski."Hacohen, 2000 Popper dedicated his Conjectures and Refutations to Hayek. For his part, Hayek dedicated a collection of papers, Studies in Philosophy, Politics, and Economics, to Popper, and in 1982 said, "...ever since his Logik der Forschung first came out in 1934, I have been a complete adherent to his general theory of methodology."Weimer and Palermo, 1982
Popper also had long and mutually influential friendships with art historian Ernst Gombrich, biologist Peter Medawar, and neuroscientist John Carew Eccles. The German jurist Reinhold Zippelius uses Popper's method of "trial and error" in his legal philosophy.Reinhold Zippelius, Die experimentierende Methode im Recht, 1991 (ISBN 3-515-05901-6), and Rechtsphilosophie, 6th ed., 2011 (ISBN 978-3-406-61191-9)
Popper's influence, both through his work in philosophy of science and through his political philosophy, has also extended beyond the academy. One of Popper's students at the London School of Economics was George Soros, who later became a billionaire investor, and among whose philanthropic foundations is the Open Society Institute, a think-tank named in honour of Popper's The Open Society and Its Enemies.
Criticism
Most criticisms of Popper's philosophy are of the falsification, or error elimination, element in his account of problem solving. Popper presents falsifiability as both an ideal and as an important principle in a practical method of effective human problem solving; as such, the current conclusions of science are stronger than pseudo-sciences or non-sciences, insofar as they have survived this particularly vigorous selection method.
He does not argue that any such conclusions are therefore true, or that this describes the actual methods of any particular scientist. Rather, it is recommended as an essential principle of methodology that, if enacted by a system or community, will lead to slow but steady progress of a sort (relative to how well the system or community enacts the method). It has been suggested that Popper's ideas are often mistaken for a hard logical account of truth because of the historical co-incidence of their appearing at the same time as logical positivism, the followers of which mistook his aims for their own.Bryan Magee 1973: Popper (Modern Masters series)
The Quine–Duhem thesis argues that it is impossible to test a single hypothesis on its own, since each one comes as part of an environment of theories. Thus we can only say that the whole package of relevant theories has been collectively falsified, but cannot conclusively say which element of the package must be replaced. An example of this is given by the discovery of the planet Neptune: when the motion of Uranus was found not to match the predictions of Newton's laws, the theory "There are seven planets in the solar system" was rejected, and not Newton's laws themselves. Popper discussed this critique of naïve falsificationism in Chapters 3 and 4 of The Logic of Scientific Discovery. For Popper, theories are accepted or rejected via a sort of selection process. Theories that say more about the way things appear are to be preferred over those that do not; the more generally applicable a theory is, the greater its value. Thus Newton's laws, with their wide general application, are to be preferred over the much more specific "the solar system has seven planets".
Thomas Kuhn, in his influential book The Structure of Scientific Revolutions, argued that scientists work in a series of paradigms, and that falsificationist methodologies would make science impossible:
Popper's student Imre Lakatos attempted to reconcile Kuhn's work with falsificationism by arguing that science progresses by the falsification of research programmes rather than the more specific universal statements of naïve falsificationism. Another of Popper's students, Paul Feyerabend, ultimately rejected any prescriptive methodology and argued that the only universal method characterising scientific progress was "anything goes".
Popper claimed to have recognised already in the 1934 version of his Logic of Discovery a fact later stressed by Kuhn, "that scientists necessarily develop their ideas within a definite theoretical framework", and to that extent to have anticipated Kuhn's central point about "normal science".K R Popper (1970), "Normal Science and its Dangers", pp. 51–58 in I Lakatos & A Musgrave (eds.) (1970), at p. 51. (But Popper criticised what he saw as Kuhn's relativism.K R Popper (1970), in I Lakatos & A Musgrave (eds.) (1970), at p. 56.) Also, in his collection Conjectures and Refutations: The Growth of Scientific Knowledge (Harper & Row, 1963), Popper writes, "Science must begin with myths, and with the criticism of myths; neither with the collection of observations, nor with the invention of experiments, but with the critical discussion of myths, and of magical techniques and practices. The scientific tradition is distinguished from the pre-scientific tradition in having two layers. Like the latter, it passes on its theories; but it also passes on a critical attitude towards them. The theories are passed on, not as dogmas, but rather with the challenge to discuss them and improve upon them."
Another objection is that it is not always possible to demonstrate falsehood definitively, especially if one is using statistical criteria to evaluate a null hypothesis. More generally it is not always clear, if evidence contradicts a hypothesis, that this is a sign of flaws in the hypothesis rather than of flaws in the evidence. However, this is a misunderstanding of what Popper's philosophy of science sets out to do. Rather than offering a set of instructions that merely need to be followed diligently to achieve science, Popper makes it clear in The Logic of Scientific Discovery that his belief is that the resolution of conflicts between hypotheses and observations can only be a matter of the collective judgment of scientists, in each individual case.Popper, Karl, (1934) Logik der Forschung, Springer. Vienna. Amplified English edition, Popper (1959), ISBN 0-415-27844-9
In a book called Science Versus Crime, Houck writes (Houck, Max M., Science Versus Crime, Infobase Publishing, 2009, p. 65) that Popper's falsificationism can be questioned logically: it is not clear how Popper would deal with a statement like "for every metal, there is a temperature at which it will melt." The hypothesis cannot be falsified by any possible observation, for there will always be a higher temperature than the ones tested at which the metal may in fact melt, yet it seems to be a valid scientific hypothesis. Examples of this kind were pointed out by Carl Gustav Hempel. Hempel came to acknowledge that logical positivism's verificationism was untenable, but argued that falsificationism was equally untenable on logical grounds alone. The simplest response to this is that, because Popper describes how theories attain, maintain and lose scientific status, individual consequences of currently accepted scientific theories are scientific in the sense of being part of tentative scientific knowledge, and Hempel's examples fall under this category. For instance, atomic theory implies that all metals melt at some temperature.
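The logical point behind this example can be made explicit. The formalisation below is an illustration, not Hempel's or Popper's own notation, with Melts(m, t) read as "metal m melts at temperature t":

\[
\forall m\, \exists t\ \mathrm{Melts}(m, t)
\]

Any finite set of negative observations of the form \(\neg\mathrm{Melts}(m_0, t_0)\) is consistent with the statement, since the existential quantifier leaves open that \(m_0\) melts at some higher, untested temperature; mixed universal–existential statements of this form therefore escape strict falsification by observation.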
An early adversary of Popper's critical rationalism, Karl-Otto Apel attempted a comprehensive refutation of Popper's philosophy. In Transformation der Philosophie (1973), Apel charged Popper with being guilty of, amongst other things, a pragmatic contradiction.See: "Apel, Karl-Otto," La philosophie de A a Z, by Elizabeth Clement, Chantal Demonque, Laurence Hansen-Love, and Pierre Kahn, Paris, 1994, Hatier, 19–20. See Also: Towards a Transformation of Philosophy (Marquette Studies in Philosophy, No 20), by Karl-Otto Apel, trans., Glyn Adey and David Fisby, Milwaukee, 1998, Marquette University Press.
Charles Taylor accuses Popper of exploiting his worldwide fame as an epistemologist to diminish the importance of philosophers of the 20th century continental tradition. According to Taylor, Popper's criticisms are completely baseless, but they are received with an attention and respect that Popper's "intrinsic worth hardly merits".Taylor, Charles, "Overcoming Epistemology", in Philosophical Arguments, Harvard University Press, 1995, ISBN 0-674-66477-9
In 2004, philosopher and psychologist Michel ter Hark (Groningen, The Netherlands) published a book, Popper, Otto Selz and the Rise of Evolutionary Epistemology, in which he claimed that Popper took some of his ideas from his tutor, the German psychologist Otto Selz. Selz never published his ideas, partly because of the rise of Nazism, which forced him to quit his work in 1933 and led to a prohibition on referring to his work. Popper, as a historian of ideas, and his scholarship are criticised in some academic quarters for his rejection of Plato, Hegel and Marx. (See: "Popper is committing a serious historical error in attributing the organic theory of the state to Plato and accusing him of all the fallacies of post-Hegelian and Marxist historicism—the theory that history is controlled by the inexorable laws governing the behavior of superindividual social entities of which human beings and their free choices are merely subordinate manifestations." Plato's Modern Enemies and the Theory of Natural Law, by John Wild, Chicago, 1964, The University of Chicago Press, 23. See also: "In spite of the high rating one must accord his initial intention of fairness, his hatred for the enemies of the 'open society,' his zeal to destroy whatever seems to him destructive of the welfare of mankind, has led him into the extensive use of what may be called terminological counterpropaganda ..." and "With a few exceptions in Popper's favor, however, it is noticeable that reviewers possessed of special competence in particular fields—and here Lindsay is again to be included—have objected to Popper's conclusions in those very fields ..." and "Social scientists and social philosophers have deplored his radical denial of historical causation, together with his espousal of Hayek's systematic distrust of larger programs of social reform; historical students of philosophy have protested his violent polemical handling of Plato, Aristotle, and particularly Hegel; ethicists have found contradictions in the ethical theory ('critical dualism') upon which his polemic is largely based." In Defense of Plato, by Ronald B. Levinson, New York, 1970, Russell and Russell, 20.)
According to John N. Gray, Popper held that "a theory is scientific only in so far as it is falsifiable, and should be given up as soon as it is falsified" (John Gray, Straw Dogs, p. 22, Granta Books, London, 2002). By applying Popper's account of scientific method, Gray's Straw Dogs states that this would have "killed the theories of Darwin and Einstein at birth". When they were first advanced, Gray claims, each of them was "at odds with some available evidence; only later did evidence become available that gave them crucial support" (John Gray, Straw Dogs, Granta Books, London, 2002). Against this, Gray seeks to establish the irrationalist thesis that "the progress of science comes from acting against reason" (John Gray, Straw Dogs, p. 22, Granta Books, London, 2002).
Gray does not, however, give any indication of what available evidence these theories were at odds with, and his appeal to "crucial support" illustrates the very inductivist approach to science that Popper sought to show was logically illegitimate. For, according to Popper, Einstein's theory was at least equally as well corroborated as Newton's upon its initial conception; they both equally well accounted for all the hitherto available evidence. Moreover, since Einstein also explained the empirical refutations of Newton's theory, general relativity was immediately deemed suitable for tentative acceptance on the Popperian account.Karl Popper, "Replies to my Critics," The Philosophy of Karl Popper, Paul A. Schilpp, ed., v. II. Open Court, London, 1974. Indeed, Popper wrote, several decades before Gray's criticism, in reply to a critical essay by Imre Lakatos:
Bibliography
The Two Fundamental Problems of the Theory of Knowledge, 1930–33 (as a typescript circulating as Die beiden Grundprobleme der Erkenntnistheorie; as a German book 1979, as English translation 2008), ISBN 0-415-39431-7
The Logic of Scientific Discovery, 1934 (as Logik der Forschung, English translation 1959), ISBN 0-415-27844-9
The Poverty of Historicism, 1936 (private reading at a meeting in Brussels; 1944/45 as a series of journal articles in Econometrica; 1957 as a book), ISBN 0-415-06569-0
The Open Society and Its Enemies, 1945 Vol 1 ISBN 0-415-29063-5, Vol 2 ISBN 0-415-29063-5
Quantum Theory and the Schism in Physics, 1956/57 (as privately circulated galley proofs; published as a book 1982), ISBN 0-415-09112-8
The Open Universe: An Argument for Indeterminism, 1956/57 (as privately circulated galley proofs; published as a book 1982), ISBN 0-415-07865-2
Realism and the Aim of Science, 1956/57 (as privately circulated galley proofs; published as a book 1983), ISBN 0-09-151450-9
Conjectures and Refutations: The Growth of Scientific Knowledge, 1963, ISBN 0-415-04318-2
Objective Knowledge: An Evolutionary Approach, 1972, Rev. ed., 1979, ISBN 0-19-875024-2
Unended Quest: An Intellectual Autobiography, 2002 [1976]. ISBN 0-415-28589-5 (ISBN 0-415-28590-9)
The Self and Its Brain: An Argument for Interactionism (with Sir John C. Eccles), 1977, ISBN 0-415-05898-8
In Search of a Better World, 1984, ISBN 0-415-13548-6
Die Zukunft ist offen (The Future is Open) (with Konrad Lorenz), 1985 (in German), ISBN 3-492-00640-X
A World of Propensities, 1990, ISBN 1-85506-000-0
The Lesson of this Century, (Interviewer: Giancarlo Bosetti, English translation: Patrick Camiller), 1992, ISBN 0-415-12958-3
All Life is Problem Solving, 1994, ISBN 0-415-24992-9
The Myth of the Framework: In Defence of Science and Rationality (edited by Mark Amadeus Notturno) 1994. ISBN 0-415-13555-9
Knowledge and the Body-Mind Problem: In Defence of Interaction (edited by Mark Amadeus Notturno), 1994, ISBN 0-415-11504-3
The World of Parmenides, Essays on the Presocratic Enlightenment, 1998, (Edited by Arne F. Petersen with the assistance of Jørgen Mejer), ISBN 0-415-17301-9
After The Open Society, 2008. (Edited by Jeremy Shearmur and Piers Norris Turner, this volume contains a large number of Popper's previously unpublished or uncollected writings on political and social themes.) ISBN 978-0-415-30908-0
Frühe Schriften, 2006 (Edited by Troels Eggers Hansen, includes Popper's writings and publications from before the Logic, including his previously unpublished thesis, dissertation and journal articles published that relate to the Wiener Schulreform) ISBN 978-3-16-147632-7
See also
Calculus of predispositions
Contributions to liberal theory
Critique of psychoanalysis
Evolutionary epistemology
Liberalism in Austria
Popper legend
Positivism dispute
Predispositioning theory
Reflexivity (social theory)
References
Further reading
Lube, Manfred. Karl R. Popper. Bibliographie 1925–2004. Wissenschaftstheorie, Sozialphilosophie, Logik, Wahrscheinlichkeitstheorie, Naturwissenschaften. Frankfurt/Main etc.: Peter Lang, 2005. 576 pp. (Schriftenreihe der Karl Popper Foundation Klagenfurt.3.) (Current edition)
Gattei, Stefano. Karl Popper's Philosophy of Science. 2009.
Miller, David. Critical Rationalism: A Restatement and Defence. 1994.
David Miller (Ed.). Popper Selections.
Watkins, John W. N. Science and Scepticism. Preface & Contents. Princeton 1984 (Princeton University Press). ISBN 978-0-09-158010-0
Jarvie, Ian Charles, Karl Milford, David W. Miller, ed. (2006). Karl Popper: A Centenary Assessment, Ashgate.
Volume I: Life and Times, and Values in a World of Facts. Description & Contents.
Volume II: Metaphysics and Epistemology Description & Contents.
Volume III: Science. Description & Contents.
Bailey, Richard, Education in the Open Society: Karl Popper and Schooling. Aldershot, UK: Ashgate 2000. The only book-length examination of Popper's relevance to education.
Bartley, William Warren III. Unfathomed Knowledge, Unmeasured Wealth. La Salle, IL: Open Court Press 1990. A look at Popper and his influence by one of his students.
Berkson, William K., and Wettersten, John. Learning from Error: Karl Popper's Psychology of Learning. La Salle, IL: Open Court 1984
Cornforth, Maurice. (1977): The open philosophy and the open society, 2., (rev.) ed., Lawrence & Wishart, London. ISBN 0-85315-384-1. The fundamental critique from the Marxist standpoint.
Edmonds, D., Eidinow, J. Wittgenstein's Poker. New York: Ecco 2001. A review of the origin of the conflict between Popper and Ludwig Wittgenstein, focused on events leading up to their volatile first encounter at a 1946 Cambridge meeting.
Feyerabend, Paul Against Method. London: New Left Books, 1975. A polemical, iconoclastic book by a former colleague of Popper's. Vigorously critical of Popper's rationalist view of science.
Hacohen, M. Karl Popper: The Formative Years, 1902–1945. Cambridge: Cambridge University Press, 2000.
Hickey, J. Thomas. History of the Twentieth-Century Philosophy of Science Book V, Karl Popper And Falsificationist Criticism. www.philsci.com . 1995
Kadvany, John Imre Lakatos and the Guises of Reason. Durham and London: Duke University Press, 2001. ISBN 0-8223-2659-0. Explains how Imre Lakatos developed Popper's philosophy into a historicist and critical theory of scientific method.
Keuth, Herbert. The Philosophy of Karl Popper. Cambridge: Cambridge University Press, 2004. An accurate scholarly overview of Popper's philosophy, ideal for students.
Kuhn, Thomas S. The Structure of Scientific Revolutions. Chicago: University of Chicago Press, 1962. Central to contemporary philosophy of science is the debate between the followers of Kuhn and Popper on the nature of scientific enquiry. This is the book in which Kuhn's views received their classical statement.
Lakatos, I & Musgrave, A (eds.) (1970), Criticism and the Growth of Knowledge, Cambridge (Cambridge University Press). ISBN 0-521-07826-1
Levinson, Paul, ed. In Pursuit of Truth: Essays on the Philosophy of Karl Popper on the Occasion of his 80th Birthday. Atlantic Highlands, NJ: Humanities Press, 1982. ISBN 0-391-02609-7 A collection of essays on Popper's thought and legacy by a wide range of his followers. With forewords by Isaac Asimov and Helmut Schmidt. Includes an interview with Sir Ernst Gombrich.
Magee, Bryan. Popper. London: Fontana, 1977. An elegant introductory text. Very readable, albeit rather uncritical of its subject, by a former Member of Parliament.
Magee, Bryan. Confessions of a Philosopher, Weidenfeld and Nicolson, 1997. Magee's philosophical autobiography, with a chapter on his relations with Popper. More critical of Popper than in the previous reference.
Munz, Peter. Beyond Wittgenstein's Poker: New Light on Popper and Wittgenstein. Aldershot, Hampshire, UK: Ashgate, 2004. ISBN 0-7546-4016-7. Written by the only living student of both Wittgenstein and Popper, an eyewitness to the famous "poker" incident described above (Edmonds & Eidinow). Attempts to synthesize and reconcile the differences between these two philosophers.
Niemann, Hans-Joachim. Lexikon des Kritischen Rationalismus, (Encyclopaedia of Critical Rationalism), Tübingen (Mohr Siebeck) 2004, ISBN 3-16-148395-2. More than a thousand headwords about critical rationalism, the most important arguments of K.R. Popper and H. Albert, quotations of the original wording. Edition for students in 2006, ISBN 3-16-149158-0.
Notturno, Mark Amadeus. "Objectivity, Rationality, and the Third Realm: Justification and the Grounds of Psychologism". Boston: Martinus Nijhoff, 1985.
Notturno, Mark Amadeus. On Popper. Wadsworth Philosophers Series. 2003. A very comprehensive book on Popper's philosophy by an accomplished Popperian.
Notturno, Mark Amadeus. "Science and the Open Society". New York: CEU Press, 2000.
O'Hear, Anthony. Karl Popper. London: Routledge, 1980. A critical account of Popper's thought, viewed from the perspective of contemporary analytic philosophy.
Radnitzky, Gerard, Bartley, W. W., III eds. Evolutionary Epistemology, Rationality, and the Sociology of Knowledge. LaSalle, IL: Open Court Press 1987. ISBN 0-8126-9039-7. A strong collection of essays by Popper, Campbell, Munz, Flew, et al., on Popper's epistemology and critical rationalism. Includes a particularly vigorous answer to Rorty's criticisms.
Richmond, Sheldon. Aesthetic Criteria: Gombrich and the Philosophies of Science of Popper and Polanyi. Rodopi, Amsterdam/Atlanta, 1994, 152 pp. ISBN 90-5183-618-X.
Rowbottom, Darrell P. Popper's Critical Rationalism: A Philosophical Investigation. London: Routledge, 2010. A research monograph on Popper's philosophy of science and epistemology. It critiques and develops critical rationalism in light of more recent advances in mainstream philosophy.
Schilpp, Paul A., ed. The Philosophy of Karl Popper. Description and contents. Chicago, IL: Open Court Press, 1974. One of the better contributions to the Library of Living Philosophers series. Contains Popper's intellectual autobiography (v. I, pp. 2–184, also as a 1976 book), a comprehensive range of critical essays, and Popper's responses to them. ISBN 0-87548-141-8 (vol.I). ISBN 0-87548-142-6 (Vol II)
Schroeder-Heister, P. "Popper, Karl Raimund (1902–94)," International Encyclopedia of the Social & Behavioral Sciences, 2001, pp. 11727–11733. Abstract.
Shearmur, Jeremy. The Political Thought of Karl Popper. London and New York: Routledge, 1996. Study of Popper's political thought by a former assistant of Popper's. Makes use of archive sources and studies the development of Popper's political thought and its inter-connections with his epistemology.
Stokes, G. Popper: Philosophy, Politics and Scientific Method. Cambridge: Polity Press, 1998. A very comprehensive, balanced study, which focuses largely on the social and political side of Popper's thought.
Stove, D.C., Popper and After: Four Modern Irrationalists. Oxford: Pergamon. 1982. A vigorous attack, especially on Popper's restricting himself to deductive logic.
Tausch, Arno. Towards New Maps of Global Human Values, Based on World Values Survey (6) Data (March 31, 2015). Available at SSRN:
Thornton, Stephen. "Karl Popper," Stanford Encyclopedia of Philosophy, 2006.
Weimer, W., Palermo, D., eds. Cognition and the Symbolic Processes. Hillsdale, NJ: Lawrence Erlbaum Associates. 1982. See Hayek's essay, "The Sensory Order after 25 Years", and "Discussion".
Zippelius, Reinhold, Die experimentierende Methode im Recht, Akademie der Wissenschaften Mainz. – Stuttgart: Franz Steiner, 1991, ISBN 3-515-05901-6
External links
Karl Popper on Stanford Encyclopedia of Philosophy
Popper, K. R. "Natural Selection and the Emergence of Mind", 1977.
The Karl Popper Web
Influence on Friesian Philosophy
Sir Karl R. Popper in Prague, May 1994
Synopsis and background of The poverty of historicism
"A Skeptical Look at Karl Popper" by Martin Gardner
"A Sceptical Look at 'A Skeptical Look at Karl Popper'" by J C Lester.
The Liberalism of Karl Popper by John N. Gray
Karl Popper on Information Philosopher
History of Twentieth-Century Philosophy of Science, BOOK V: Karl Popper Site offers free downloads by chapter available for public use.
Karl Popper at Liberal-international.org
A science and technology hypotheses database following Karl Popper's refutability principle
Category:1902 births
Category:1994 deaths
Category:20th-century philosophers
Category:20th-century Austrian writers
Category:20th-century British writers
Category:Austrian agnostics
Category:Austrian philosophers
Category:British agnostics
Category:British people of Austrian-Jewish descent
Category:British philosophers
Category:British political philosophers
Category:Cambridge University Moral Sciences Club
Category:Philosophers of mind
Category:Consciousness researchers and theorists
Category:Critical rationalists
Category:Philosophers of science
Category:Mont Pelerin Society members
Category:Fellows of the British Academy
Category:Fellows of the Royal Society (Statute 12)
Category:Fellows of Darwin College, Cambridge
Category:Academics of the London School of Economics
Category:University of Vienna alumni
Category:University of Canterbury faculty
Category:Writers from Vienna
Category:Naturalised citizens of the United Kingdom
Category:Members of the Order of the Companions of Honour
Category:Knights Bachelor
Category:Recipients of the Pour le Mérite (civil class)
Category:Grand Crosses with Star and Sash of the Order of Merit of the Federal Republic of Germany
Category:Kyoto laureates in Arts and Philosophy
Category:Recipients of the Grand Decoration for Services to the Republic of Austria
Category:Recipients of the Austrian Decoration for Science and Art
Category:Presidents of the Aristotelian Society | 16,623 | 2017-01 |
Comprehensive school | A comprehensive school is a secondary school or middle school that is a state school and does not select its intake on the basis of academic achievement or aptitude. This is in contrast to the selective school system, where admission is restricted on the basis of selection criteria. The term is commonly used in relation to England and Wales, where comprehensive schools were introduced on an experimental basis in the 1940s and became more widespread from 1965. About 90% of British secondary school pupils now attend comprehensive schools. They correspond broadly to the public high school in the United States and Canada and to the German Gesamtschule.
Comprehensive schools are primarily about providing an entitlement curriculum to all children, without selection whether due to financial considerations or attainment. A consequence of that is a wider ranging curriculum, including practical subjects such as design and technology and vocational learning, which were less common or non-existent in grammar schools. Providing post-16 education cost-effectively becomes more challenging for smaller comprehensive schools, because of the number of courses needed to cover a broader curriculum with comparatively fewer students. This is why schools have tended to get larger and also why many local authorities have organised secondary education into 11–16 schools, with the post-16 provision provided by sixth form colleges and further education colleges. Comprehensive schools do not select their intake on the basis of academic achievement or aptitude, but there are demographic reasons why the attainment profiles of different schools vary considerably. In addition, government initiatives such as the City Technology Colleges and Specialist schools programmes have made the comprehensive ideal less certain.
In these schools children could be selected on the basis of curriculum aptitude related to the school's specialism, even though the schools take quotas from each quartile of the attainment range to ensure they are not selective by attainment. A problem with this is whether the quotas should be taken from a normal distribution or from the specific distribution of attainment in the immediate catchment area. In the selective school system, which survives in several parts of the United Kingdom, admission is dependent on selection criteria, most commonly a cognitive test or tests. Although comprehensive schools were introduced to England and Wales in 1965, there are 164 selective grammar schools still in operation, though this is a small number compared to approximately 3,500 state secondary schools in England.
Most comprehensives are secondary schools for children between the ages of 11 to 16, but in a few areas there are comprehensive middle schools, and in some places the secondary level is divided into two, for students aged 11 to 14 and those aged 14 to 18, roughly corresponding to the US middle school (or junior high school) and high school, respectively. With the advent of key stages in the National Curriculum some local authorities reverted from the Middle School system to 11–16 and 11–18 schools so that the transition between schools corresponds to the end of one key stage and the start of another.
In principle, comprehensive schools were conceived as "neighbourhood" schools for all students in a specified catchment area.
Finland
In most countries, the term "comprehensive school" is used to refer to the education of children after primary school, but in Finland this English term is confusingly used to also refer to all of the earlier grades.
Finland has used comprehensive schools after 5th grade since the 1970s. (Schools up to 4th grade have always been comprehensive in most countries, including Finland.) Until the 1970s, some children stayed on in the primary school (kansakoulu) until the end of the sixth grade, whereas others left after the fourth grade to go to a school (oppikoulu) whose higher grades were called lukio (high school) and prepared them for tertiary education.
Since the 1970s, everyone has been required to complete the same nine grades of peruskoulu (literally "basic school"), from the ages of 7 to 16. This change was modeled on the British comprehensive school, which goes from the ages of 11 to 16. (In the USA and Germany, for example, comprehensive schools go to the age of 18 or 19.) From the ages of 16 to 19, students go either to vocational school or high school (lukio).
The division of the peruskoulu into a lower school (grades 1–6, ala-aste, alakoulu) and upper school (grades 7–9, yläaste, yläkoulu) has been discontinued.
Germany
thumb|The comprehensive school of Ludwigshafen-Oggersheim
Comprehensive schools that offer college preparatory classes
Germany has a comprehensive school known as the Gesamtschule.
While some German schools such as the Gymnasium and the Realschule have rather strict entrance requirements, the Gesamtschule has no such requirements. It offers college preparatory classes for stronger students, general education classes for average students, and remedial courses for weaker students. In most cases students attending a Gesamtschule may graduate with the Hauptschulabschluss, the Realschulabschluss or the Abitur, depending on how well they do in school.
The percentage of students attending a Gesamtschule varies by Bundesland. In the State of Brandenburg more than 50% of all students attended a Gesamtschule in 2007,Prof Dr. Valentin Merkelbach: "Gesamtschulen und Grundschulen sind das Beste in unserem Schulsystem" http://bildungsklick.de/a/55873/gesamtschulen-und-grundschulen-sind-das-beste-in-unserem-schulsystem/ while in the State of Bavaria less than 1% did.
Starting in 2010/2011, Hauptschulen were merged with Realschulen and Gesamtschulen to form a new type of comprehensive school in the German States of Berlin and Hamburg, called Stadtteilschule in Hamburg and Sekundarschule in Berlin (see: Education in Berlin, Education in Hamburg).
Comprehensive schools that do not offer college preparatory classes
The "Mittelschule" is a school in some States of Germany that offers regular classes and remedial classes but no college preparatory classes. In some States of Germany, the Hauptschule does not exist, and any student who has not been accepted by another school has to attend the Mittelschule. Students may be awarded the Hauptschulabschluss or the Mittlere Reife but not the Abitur.
Controversies
There is some controversy about comprehensive schools. As a rule of thumb those supporting The Left Party, the Social Democratic Party of Germany and Alliance '90/The Greens are in favour of comprehensive schools, while those supporting the Christian Democratic Union and the Free Democratic Party are opposed to them.
Grade inflation
thumb|Integrierte Gesamtschule Ludwigshafen-Gartenstadt
Comprehensive schools have been accused of grade inflation after a study revealed that Gymnasium senior students of average mathematical ability, who scored 100 points on a math test provided by the researchers, found themselves at the very bottom of their class and had an average grade of "Five", which means "Failed", while Gesamtschule senior students of average mathematical ability found themselves in the upper half of their class and had an average grade of "Three Plus".Manfred Tücke: "Psychologie in der Schule, Psychologie für die Schule: Eine themenzentrierte Einführung in die Psychologie für (zukünftige) Lehrer". 4. Auflage 2005. Münster: LIT Verlag; p. 127; the study was done in North Rhine-Westphalia, students were attending a Leistungskurs. When a central Abitur examination was established in the State of North Rhine-Westphalia, it was revealed that Gesamtschule students did worse than could be predicted from their grades or class rank. Barbara Sommer (Christian Democratic Union), Education Minister of North Rhine-Westphalia, commented: "Looking at the performance gap between comprehensives and the Gymnasium [at the Abitur central examination] [...] it is difficult to understand why the Social Democratic Party of Germany wants to do away with the Gymnasium. [...] The comprehensives do not help students achieve [...] I am sick and tired of the comprehensive schools blaming their problems on the social class origins of their students. What kind of attitude is this to blame their own students?" She also called the Abitur awarded by the Gymnasium the true Abitur and the Abitur awarded by the Gesamtschule an "Abitur light".Presseinformationen: Sprechzettel von Ministerin Barbara Sommer zur Pressekonferenz am 19.08.2008 "Schuljahresbeginn und Auswertung des Zentralabiturs 2008". Ministerium für Schule und Weiterbildung des Landes Nordrhein-Westfalen As a reaction, Sigrid Beer (Alliance '90/The Greens) stated that comprehensives were structurally discriminated against by the government, which favoured the Gymnasiums. She also said that many of the students awarded the Abitur by the comprehensives came from "underprivileged groups" and that sneering at their performance was a "piece of impudence".Stephan Lüke: "Gutes Abitur, schlechte Gesamtschule". WDR Wissen
Unfairness
According to several studies, Gesamtschulen might put bright working-class students at risk. These studies show that an achievement gap opens between working-class students attending a comprehensive and their middle-class peers. Also, working-class students attending a Gymnasium or a Realschule outperform students from similar backgrounds attending a Gesamtschule. However, it is not students attending a Gesamtschule, but students attending a Hauptschule, who perform the poorest.
PISA points earned by type of school and parental social class (Ehmke et al., 2004, In: PISA-Konsortium Deutschland (Hrsg.): PISA 2003 – Der Bildungsstand der Jugendlichen in Deutschland – Ergebnisse des 2. internationalen Vergleiches, Münster/New York: Waxmann, S. 244):
Type of school    social class "very low"    "low"    "high"    "very high"
Hauptschule       400                        429      436       450
Gesamtschule      438                        469      489       515
Realschule        482                        504      528       526
Gymnasium         578                        581      587       602
A study by a group of researchers led by Helmut Fend, an Austrian professor of pedagogy and proponent of comprehensive schools, revealed that comprehensive schools do not help working-class students in the long term. The researchers compared alumni of the tripartite system with alumni of comprehensive schools. While working-class alumni of comprehensive schools were awarded better school diplomas, at age 35 they held occupational positions similar to those of working-class alumni of the tripartite system and were just as unlikely to graduate from college. Fend concluded that social background outweighs any advantage of the type of school attended.Jochen Leffers: "Gesamtschule folgenlos - Bildung wird vererbt". 3 January 2008. Der Spiegel.
Gibraltar
Gibraltar opened its first comprehensive school in 1972. Between the ages of 12 and 16, pupils attend one of two comprehensive schools, which cater for girls and boys separately. Students may also continue into the sixth form to complete their A-levels.
Ireland
Comprehensive schools were introduced into Ireland in 1966 through an initiative of Patrick Hillery, Minister for Education, to give a broader range of education than the vocational school system, which was then the only system of schools completely controlled by the state. Until then, education in Ireland was largely dominated by religious bodies, with the voluntary secondary school system being a particular realisation of this. The comprehensive school system is still relatively small and to an extent has been superseded by the community school concept. The Irish term for a comprehensive school is 'scoil chuimsitheach'.
In Ireland comprehensive schools were an earlier model of state schools, introduced in the late 1960s and largely replaced by the secular community model of the 1970s. The comprehensive model generally incorporated older schools that were under Roman Catholic or Protestant ownership, and the various denominations still manage the school as patrons or trustees. The state owns the school property, which is vested in the trustees in perpetuity. The model was adopted to make state schools more acceptable to a largely conservative society of the time.
The introduction of the community school model in the 1970s controversially removed the denominational basis of the schools, but religious interests were invited to be represented on the Boards of Management. Community schools are divided into two models, the community school vested in the Minister for Education and the community college vested in the local Education and Training Board. Community colleges tended to be amalgamations of unviable local schools under the umbrella of a new community school model, but community schools have tended to be entirely new foundations.
Nigeria
Government Comprehensive Secondary School (GCSS) is a government-owned secondary school located in Port Harcourt, the capital of Rivers State, Nigeria.
Sweden
Sweden had used mixed-ability schools for some years before they were introduced into England and Wales, and was chosen as one of the models.
United Kingdom
England and Wales
The first comprehensives were set up after the Second World War. In 1946, for example, Walworth School was one of five 'experimental' comprehensive schools set up by the London County CouncilPeter Medway and Pat Kingwell, ‘A Curriculum in its place: English teaching in one school 1946-1963′, History of Education 39, no. 6 (November 2010): 749-765. Another early comprehensive school was Holyhead County School in Anglesey in 1949.Comps - here to stay?, Phil Tineline, September 2005, BBC, accessed 12 August 2008. Other early examples of comprehensive schools included Woodlands Boys School in Coventry (opened in 1954) and Tividale Comprehensive School in Tipton.
The largest expansion of comprehensive schools resulted from a policy decision taken in 1965 by Anthony Crosland, Secretary of State for Education in the 1964–1970 Labour government. The policy decision was implemented by Circular 10/65, an instruction to local education authorities to plan for conversion. Students sat the 11+ examination in their last year of primary education and were sent to a secondary modern, secondary technical or grammar school depending on their perceived ability. Secondary technical schools were never widely implemented and for 20 years there was a virtual bipartite system which saw fierce competition for the available grammar school places, which varied between 15% and 25% of total secondary places, depending on location.
In 1970 Margaret Thatcher became Secretary of State for Education in the new Conservative government. She ended the compulsion on local authorities to convert; however, many local authorities were so far down the path that it would have been prohibitively expensive to attempt to reverse the process, and more comprehensive schools were established under Thatcher than under any other education secretary.
By 1975 the majority of local authorities in England and Wales had abandoned the 11-Plus examination and moved to a comprehensive system. Over that 10-year period many secondary modern schools and grammar schools were amalgamated to form large neighbourhood comprehensives, whilst a number of new schools were built to accommodate a growing school population. By the mid-1970s the system had been almost fully implemented, with virtually no secondary modern schools remaining. Many grammar schools were either closed or changed to comprehensive status. Some local authorities, including Sandwell and Dudley in the West Midlands, changed all of their state secondary schools to comprehensive schools during the 1970s.
In 1976 the Labour prime minister James Callaghan launched what became known as the 'great debate' on the education system. He went on to list the areas he felt needed closest scrutiny: the case for a core curriculum, the validity and use of informal teaching methods, the role of school inspection and the future of the examination system. Comprehensive schools remain the most common type of state secondary school in England, and the only type in Wales. They account for around 90% of pupils, or 64% if one does not count schools with low-level selection. This figure varies by region.
Since the 1988 Education Reform Act, parents have had a right to choose which school their child should go to, or not to send them to school at all and to educate them at home instead. The concept of "school choice" introduces the idea of competition between state schools, a fundamental change to the original "neighbourhood comprehensive" model, and is partly intended as a means by which schools that are perceived to be inferior are forced either to improve or, if hardly anyone wants to go there, to close down. Government policy is currently promoting 'specialisation' whereby parents choose a secondary school appropriate for their child's interests and skills. Most initiatives focus on parental choice and information, implementing a pseudo-market incentive to encourage better schools. This logic has underpinned the controversial league tables of school performance.
Scotland
Scotland has a very different educational system from England and Wales, though also based on comprehensive education. It has different ages of transfer, different examinations and a different philosophy of choice and provision. All publicly funded primary and secondary schools are comprehensive. The Scottish Government has rejected plans for specialist schools as of 2005.
Northern Ireland
Education in Northern Ireland differs slightly from systems used elsewhere in the United Kingdom, but it is more similar to that used in England and Wales than it is to Scotland.
References
External links
Comprehensive Future – the campaign for fair admissions
Centre for the Support of Comprehensive Schools
Comprehensive Education – Examining the Evidence Report of 1999 seminar organised by CASE (the Campaign for State Education in the UK).
Campaign for State Education
Secretary of State for Education Ruth Kelly on comprehensive education
Comp, a BBC Radio 4 documentary about the creation of comprehensive schools
Discussions in 2002 about the future of comprehensives
http://www.arasite.org/edinandsocmods.html
Category:Philosophy of education
Category:Education in the United Kingdom
Category:State schools in the United Kingdom
Category:School types | 699,350 | 2017-01 |
Philadelphia | Philadelphia is the largest city in the Commonwealth of Pennsylvania and the fifth-most populous city in the United States, with an estimated population of 1,567,442 and more than 6 million in the seventh-largest metropolitan statistical area. Philadelphia is the economic and cultural anchor of the Delaware Valley, a region located in the Northeastern United States at the confluence of the Delaware and Schuylkill rivers, with 7.2 million people residing in the eighth-largest combined statistical area in the United States.
In 1682, William Penn founded the city to serve as capital of the Pennsylvania Colony. Philadelphia played an instrumental role in the American Revolution as a meeting place for the Founding Fathers of the United States, who signed the Declaration of Independence in 1776 and the Constitution in 1787. Philadelphia was one of the nation's capitals in the Revolutionary War, and served as temporary U.S. capital while Washington, D.C., was under construction. In the 19th century, Philadelphia became a major industrial center and railroad hub that grew from an influx of European immigrants. It became a prime destination for African-Americans in the Great Migration and surpassed two million occupants by 1950.
Reflecting broader shifts in the nation's economy, in the late 1960s Philadelphia experienced a loss of manufacturing companies and jobs to lower-taxed regions of the USA and often overseas. As a result, the economic base of Philadelphia, which had historically been manufacturing, declined significantly. In addition, consolidation in several American industries (retailing, financial services and health care in particular) reduced the number of companies headquartered in Philadelphia. The economic impact of these changes would reduce Philadelphia's tax base and the resources of local government. Philadelphia struggled through a long period of adjustment to these economic changes, coupled with significant demographic change as wealthier residents moved into the nearby suburbs and more immigrants moved into the city. The city in fact approached bankruptcy in the late 1980s. Revitalization began in the late 1990s, with gentrification turning around many neighborhoods and reversing its decades-long trend of population loss.
The area's many universities and colleges make Philadelphia a top international study destination, as the city has evolved into an educational and economic hub. With a gross domestic product of $388 billion, Philadelphia ranks ninth among world cities and fourth in the nation. Philadelphia is the center of economic activity in Pennsylvania and is home to seven Fortune 1000 companies. The Philadelphia skyline is growing, with a market of almost 81,900 commercial properties in 2016 including several nationally prominent skyscrapers. The city is known for its arts, culture, and history, attracting over 39 million domestic tourists in 2013. Philadelphia has more outdoor sculptures and murals than any other American city.Gateway to Public Art in Philadelphia, Fairmount Park Art Association. Fairmount Park, when combined with the adjacent Wissahickon Valley Park in the same watershed, is one of the largest contiguous urban park areas in the United States. The 67 National Historic Landmarks in the city helped account for the $10 billion generated by tourism. Philadelphia is the birthplace of the United States Marine Corps, and is also the home of many U.S. firsts, including the first library (1731), first hospital (1751) and medical school (1765), first Capitol (1777), first stock exchange (1790), first zoo (1874), and first business school (1881). Philadelphia is the only World Heritage City in the United States.
History
thumb|210px|An 18th century map of Philadelphia.
Before Europeans arrived, the Philadelphia area was home to the Lenape (Delaware) Indians in the village of Shackamaxon. The Lenape are a Native American tribe and First Nations band government.Pritzker 422 They are also called Delaware IndiansJosephy 188–189 and their historical territory was along the Delaware River watershed, western Long Island and the Lower Hudson Valley, within the divides of the frequently mountainous landforms flanking the Delaware River's drainage basin.
The Susquehanna-Delaware watershed divides bounded the frequently contested hunting grounds between the rival Susquehannock and Lenape peoples, whilst the Catskills and Berkshires played a similar boundary role in the northern regions of their original colonial-era range. Most Lenape were pushed out of their Delaware homeland during the 18th century by expanding European colonies, exacerbated by losses from intertribal conflicts. Lenape communities were weakened by newly introduced diseases, mainly smallpox, and violent conflict with Europeans. Iroquois people occasionally fought the Lenape. Surviving Lenape moved west into the upper Ohio River basin. The American Revolutionary War and United States' independence pushed them further west. In the 1860s, the United States government sent most Lenape remaining in the eastern United States to the Indian Territory (present-day Oklahoma and surrounding territory) under the Indian removal policy. In the 21st century, most Lenape now reside in the US state of Oklahoma, with some communities living also in Wisconsin, Ontario (Canada) and in their traditional homelands.
Europeans came to the Delaware Valley in the early 17th century, with the first settlements founded by the Dutch, who in 1623 built Fort Nassau on the Delaware River opposite the Schuylkill River in what is now Brooklawn, New Jersey. The Dutch considered the entire Delaware River valley to be part of their New Netherland colony. In 1638, Swedish settlers led by renegade Dutch established the colony of New Sweden at Fort Christina (present day Wilmington, Delaware) and quickly spread out in the valley. In 1644, New Sweden supported the Susquehannocks in their military defeat of the English colony of Maryland. In 1648, the Dutch built Fort Beversreede on the west bank of the Delaware, south of the Schuylkill near the present-day Eastwick section of Philadelphia, to reassert their dominion over the area. The Swedes responded by building Fort Nya Korsholm, named New Korsholm after a town that is now in Finland. In 1655, a Dutch military campaign led by New Netherland Director-General Peter Stuyvesant took control of the Swedish colony, ending its claim to independence, although the Swedish and Finnish settlers continued to have their own militia, religion, and court, and to enjoy substantial autonomy under the Dutch. The English conquered the New Netherland colony in 1664, but the situation did not really change until 1682, when the area was included in William Penn's charter for Pennsylvania.
thumb|upright=1.1|Penn's Treaty with the Indians by Benjamin West
In 1681, in partial repayment of a debt, Charles II of England granted William Penn a charter for what would become the Pennsylvania colony. Despite the royal charter, Penn bought the land from the local Lenape to be on good terms with the Native Americans and ensure peace for his colony. Penn made a treaty of friendship with Lenape chief Tammany under an elm tree at Shackamaxon, in what is now the city's Fishtown section. Penn named the city Philadelphia, which is Greek for brotherly love (from philos, "love" or "friendship", and adelphos, "brother"). As a Quaker, Penn had experienced religious persecution and wanted his colony to be a place where anyone could worship freely. This tolerance, far more than afforded by most other colonies, led to better relations with the local Native tribes and fostered Philadelphia's rapid growth into America's most important city.
Penn planned a city on the Delaware River to serve as a port and place for government. Hoping that Philadelphia would become more like an English rural town instead of a city, Penn laid out roads on a grid plan to keep houses and businesses spread far apart, with areas for gardens and orchards. The city's inhabitants did not follow Penn's plans, as they crowded along the Delaware River near the port, and subdivided and resold their lots.Philadelphia: A 300-Year History, pages 7, 14 – 16 Before Penn left Philadelphia for the last time, he issued the Charter of 1701 establishing it as a city. It became an important trading center, poor at first, but with tolerable living conditions by the 1750s. Benjamin Franklin, a leading citizen, helped improve city services and founded new ones, such as fire protection, a library, and one of the American colonies' first hospitals.
thumb|upright=1.1|Benjamin Franklin, 1777
A number of important philosophical societies were formed, which were centers of the city's intellectual life: the Philadelphia Society for Promoting Agriculture (1785), the Pennsylvania Society for the Encouragement of Manufactures and the Useful Arts (1787), the Academy of Natural Sciences (1812), and the Franklin Institute (1824). These worked to develop and finance new industries and attract skilled and knowledgeable immigrants from Europe.
Philadelphia's importance and central location in the colonies made it a natural center for America's revolutionaries. By the 1750s, Philadelphia had surpassed Boston to become the largest city and busiest port in British America, and second in the British Empire, behind London. The city hosted the First Continental Congress before the American Revolutionary War; the Second Continental Congress, which signed the United States Declaration of Independence, during the war; and the Constitutional Convention (1787) after the war. Several battles were fought in and near Philadelphia as well.
thumb|upright=1.1|President's House, Philadelphia. This mansion at 6th & Market Streets served as the presidential mansion of George Washington and John Adams, 1790–1800.
Philadelphia served as the temporary capital of the United States, 1790–1800, while the Federal City was under construction in the District of Columbia.Insight Guides: Philadelphia and Surroundings, pages 30–33 In 1793, one of the largest yellow fever epidemics in U.S. history killed at least 4,000 and up to 5,000 people in Philadelphia, roughly 10% of the city's population.
The state government left Philadelphia in 1799, and the federal government was moved to Washington, DC in 1800 with completion of the White House and Capitol. The city remained the young nation's largest with a population of nearly 50,000 at the turn of the 19th century; it was a financial and cultural center. Before 1800, its free black community founded the African Methodist Episcopal Church (AME), the first independent black denomination in the country, and the first black Episcopal Church. The free black community also established many schools for its children, with the help of Quakers. New York City soon surpassed Philadelphia in population, but with the construction of roads, canals, and railroads, Philadelphia became the first major industrial city in the United States.
thumb|upright=1.1|Opening day ceremonies at the Centennial Exhibition at Memorial Hall, 1876, first World's Fair in the US.
Throughout the 19th century, Philadelphia had a variety of industries and businesses, the largest being textiles. Major corporations in the 19th and early 20th centuries included the Baldwin Locomotive Works, William Cramp and Sons Ship and Engine Building Company, and the Pennsylvania Railroad.Philadelphia: A 300-Year History, pages 214, 218, 428 – 429 Industry, along with the U.S. Centennial, was celebrated in 1876 with the Centennial Exposition, the first official World's Fair in the United States. Immigrants, mostly Irish and German, settled in Philadelphia and the surrounding districts. The rise in population of the surrounding districts helped lead to the Act of Consolidation of 1854, which extended the city limits of Philadelphia from the 2 square miles of present-day Center City to the roughly 130 square miles of Philadelphia County.
thumb|upright=1.1|Library and Surgeon's Hall, Fifth-street.
These immigrants were largely responsible for the first general strike in North America in 1835, in which workers in the city won the ten-hour workday. The city was a destination for thousands of Irish immigrants fleeing the Great Famine in the 1840s; housing for them was developed south of South Street, and was later occupied by succeeding immigrants. They established a network of Catholic churches and schools, and dominated the Catholic clergy for decades. Anti-Irish, anti-Catholic Nativist riots had erupted in Philadelphia in 1844. In the latter half of the century, immigrants from Russia, Eastern Europe and Italy; and African Americans from the southern U.S. settled in the city.Insight Guides: Philadelphia and Surroundings, pages 38–39 Between 1880 and 1930, the African-American population of Philadelphia increased from 31,699 to 219,559."Notes on the historical development of population in West Philadelphia", University of Pennsylvania."Detroit and the Great Migration, 1916–1929 by Elizabeth Anne Martin". Bentley Historical Library, University of Michigan. Twentieth-century black newcomers were part of the Great Migration out of the rural South to northern and midwestern industrial cities.
thumb|upright=1.1|An anti-Irish Catholic nativist riot in Southwark, July 7, 1844.
thumb|upright=1.1|Eighth and Market Streets, 1840
In the American Civil War, Philadelphia was represented by the Washington Grays.
thumb|upright=1.1|8th and Market Street, showing the Strawbridge and Clothier department store, 1910s
By the 20th century, Philadelphia had become known as "corrupt and contented", with a complacent population and an entrenched Republican political machine.Philadelphia: A 300-Year History, pages 535, 537 The first major reform came in 1917 when outrage over the election-year murder of a police officer led to the shrinking of the Philadelphia City Council from two houses to just one.Philadelphia: A 300-Year History, pages 563 – 564 In July 1919, Philadelphia was one of more than 36 industrial cities nationally to suffer a race riot of ethnic whites against blacks during Red Summer, in post-World War I unrest, as recent immigrants competed with blacks for jobs. In the 1920s, the public flouting of Prohibition laws, organized crime or mob violence, and police involvement in illegal activities led to the appointment of Brigadier General Smedley Butler of the U.S. Marine Corps as director of public safety, but political pressure prevented any long-term success in fighting crime and corruption.Philadelphia: A 300-Year History, pages 578 – 581
In 1940, non-Hispanic whites constituted 86.8% of the city's population. The population peaked at more than two million residents in 1950, then began to decline with the restructuring of industry, which led to the loss of many middle-class union jobs. In addition, suburbanization had been drawing off many of the wealthier residents to outlying railroad commuting towns and newer housing. Revitalization and gentrification of neighborhoods began in the late 1970s and continues into the 21st century, with much of the development in the Center City and University City areas of the city. After many of the old manufacturers and businesses left Philadelphia or shut down, the city started attracting service businesses and began to more aggressively market itself as a tourist destination. Glass-and-granite skyscrapers were built in Center City. Historic areas such as Independence National Historical Park located in Old City and Society Hill were renovated during the reformist mayoral era of the 1950s through the 1980s. They are now among the most desirable living areas of Center City. This has slowed the city's 40-year population decline after it lost nearly one-quarter of its population.Insight Guides: Philadelphia and Surroundings, pages 44–45A Concise History of Philadelphia, page 78
Geography
thumb|A simulated-color image of Philadelphia and the Delaware River, taken by NASA's Landsat 7 satellite
Topography
Philadelphia is at 39° 57′ north latitude and 75° 10′ west longitude, and the 40th parallel north passes through the northern parts of the city. The city encompasses , of which is land and , or 5.29%, is water. Bodies of water include the Delaware and Schuylkill rivers, and Cobbs, Wissahickon, and Pennypack creeks.
The lowest point is above sea level, while the highest point is in Chestnut Hill, about above sea level (near the intersection of Germantown Avenue and Bethlehem Pike). (Example coordinates of high point: Latitude: 40° 04′ 37″, Longitude: −75° 12′ 29″.)
Philadelphia sits on the Fall Line that separates the Atlantic Coastal Plain from the Piedmont.Railsback, Bruce. "The Fall Line." GEOL 1122: Earth's History of Global Change. University of Georgia Department of Geology. The rapids on the Schuylkill River at East Falls were inundated by the completion of the Fairmount Dam."Philadelphia Neighborhoods and Place Names, A–K". Philadelphia Information Locator System.
The city is the seat of its own county. The adjacent counties are Montgomery to the north; Bucks to the northeast; Burlington County, New Jersey, to the east; Camden County, New Jersey, to the southeast; Gloucester County, New Jersey, to the south; and Delaware County to the west.
Cityscape
City planning
thumb|left|The heart of Logan Square at night.
Philadelphia's central city was created in the 17th century following the plan by William Penn's surveyor Thomas Holme. Center City is structured with long straight streets running east-west and north-south forming a grid pattern. The original city plan was designed to allow for easy travel and to keep residences separated by open space that would help prevent the spread of fire. The Delaware and Schuylkill rivers served as the early boundaries within which the city's street plan was laid out. In addition, Penn planned the creation of five public parks in the city, which were renamed in 1824 (new names in parentheses): Centre Square, North East Publick Square (Franklin Square), Northwest Square (Logan Square), Southwest Square (Rittenhouse Square), and Southeast Square (Washington Square). Center City has grown into the second-most populated downtown area in the United States, after Midtown Manhattan in New York City, with an estimated 183,240 residents in 2015.
Philadelphia's neighborhoods are divided into large sections—North, Northeast, Northwest, West, South and Southwest Philadelphia—all of which surround Center City, which corresponds closely with the city's limits before consolidation in 1854. Each of these large areas contains numerous neighborhoods, some of whose boundaries derive from the boroughs, townships, and other communities that made up Philadelphia County before their absorption into the city.
thumb|Elfreth's Alley, "Our nation's oldest residential street", dating to 1702.Historical marker on Elfreth's Alley
The City Planning Commission, tasked with guiding growth and development of the city, has divided the city into 18 planning districts as part of the Philadelphia2035 physical development plan. Much of the city's 1980 zoning code was overhauled between 2007 and 2012 as part of a joint effort between former mayors John F. Street and Michael Nutter. The zoning changes were intended to correct inaccurate zoning maps and to streamline future development in line with community preferences, as the city forecasts an additional 100,000 residents and 40,000 jobs being added to Philadelphia by 2035.
The Philadelphia Housing Authority is the largest landlord in Pennsylvania. Established in 1937, it is the nation's fourth-largest housing authority, housing about 84,000 people and employing 1,250. In 2013, its budget was $371 million. The Philadelphia Parking Authority works to ensure adequate parking for city residents, businesses and visitors.
Architecture
thumb|left|Christ Church, a sophisticated example of Georgian architecture.
Philadelphia's architectural history dates back to Colonial times and includes a wide range of styles. The earliest structures were of log construction, but brick structures were common by 1700. During the 18th century, the cityscape was dominated by Georgian architecture, including Independence Hall and Christ Church.
thumb|Center City Philadelphia, showing the One Liberty Place skyscraper behind City Hall and their contrast in architectural styles.
thumb|left|Georgian style homes in Society Hill.
In the first decades of the 19th century, Federal architecture and Greek Revival architecture were dominated by Philadelphia architects such as Benjamin Latrobe, William Strickland, John Haviland, John Notman, Thomas U. Walter, and Samuel Sloan. Frank Furness is considered Philadelphia's greatest architect of the second half of the 19th century, but his contemporaries included John McArthur, Jr., Addison Hutton, Wilson Eyre, the Wilson Brothers, and Horace Trumbauer. In 1871, construction began on the Second Empire-style Philadelphia City Hall. The Philadelphia Historical Commission was created in 1955 to preserve the cultural and architectural history of the city. The commission maintains the Philadelphia Register of Historic Places, adding historic buildings, structures, sites, objects and districts as it sees fit.
In 1932, Philadelphia became home to the first International Style skyscraper in the United States, The PSFS Building, designed by George Howe and William Lescaze. It is the United States' first modern skyscraper and considered the most important one built in the first part of the 20th century.
The City Hall remained the tallest building in the city until 1987 when One Liberty Place was constructed. Numerous glass and granite skyscrapers were built in Philadelphia's Center City from the late 1980s onwards. In 2007, the Comcast Center surpassed One Liberty Place to become the city's tallest building. The Comcast Innovation and Technology Center is under construction in Center City and is planned to reach a height of 1,121 feet (342 meters); upon completion, the tower is expected to be the tallest skyscraper in the United States outside of New York City and Chicago.
For much of Philadelphia's history, the typical home has been the row house. The row house was introduced to the United States via Philadelphia in the early 19th century and, for a time, row houses built elsewhere in the United States were known as "Philadelphia rows". A variety of row houses are found throughout the city, from Victorian-style homes in North Philadelphia to twin row houses in West Philadelphia. While newer homes are scattered throughout the city, much of the housing is from the early 20th century or older. The great age of the homes has created numerous problems, including blight and vacant lots in many parts of the city, while other neighborhoods such as Society Hill, which has the largest concentration of 18th-century architecture in the United States, have been rehabilitated and gentrified.
Climate
Under the Köppen climate classification, Philadelphia falls in the northern periphery of the humid subtropical climate zone (Köppen Cfa). Under the Trewartha climate classification, the city has a temperate maritime climate (Do).Trewartha GT, Horn LH (1980) Introduction to climate, 5th edn. McGraw Hill, New York, NY Summers are typically hot and muggy, fall and spring are generally mild, and winter is cold.
Snowfall is highly variable, with some winters bringing only light snow and others bringing several major snowstorms, with the normal seasonal snowfall standing at ; snow in November or April is rare, and a sustained snow cover is rare. Precipitation is generally spread throughout the year, with eight to twelve wet days per month, at an average annual rate of , but historically ranging from in 1922 to in 2011. The most rain recorded in one day occurred on July 28, 2013, when fell at Philadelphia International Airport.
The January daily average is , though, in a normal winter, the temperature frequently rises to during thaws and dips to for 2 or 3 nights. July averages , although heat waves accompanied by high humidity and heat indices are frequent; highs reach or exceed on 27 days of the year. The average window for freezing temperatures is November 6 thru April 2, allowing a growing season of 217 days. Early fall and late winter are generally dry; February's average of makes it the area's driest month. The dewpoint in the summer averages between to .
Seasonal snowfall accumulation has ranged from trace amounts in 1972–73 to in the winter of 2009–10. The city's heaviest single-storm snowfall, at , occurred in January 1996.
The highest recorded temperature was on August 7, 1918, but + temperatures are uncommon. The lowest officially recorded temperature was on February 9, 1934, but with the last such occurrence being January 19, 1994, temperatures at or below the mark are rare. The record low maximum is on February 10, 1899 and December 30, 1880, while the record high minimum is on July 23, 2011 and July 24, 2010.
In the American Lung Association 2015 State of the Air report, Philadelphia County received an ozone grade of F and a 24-hour particle pollution rating of C. The county passed the annual particle pollution rating.
Demographics
According to the 2014 United States Census estimates, there were 1,560,297 people residing in the City of Philadelphia, representing a 2.2% increase since 2010. From the 1960s up until 2006, the city's population declined year after year. It eventually reached a low of 1,488,710 residents in 2006 before beginning to rise again. Since 2006, Philadelphia added 71,587 residents in eight years. A study done by the city projected that the population would increase to about 1,630,000 residents by 2035, an increase of about 100,000 from 2010.
The racial makeup of the city in 2014 was 45.3% White (35.8% Non-Hispanic), 44.1% Black or African American, 0.8% Native American and Alaska Native, 7.2% Asian, 0.1% Native Hawaiian and Other Pacific Islander, 2.5% Two or More Races, and 13.6% were Hispanic or Latino.
Racial composition of Philadelphia by census year:
Racial composition                             2010     2000     1990     1980     1970
White (includes White Hispanics)               41.8%    45.0%    53.5%    58.2%    65.6%
 – Non-Hispanic White                          36.9%    42.5%    52.1%    57.1%    63.8%
Black or African American                      43.6%    43.2%    37.8%    39.9%    33.6%
 – Non-Hispanic Black                          42.2%    42.6%    39.3%    37.5%    33.3%
Native American                                0.5%     0.3%     0.2%     0.1%     0.1%
Asian                                          6.3%     4.5%     2.7%     1.1%     0.3%
Native Hawaiian and Other Pacific Islander     0.0%     0.0%     0.0%     n/a      n/a
Some other race                                5.9%     4.8%     3.7%     2.7%     0.4%
Two or more races                              2.8%     2.2%     n/a      n/a      n/a
Hispanic or Latino (of any race)               12.3%    8.5%     5.6%     3.8%     2.4%
(1980 figures: Pennsylvania – Race and Hispanic Origin for Selected Cities and Other Places: Earliest Census to 1990; the 1970 non-Hispanic White figure is from a 15% sample.)
In comparison, the 2010 Census Redistricting Data indicated that the racial makeup of the city was 661,839 (43.4%) African American, 626,221 (41.0%) White, 6,996 (0.5%) Native American, 96,405 (6.3%) Asian (2.0% Chinese, 1.2% Indian, 0.9% Vietnamese, 0.6% Cambodian, 0.4% Korean, 0.3% Filipino, 0.2% Pakistani, 0.1% Indonesian), 744 (0.0%) Pacific Islander, 90,731 (5.9%) from other races, and 43,070 (2.8%) from two or more races. Hispanic or Latino of any race were 187,611 persons (12.3%); 8.0% of Philadelphia is Puerto Rican, 1.0% Dominican, 1.0% Mexican, 0.3% Cuban, and 0.3% Colombian. The racial breakdown of Philadelphia's Hispanic/Latino population was 63,636 (33.9%) White, 17,552 (9.4%) African American, 3,498 (1.9%) Native American, 884 (0.47%) Asian, 287 (0.15%) Pacific Islander, 86,626 (46.2%) from other races, and 15,128 (8.1%) from two or more races. The five largest European ancestries reported in the 2010 United States Census included Irish (12.5%), Italian (8.4%), German (8.1%), Polish (3.6%), and English (3.0%).
According to a 2014 study by the Pew Research Center, 68% of the city's population identified as Christian, with 41% professing attendance at a variety of churches that could be considered Protestant and 26% professing Roman Catholic beliefs,Major U.S. metropolitan areas differ in their religious profiles, Pew Research Center while 24% claimed no religious affiliation. The same study says that other religions (including Judaism, Buddhism, Islam, and Hinduism) collectively make up about 8% of the population.
thumb|left|"Leacht Quimhneachain Na Gael", Irish famine memorial located in Penn's Landing, honoring Philadelphia's large Irish community (14.2% of the city's population).
The average population density was 11,457 people per square mile (4,405.4/km²). The Census reported that 1,468,623 people (96.2% of the population) lived in households, 38,007 (2.5%) lived in non-institutionalized group quarters, and 19,376 (1.3%) were institutionalized. In 2013, the city reported having 668,247 total housing units, down slightly from 670,171 housing units in 2010. About 87 percent of housing units were occupied, while 13 percent were vacant, a slight change from 2010, when 89.5 percent of units (599,736) were occupied and 10.5 percent (70,435) were vacant. Of the city's residents, 32 percent reported having no vehicles available while 23 percent had two or more vehicles available.
In 2010, 24.9 percent of households reported having children under the age of 18 living with them, 28.3 percent were married couples living together, 22.5 percent had a female householder with no husband present, 6.0 percent had a male householder with no wife present, and 43.2 percent were non-families. The city reported that 34.1 percent of all households were made up of individuals, while 10.5 percent had someone living alone who was 65 years of age or older. The average household size was 2.45 and the average family size was 3.20. In 2013, 56 percent of women who gave birth in the previous 12 months were unmarried. Of Philadelphia's adults, 31 percent were married or lived as a couple, 55 percent were not married, 11 percent were divorced or separated, and 3 percent were widowed.
According to the Census Bureau, the median household income in 2013 was $36,836, down 7.9 percent from 2008 when the median household income was $40,008 (in 2013 dollars). For comparison, the median household income among metropolitan areas was $60,482, down 8.2 percent in the same period, and the national median household income was $55,250, down 7.0 percent from 2008. The city's wealth disparity is evident when neighborhoods are compared. Residents in Society Hill had a median household income of $93,720 while residents in one of North Philadelphia's districts reported the lowest median household income, $14,185.
During the last decade, Philadelphia experienced a large shift in its age profile. In 2000, the city's population pyramid had a largely stationary shape. In 2013, the city took on an expansive pyramid shape, with an increase in the three millennial age groups: 20 to 24, 25 to 29, and 30 to 34. The 25- to 29-year-old age group was the city's largest age cohort. According to the 2010 Census, 343,837 residents (22.5%) were under the age of 18; 203,697 (13.3%) were 18 to 25; 434,385 (28.5%) were 25 to 44; 358,778 (23.5%) were 45 to 64; and 185,309 (12.1%) were 65 years of age or older. The median age was 33.5 years. For every 100 females there were 89.4 males, and for every 100 females age 18 and over, there were 85.7 males. The city had 22,018 births in 2013, down from a peak of 23,689 births in 2008. Philadelphia's death rate was at its lowest in at least a half-century, with 13,691 deaths in 2013. Another factor contributing to the population increase is Philadelphia's immigration rate: in 2013, 12.7 percent of residents were foreign-born, just shy of the national average of 13.1 percent.
thumb|right|Italian Market, reflecting South Philadelphia's Italian heritage.
The Irish, Italians, Poles, Germans, English, and Greeks are the city's largest European ethnic groups. Philadelphia has the second-largest Irish and Italian populations in the United States, after New York City. South Philadelphia remains one of the largest Italian neighborhoods in the country and is home to the Italian Market. The Pennsport neighborhood and Gray's Ferry section of South Philadelphia, home to many Mummer clubs, are well known as Irish neighborhoods. The Kensington section, Port Richmond, and Fishtown have historically been heavily Irish and Polish. Port Richmond in particular is well known as the center of the Polish immigrant and Polish-American community in Philadelphia, and it remains a common destination for Polish immigrants. Northeast Philadelphia, although known for its Irish and Irish-American population, is also home to a large Jewish and Russian population. Mount Airy in Northwest Philadelphia also contains a large Jewish community, while nearby Chestnut Hill is historically known as an Anglo-Saxon Protestant stronghold.
left|thumb|Washington Square West, the heart of the Gayborhood.
There has also been an increase in yuppie, bohemian, and hipster residents, particularly around Center City, the neighborhood of Northern Liberties, and the neighborhoods around the city's universities, such as near Temple University in North Philadelphia and near Drexel University and the University of Pennsylvania in West Philadelphia. Philadelphia is also home to a significant gay and lesbian population. Philadelphia's Gayborhood, located near Washington Square, is home to a large concentration of gay- and lesbian-friendly businesses, restaurants, and bars.
The Black American population in Philadelphia is the third-largest in the country, after New York City and Chicago. Historically, West Philadelphia and North Philadelphia were largely black neighborhoods, but many residents are leaving these areas in favor of the Northeast and Southwest sections of the city. The Black American population includes a higher proportion of Muslims than in most American cities. West Philadelphia also has significant Caribbean and African immigrant populations.
The Puerto Rican population in Philadelphia is the second-largest after New York City, and the second fastest-growing after Orlando. There are large Puerto Rican and Dominican populations in North Philadelphia and the Northeast, as well as a significant Mexican population in South Philadelphia.
Philadelphia has significant Asian populations, mainly from India, China, Vietnam, and South Korea. Chinatown and the Northeast have the largest Asian presences, with a large Korean community in Olney. South Philadelphia is also home to large Cambodian, Vietnamese, and Chinese communities. Philadelphia has the fifth-largest Muslim population among American cities.Overcoming the World Missions Crisis: Thinking Strategically to Reach the World, Russell Penney, page 110, 2001
Languages
79.12% (1,112,441) of Philadelphia residents age 5 and older spoke English at home as a primary language, while 9.72% (136,688) spoke Spanish, 1.64% (23,075) Chinese, 0.89% (12,499) Vietnamese, 0.77% (10,885) Russian, 0.66% (9,240) French, 0.61% (8,639) other Asian languages, 0.58% (8,217) African languages, 0.56% (7,933) Cambodian (Mon-Khmer), and 0.55% (7,773) Italian. In total, 20.88% (293,544) of Philadelphia's population age 5 and older spoke a mother language other than English.
Economy
Top publicly traded companies in Philadelphia for 2014
Corporation | Rank
Comcast | 44
Aramark | 209
Crown Holdings | 313
FMC | 581
Urban Outfitters | 715
Chemtura | 775
Pep Boys | 945
Notes: Rankings are for the fiscal year ended 2014. Source: Fortune
Philadelphia is the center of economic activity in Pennsylvania, with the headquarters of seven Fortune 1000 companies located within city limits. According to the Bureau of Economic Analysis, the Philadelphia area had a total gross metropolitan product of $347 billion in 2010, the seventh-largest metropolitan economy in the United States. Philadelphia was rated by the GaWC as an 'Alpha-' city in its categorization of world cities.
Philadelphia's economic sectors include information technology, manufacturing, oil refining, food processing, health care, biotechnology, tourism, and financial services. Financial activities account for the largest sector of the metropolitan area's economy, and it is one of the largest health education and research centers in the United States.
thumb|left|Philadelphia Stock Exchange, the oldest stock exchange in the United States.
The city is home to the Philadelphia Stock Exchange and some of the area's largest companies, including cable television and internet provider Comcast, insurance companies Colonial Penn, CIGNA, and Independence Blue Cross, energy company Sunoco, food services company Aramark, Crown Holdings, chemical makers Rohm and Haas and FMC, pharmaceutical company GlaxoSmithKline, Boeing Rotorcraft Systems, and automotive parts retailer Pep Boys.
Philadelphia's annualized unemployment rate was 7.8% in 2014, down from 10.0% the previous year. This is higher than the national average of 6.2%. Similarly, the rate of new jobs added to the city's economy lagged behind national job growth. In 2014, about 8,800 jobs were added to the city's economy. The sectors with the largest number of jobs added were education and health services, leisure and hospitality, and professional and business services. Declines were seen in the city's manufacturing and government sectors.
About 31.9% of the city's population is not in the labor force. The city's largest employers are the federal and city governments. Philadelphia's largest private employer is the University of Pennsylvania, followed by the Children's Hospital of Philadelphia. A study commissioned by the city's government projected that 40,000 jobs would be added to the city by 2035, raising the number of jobs from 675,000 in 2010 to an estimated 715,000.
Philadelphia's history attracts many tourists, with the Independence National Historical Park (which includes the Liberty Bell, Independence Hall, and other historical sites) receiving over 3.6 million visitors in 2014. The Greater Philadelphia region was visited by 39 million people in 2013 generating $10 billion in economic impact.
Culture
thumb|left|Independence Hall, where both the United States Declaration of Independence and the United States Constitution were debated and adopted.
Philadelphia is home to many national historical sites that relate to the founding of the United States. Independence National Historical Park is the center of these historical landmarks and one of the country's 22 UNESCO World Heritage Sites. Independence Hall, where the Declaration of Independence was signed, and the Liberty Bell are the city's most famous attractions. Other historic sites include the homes of Edgar Allan Poe, Betsy Ross, and Thaddeus Kosciuszko, early government buildings like the First and Second Banks of the United States, Fort Mifflin, and the Gloria Dei (Old Swedes') Church. Philadelphia alone has 67 National Historic Landmarks, the third most of any city in the country.
thumb|right|First Bank of the United States
Philadelphia's major science museums include the Franklin Institute, which contains the Benjamin Franklin National Memorial; the Academy of Natural Sciences; the Mütter Museum; and the University of Pennsylvania Museum of Archaeology and Anthropology. History museums include the National Constitution Center, the Atwater Kent Museum of Philadelphia History, the National Museum of American Jewish History, the African American Museum in Philadelphia, the Historical Society of Pennsylvania, the Grand Lodge of Free and Accepted Masons in the state of Pennsylvania, The Masonic Library and Museum of Pennsylvania, and Eastern State Penitentiary. Philadelphia is home to the United States' first zoo and hospital, as well as Fairmount Park, one of America's oldest and largest urban parks.
The city is home to important archival repositories, including the Library Company of Philadelphia, established in 1731, and the Athenaeum of Philadelphia, founded in 1814. The Presbyterian Historical Society, the country's oldest continuous denominational historical society, is also located there.
Accent
The Philadelphia dialect, which is spread throughout the Delaware Valley and South Jersey, is part of Mid-Atlantic American English and as such is similar in many ways to the Baltimore dialect. Unlike the Baltimore dialect, however, the Philadelphia accent also shares many similarities with the New York accent. Thanks to over a century of linguistic data collected by researchers at the University of Pennsylvania under sociolinguist William Labov, the Philadelphia dialect has been one of the best-studied forms of American English.
Arts
thumb|Walnut Street Theatre, the oldest continuously operating theatre in the English-speaking world and the oldest in the United States.
The city contains many art museums, such as the Pennsylvania Academy of the Fine Arts and the Rodin Museum, which holds the largest collection of work by Auguste Rodin outside France. The city's major art museum, the Philadelphia Museum of Art, is one of the largest art museums in the United States. Its long flight of steps to the main entrance became famous after the film Rocky (1976).
thumb|left|Philadelphia Museum of Art, amongst the largest art museums in the United States.Main Web page, Philadelphia Museum of Art, accessed April 26, 2007
The city is home to the Philadelphia Sketch Club, one of the country's oldest artists' clubs, and The Plastic Club, started by women excluded from the Sketch Club. It has a profusion of art galleries, many of which participate in the First Friday event. The first Friday of every month, galleries in Old City are open late. Annual events include film festivals and parades, the most famous being the New Year's Day Mummers Parade.
Areas such as South Street and Old City have a vibrant night life. The Avenue of the Arts in Center City contains many restaurants and theaters, such as the Kimmel Center for the Performing Arts, home to the Philadelphia Orchestra, generally considered one of the top five orchestras in the United States, and the Academy of Music, the nation's oldest continually operating opera house, home to the Opera Company of Philadelphia and the Pennsylvania Ballet. The Wilma Theatre and the Philadelphia Theatre Company, both of which produce a variety of new works, have built new venues on the avenue in the last decade. Several blocks to the east are the Walnut Street Theatre, America's oldest theatre and the largest subscription theater in the world, as well as the Lantern Theatre at St. Stephens Church, one of a number of smaller venues.
Philadelphia has more public art than any other American city. In 1872, the Association for Public Art (formerly the Fairmount Park Art Association) was created, the first private association in the United States dedicated to integrating public art and urban planning. In 1959, lobbying by the Artists Equity Association helped create the Percent for Art ordinance, the first for a U.S. city. The program, which has funded more than 200 pieces of public art, is administered by the Philadelphia Office of Arts and Culture, the city's art agency.
thumb|Academy of Music, home of the Philadelphia Orchestra, 1900–2001
thumb|left|Pennsylvania Academy of the Fine Arts, the nation's oldest art school and art museum
Philadelphia has more murals than any other U.S. city, thanks in part to the 1984 creation of the Department of Recreation's Mural Arts Program, which seeks to beautify neighborhoods and provide an outlet for graffiti artists. The program has funded more than 2,800 murals by professional, staff and volunteer artists and educated more than 20,000 youth in underserved neighborhoods throughout Philadelphia.
Philadelphia artists have had a prominent national role in popular music. In the 1970s, Philadelphia soul influenced the music of that and later eras. On July 13, 1985, Philadelphia hosted the American end of the Live Aid concert at John F. Kennedy Stadium. The city reprised this role for the Live 8 concert, bringing some 700,000 people to the Ben Franklin Parkway on July 2, 2005. Philadelphia is home to the world-renowned Philadelphia Boys Choir & Chorale, which has performed its music all over the world. Dr. Robert G. Hamilton, founder of the choir, is a notable native Philadelphian. The Philly Pops is another famous Philadelphia music group. The city has played a major role in the development and support of American rock music and rap music. Hip-hop/Rap artists such as The Roots, DJ Jazzy Jeff & The Fresh Prince, The Goats, Freeway, Schoolly D, Eve, and Lisa "Left Eye" Lopes hail from the city.
Cuisine
thumb|right|Pat's Steaks in the foreground and Geno's Steaks in the background
thumb|left|McGillin's Olde Ale House
The city is known for its hoagies, scrapple, soft pretzels, water ice, Irish potato candy, and Tastykake, and is home to the cheesesteak, developed by German and Italian immigrants. Philadelphia boasts a number of cheesesteak establishments; however, two locations in South Philadelphia are perhaps the most famous among tourists: Pat's King of Steaks and its across-the-street rival, Geno's Steaks.
Its high-end restaurants include Morimoto, Iron Chef Masaharu Morimoto's first restaurant, Vetri, famous on the East Coast for its take on Northern Italian cuisine, and Lacroix, a staple restaurant situated in Rittenhouse Square. Italian specialties have been supplemented by many new Vietnamese and other Asian restaurants, both budget and high-end.
McGillin's Olde Ale House, located on Drury Street in Center City, is the oldest continuously operated tavern in the city.
Philadelphia is also home to a landmark eatery founded in 1892, the Reading Terminal Market. The enclosed public market hosts over a hundred merchants offering Pennsylvania Dutch specialties, artisan cheese and meat, locally grown groceries, and specialty and ethnic foods.
Marijuana
Philadelphia has decriminalized possession of small amounts of marijuana, reducing penalties for possession and public use to minor fines and community service. The move made Philadelphia the largest city in the United States to decriminalize marijuana.
Sports
thumb|Citizens Bank Park, home of the Phillies
Philadelphia's professional sports teams date at least to the 1860 founding of baseball's Athletics. The city is one of 12 U.S. cities with teams in all four major sports leagues: the Philadelphia Phillies in the National League of Major League Baseball, the Philadelphia Eagles of the National Football League, the Philadelphia Flyers of the National Hockey League, and the Philadelphia 76ers of the National Basketball Association.
The Philadelphia metro area is also home to the Philadelphia Union of Major League Soccer. The Union play their home games at Talen Energy Stadium, a soccer-specific stadium in Chester, Pennsylvania. The team began play in MLS in 2010, after Philadelphia beat several other cities in competition for the rights to an MLS expansion franchise.
The city's professional teams went without a championship from 1983, when the 76ers won the NBA Championship, until 2008, when the Phillies won the World Series. In 2004, ESPN ranked Philadelphia second on its list of The Fifteen Most Tortured Sports Cities. The drought was sometimes attributed in jest to the "Curse of Billy Penn." Philadelphia's sports fans have been referred to as the "Meanest Fans in America".
thumb|left|The Flyers play at the Wells Fargo Center
Major-sport professional sports teams that originated in Philadelphia but ultimately moved to other cities include the Golden State Warriors basketball team and the Oakland Athletics baseball team.
Philadelphia is also the home city of the Philadelphia Spinners, a professional ultimate team in Major League Ultimate. They were one of the original eight teams of the American Ultimate Disc League, which began in April 2012; they played at Franklin Field and won the inaugural AUDL championship. The Spinners now play in the newer MLU at various stadiums throughout the city and its southern suburbs.
Rowing has been popular in Philadelphia since the 18th century. Boathouse Row is a symbol of Philadelphia's rich rowing history, and each Big Five member has its own boathouse. Philadelphia hosts numerous local and collegiate rowing clubs and competitions, including the annual Dad Vail Regatta, the largest intercollegiate rowing event in the U.S., the Stotesbury Cup Regatta, and the Head of the Schuylkill Regatta, all of which are held on the Schuylkill River. The regattas are hosted and organized by the Schuylkill Navy, an association of area rowing clubs that has produced numerous Olympic rowers.
thumb|right|Historic Boathouse Row at night on the Schuylkill, an enduring symbol of Philadelphia's rich rowing history.
Philadelphia is home to professional, semi-professional and elite amateur teams in cricket, rugby league (Philadelphia Fight), rugby union and other sports. Major sporting events in the city include the Penn Relays, Philadelphia Marathon, Broad Street Run, and the Philadelphia International Championship bicycle race. The Collegiate Rugby Championship is played every June at Talen Energy Stadium; the CRC is broadcast live on NBC and regularly draws attendances of 18,000.
Philadelphia is home to the Philadelphia Big 5, a group of five Division I college basketball programs. The Big 5 are Saint Joseph's University, University of Pennsylvania, La Salle University, Temple University, and Villanova University. The sixth NCAA Division I school in Philadelphia is Drexel University. At least one of the teams is competitive nearly every year and at least one team has made the NCAA tournament for the past four decades.
Club | League | Sport | Venue | Attendance | Founded | Championships
Philadelphia Eagles | NFL | American Football | Lincoln Financial Field | 69,144 | 1933 | 1948, 1949, 1960
Philadelphia Phillies | MLB | Baseball | Citizens Bank Park | 29,924 | 1883 | 1980, 2008
Philadelphia Flyers | NHL | Ice Hockey | Wells Fargo Center | 19,786 | 1967 | 1973–74, 1974–75
Philadelphia Union | MLS | Soccer | Talen Energy Stadium | 18,053 | 2010 | none
Philadelphia 76ers | NBA | Basketball | Wells Fargo Center | 13,869 | 1963 | 1966–67, 1982–83
Philadelphia Soul | AFL | Arena Football | Wells Fargo Center | 9,000 | 2004 | 2008, 2016
Olympic bidding
Philadelphia has bid for the Olympic Games four times, in 1920, 1948, 1952, and 1956, losing each bid, and has withdrawn from consideration another three times, for the 2004, 2016, and 2024 games.
On April 22, 2013, Mayor Michael Nutter's office declared Philadelphia's interest in bidding for the 2024 Games. The city had expressed interest in hosting the 2016 Games but lost out to Chicago as the USOC's bid city. On May 28, 2014, the city withdrew from consideration for the 2024 Summer Olympic Games in a letter to the USOC, citing "timing" as a major factor in the decision, while reiterating a continued interest in pursuing the games in the future.
Parks
thumb|left|Fairmount Park, ca. 1900
The city's parkland includes municipal, state, and federal parks within the city limits. Philadelphia's largest park is Fairmount Park, which includes the Philadelphia Zoo and adjoins Wissahickon Valley Park. Fairmount Park, when combined with Wissahickon Valley Park, is one of the largest contiguous urban park areas in the United States. The two parks, along with the historic Colonial Revival, Georgian, and Federal architecture contained in them, have been listed as one entity on the National Register of Historic Places since 1972.
Law and government
thumb|City Hall, Philadelphia's tallest building until 1987.
From a governmental perspective, Philadelphia County is a legal nullity: all county functions were assumed by the city in 1952, and the city has been coterminous with the county since 1854.
Philadelphia's 1952 Home Rule Charter was written by the City Charter Commission, which was created by the Pennsylvania General Assembly in an Act of April 21, 1949, and a city ordinance of June 15, 1949. The existing City Council received a proposed draft on February 14, 1951, and the electors approved it in an election held April 17, 1951. The first elections under the new Home Rule Charter were held in November 1951, and the newly elected officials took office in January 1952.
The city uses the strong-mayor version of the mayor-council form of government, headed by one mayor in whom executive authority is vested. Elected at-large, the mayor is limited to two consecutive four-year terms under the city's home rule charter but can run for the position again after an intervening term. The current mayor is Jim Kenney, who succeeded Michael Nutter after Nutter served two terms ending in January 2016. Kenney, like every Philadelphia mayor since 1952, is a member of the Democratic Party, which tends to dominate local politics so thoroughly that the Democratic mayoral primary is often more widely covered than the general election. The legislative branch, the Philadelphia City Council, consists of ten council members representing individual districts and seven members elected at large. Democrats currently hold 14 seats, with Republicans holding the two at-large seats allotted to the minority party as well as the Northeast-based Tenth District. The current council president is Darrell Clarke.
Courts
The Philadelphia County Court of Common Pleas (First Judicial District) is the trial court of general jurisdiction for Philadelphia, hearing felony-level criminal cases and civil suits above the minimum jurisdictional limit of $7,000 (except small claims cases valued between $7,000 and $12,000 and landlord-tenant issues, which are heard in the Municipal Court) under its original jurisdiction; it also has appellate jurisdiction over rulings from the Municipal and Traffic Courts and over decisions of certain Pennsylvania state agencies (e.g., the Pennsylvania Liquor Control Board). It has 90 legally trained judges elected by the voters and is funded and operated largely by city resources and employees. The current District Attorney is Seth Williams, a Democrat. The last Republican to hold the office was Ron Castille, who left in 1991 and is currently the Chief Justice of the Pennsylvania Supreme Court.
The Philadelphia Municipal Court handles matters of limited jurisdiction as well as landlord-tenant disputes, appeals from traffic court, preliminary hearings for felony-level offenses, and misdemeanor criminal trials. It has 25 legally trained judges elected by the voters.
Philadelphia Traffic Court is a court of special jurisdiction that hears violations of traffic laws. It has seven judges elected by the voters. As with magisterial district judges, the judges need not be lawyers, but must complete the certifying course and pass the qualifying examination administered by the Minor Judiciary Education Board.
Pennsylvania's three appellate courts also have sittings in Philadelphia. The Supreme Court of Pennsylvania, the court of last resort in the state, regularly hears arguments in Philadelphia City Hall. Also, the Superior Court of Pennsylvania and the Commonwealth Court of Pennsylvania sit in Philadelphia several times a year. Judges for these courts are elected at large. Each court has a prothonotary's office in Philadelphia as well.
Additionally, Philadelphia is home to the federal United States District Court for the Eastern District of Pennsylvania and the Court of Appeals for the Third Circuit, both of which are housed in the James A. Byrne United States Courthouse.
Politics
Philadelphia County vote by party in presidential elections
Year | Republican | Democratic
2016 | 15.3% (108,748) | 82.3% (584,020)
2012 | 14.0% (96,467) | 85.2% (588,806)
2008 | 16.3% (117,221) | 83.0% (595,980)
2004 | 19.3% (130,099) | 80.4% (542,205)
2000 | 18.0% (100,959) | 80.0% (449,182)
1996 | 16.0% (85,345) | 77.4% (412,988)
1992 | 20.9% (133,328) | 68.2% (434,904)
1988 | 32.5% (219,053) | 66.6% (449,566)
1984 | 34.6% (267,178) | 64.9% (501,369)
1980 | 34.0% (244,108) | 58.7% (421,253)
1976 | 32.0% (239,000) | 66.3% (494,579)
1972 | 43.9% (344,096) | 55.1% (431,736)
1968 | 29.9% (254,153) | 61.8% (525,768)
1964 | 26.2% (239,733) | 73.4% (670,645)
1960 | 31.8% (291,000) | 68.0% (622,544)
As of December 31, 2009, there were 1,057,038 registered voters in Philadelphia, constituting 68.3% of the total population of 1,547,901 (as of December 2, 2009).
Democratic: 829,873 (78.5%)
Republican: 134,216 (12.7%)
Libertarian: 2,631 (0.2%)
Other parties and no party: 90,318 (8.5%)
From the American Civil War until the mid-20th century, Philadelphia was a bastion of the Republican Party, which arose from the staunch pro-Northern views of Philadelphia residents during and after the war (Philadelphia was chosen as the host city for the first Republican National Convention in 1856). After the Great Depression, Democratic registrations increased, but the city was not carried by Democrat Franklin D. Roosevelt in his landslide victory of 1932 (in which Pennsylvania was one of the few states won by Republican Herbert Hoover). Four years later, voter turnout surged and the city finally flipped to the Democrats: Roosevelt carried Philadelphia with over 60% of the vote in 1936. The city has remained loyally Democratic in every presidential election since and is now one of the most Democratic cities in the country; in 2008, Democrat Barack Obama drew 83% of the city's vote, and his margin was even greater in 2012, when he captured 85% of the vote.
Philadelphia was once divided among six congressional districts. However, as a result of the city's declining population, it now has only four: the 1st district, represented by Bob Brady; the 2nd, represented by Chaka Fattah; the 8th, represented by Mike Fitzpatrick; and the 13th, represented by Brendan Boyle. All but Fitzpatrick are Democrats. Although they are usually swamped by Democrats in city, state, and national elections, Republicans still have some support in the area, primarily in the Northeast. A Republican represented a significant portion of Philadelphia in the House as late as 1983, and Sam Katz ran competitive mayoral races as the Republican nominee in both 1999 and 2003.
Pennsylvania's longest-serving Senator, Arlen Specter, was from Philadelphia; he served as a Republican from 1981 and as a Democrat from 2009, losing that party's primary in 2010 and leaving office in January 2011. He was also the city's District Attorney from 1966 to 1974.
Philadelphia has hosted various national conventions, including in 1848 (Whig), 1856 (Republican), 1872 (Republican), 1900 (Republican), 1936 (Democratic), 1940 (Republican), 1948 (Republican), 1948 (Progressive), 2000 (Republican), and 2016 (Democratic). Philadelphia has been home to one Vice President, George M. Dallas, and one Civil War general who won his party's nomination for president but lost in the general election: George B. McClellan.
Crime
thumb|right|Philadelphia Police Department Headquarters known as "The Roundhouse"
Like many American cities, Philadelphia saw a gradual yet pronounced rise in crime in the years following World War II. There were 525 murders in 1990, a rate of 31.5 per 100,000, and an average of about 600 murders a year for most of the 1990s. The murder count dropped in 2002 to 288, then rose four years later to 406 in 2006 and 392 in 2007. A few years later, Philadelphia began to see a rapid drop in homicides and violent crime. In 2013, there were 246 murders, a decrease of over 25% from the previous year and of over 44% since 2007. In 2014, there were 248 homicides, a slight increase from 2013. In 2015, according to annual homicide statistics and crime maps provided on the Philadelphia Police Department's website, there were 280 murders in the city. The same departmental site documents that the number of homicides fell slightly (1.07%) the following year, to 277 murders in 2016.
In 2006, Philadelphia's homicide rate of 27.7 per 100,000 people was the highest of the country's 10 most populous cities. In 2012, Philadelphia had the fourth-highest homicide rate among the country's most populous cities. And in 2014, the rate dropped to 16.0 homicides per 100,000 residents placing Philadelphia as the sixth-highest city in the country.
In 2004, there were 7,513.5 crimes per 200,000 people in Philadelphia. Among its neighboring Mid-Atlantic cities in the same population group, Baltimore and Washington, D.C. were ranked second- and third-most dangerous cities in the United States, respectively. Camden, New Jersey, a city across the Delaware River from Philadelphia, was ranked as the most dangerous city in the United States.
The number of shootings in the city has declined significantly in the last 10 years. Shooting incidents peaked in 2006, when 1,857 shootings were recorded; that number has since dropped 44 percent, to 1,047 shootings in 2014. Similarly, major crimes in the city have decreased gradually since their peak in 2006, when 85,498 major crimes were reported. In the past three years, the number of reported major crimes fell 11 percent, to a total of 68,815. Violent crimes, which include homicide, rape, aggravated assault, and robbery, decreased 14 percent in the past three years, with 15,771 reported occurrences in 2014. Based on the rate of violent crimes per 1,000 residents in American cities with 25,000 people or more, Philadelphia was ranked the 54th most dangerous city in 2015.
Education
Primary and secondary education
thumb|William Penn Charter School, established in 1689, is the oldest Quaker school in the nation
Education in Philadelphia is provided by many private and public institutions. The School District of Philadelphia runs the city's public schools. The district is the eighth-largest school district in the United States, with 142,266 students in 218 public schools and 86 charter schools.
The city's K-12 enrollment in district-run schools dropped from 156,211 students in 2010 to 130,104 students in 2015. During the same period, enrollment in charter schools increased from 33,995 students in 2010 to 62,358 students in 2015. This consistent drop in district enrollment led the city to close 24 of its public schools in 2013. During the 2014 school year, the city spent an average of $12,570 per pupil, below the average among comparable urban school districts.
Graduation rates among district-run schools, meanwhile, have steadily increased in the last ten years. In 2005, Philadelphia had a district graduation rate of 52%. This number increased to 65% in 2014, still below the national and state averages. Scores on the state's standardized test, the Pennsylvania System of School Assessment (PSSA), trended upward from 2005 to 2011 but have decreased since. In 2005, district-run schools scored an average of 37.4% on math and 35.5% on reading. The city's schools reached their peak scores in 2011, with 59.0% on math and 52.3% on reading, before dropping significantly in 2014 to 45.2% on math and 42.0% on reading.
Of the city's public high schools, including charter schools, only four performed above the national average on the SAT (1497) in 2014: Masterman, Central, Girard, and MaST Community Charter School. All other district-run schools were below average.
Higher education
thumb|right|Quadrangle at the University of Pennsylvania in the winter.
thumb|left|Perelman School of Medicine, the oldest medical school in the United States
Philadelphia has the third-largest student concentration on the East Coast, with over 120,000 college and university students enrolled within the city and nearly 300,000 in the metropolitan area. There are over 80 colleges, universities, trade, and specialty schools in the Philadelphia region. One of the founding members of the Association of American Universities is in the city: the University of Pennsylvania, an Ivy League institution with claims to being the oldest university in the country.
The city's largest school by number of students is Temple University, followed by Drexel University. Along with the University of Pennsylvania, Temple University and Drexel University make up the city's major research universities. The city is also home to five schools of medicine: Drexel University College of Medicine, the Perelman School of Medicine at the University of Pennsylvania, the Philadelphia College of Osteopathic Medicine, Temple University School of Medicine, and Thomas Jefferson University. Hospitals, universities, and higher education research institutions in Philadelphia's four congressional districts received more than $252 million in National Institutes of Health grants in 2015.
Other institutions of higher learning within the city's borders include:
Saint Joseph's University
La Salle University
Peirce College
University of the Sciences in Philadelphia
The University of the Arts
Pennsylvania Academy of the Fine Arts
Curtis Institute of Music
Thomas Jefferson University
Moore College of Art and Design
The Art Institute of Philadelphia
The Restaurant School at Walnut Hill College
Philadelphia University
Chestnut Hill College
Holy Family University
Community College of Philadelphia
Messiah College Philadelphia Campus
Media
thumb|The Inquirer Building on North Broad Street
Newspapers
Philadelphia's two major daily newspapers are The Philadelphia Inquirer, which is the eighteenth-largest newspaper and third-oldest surviving daily newspaper in the country, and the Philadelphia Daily News. Both newspapers were purchased in 2006 from The McClatchy Company (which had bought out Knight Ridder) by Philadelphia Media Holdings, which operated them until the organization declared bankruptcy in 2010. After two years of financial struggle, the two newspapers were sold to Interstate General Media in 2012. The two newspapers have a combined circulation of about 500,000 readers.
The city also has a number of smaller newspapers and magazines in circulation, such as the Philadelphia Tribune, which serves the African-American community; Philadelphia, a monthly regional magazine; Philadelphia Weekly, an alternative weekly newspaper; Philadelphia City Paper, another weekly newspaper; Philadelphia Gay News, which serves the LGBT community; The Jewish Exponent, a weekly newspaper serving the Jewish community; Philadelphia Metro, a free daily newspaper; and Al Día, a weekly newspaper serving the Latino community.
In addition, there are several student-run newspapers including The Daily Pennsylvanian, The Temple News, and The Triangle.
Radio and television
The first experimental radio license was issued in Philadelphia in August 1912 to St. Joseph's College. The first commercial broadcasting radio stations appeared in 1922: first WIP, then owned by Gimbel's department store, on March 17, followed the same year by WFIL, WOO, WCAU and WDAS. The highest-rated stations in Philadelphia include soft rock WBEB, KYW Newsradio, and urban adult contemporary WDAS-FM. Philadelphia is served by three major non-commercial public radio stations, WHYY-FM (NPR), WRTI (jazz, classical), and WXPN-FM (adult alternative music), as well as several smaller stations.
Rock stations WMMR and WYSP had historically been intense rivals. However, in 2011, WYSP switched to sports talk as WIP-FM, which broadcasts all Philadelphia Eagles games. WMMR's The Preston and Steve Show has been the area's top-rated morning show since Howard Stern left broadcast radio for satellite-based Sirius Radio.
Four urban stations (WUSL ("Power 99"), WPHI ("Hot 107.9"), WDAS and WRNB ("Old School 100.3")) are popular choices on the FM dial. WBEB is the city's Adult Contemporary station, while WZMP ("Wired 96.5") is the major Rhythmic Top 40 station.
In the 1930s, the experimental station W3XE, owned by Philco, became the first television station in Philadelphia; it became NBC's first affiliate in 1939 and later became KYW-TV (CBS). WCAU-TV, WPVI-TV, WHYY-TV, WPHL-TV, and WTXF-TV had all been founded by the 1970s. In 1952, WFIL (now WPVI) premiered the television show Bandstand, which later became the nationally broadcast American Bandstand hosted by Dick Clark. Today, as in many large metropolitan areas, each of the commercial networks has an affiliate, and call letters have been replaced by corporate IDs: CBS3, 6ABC, NBC10, Fox29, Telefutura28, Telemundo62, Univision65, plus My PHL 17 and CW Philly 57. The region is also served by public broadcasting stations WYBE-TV (Philadelphia), WHYY-TV (Wilmington, Delaware and Philadelphia), WLVT-TV (Lehigh Valley), and NJTV (New Jersey). In September 2007, Philadelphia approved a public-access cable television channel.
Until September 2014, Philadelphia was the only media market in the United States with owned-and-operated stations of all five English-language major broadcast networks (NBC – WCAU, CBS – KYW-TV, ABC – WPVI-TV, Fox – WTXF-TV and The CW – WPSG); three of the major Spanish-language networks (Univision, UniMas and Telemundo) also have O&Os serving the market (respectively, WUVP-DT, WFPA-CD and WWSI).
The city is also the nation's fourth-largest consumer media market, as ranked by Nielsen Media Research, with over 2.9 million TV homes.
Infrastructure
Transportation
thumb|30th Street Station, with Cira Centre in the background
Philadelphia is served by the Southeastern Pennsylvania Transportation Authority (SEPTA), which operates buses, trains, rapid transit, trolleys, and trackless trolleys throughout Philadelphia, the four Pennsylvania suburban counties of Bucks, Chester, Delaware, and Montgomery, in addition to service to Mercer County, New Jersey and New Castle County, Delaware. The city's subway, opened in 1907, is the third-oldest in America.
thumb|left|Market–Frankford Line train departing 52nd Street station.
In 1981, large sections of the SEPTA Regional Rail service to the far suburbs of Philadelphia were discontinued due to lack of funding. Several projects have been proposed to extend rail service back to these areas, but lack of funding has again been the chief obstacle to implementation. These projects include the proposed Schuylkill Valley Metro to Wyomissing, PA, and an extension of the Media/Elwyn line back to Wawa, PA. SEPTA's Airport Regional Rail Line offers direct service to Philadelphia International Airport.
Philadelphia's 30th Street Station is a major railroad station on Amtrak's Northeast Corridor, which offers access to Amtrak, SEPTA, and NJ Transit lines.
The PATCO Speedline provides rapid transit service to Camden, Collingswood, Westmont, Haddonfield, Woodcrest (Cherry Hill), Ashland (Voorhees), and Lindenwold, New Jersey, from stations on Locust Street between 16th and 15th, 13th and 12th, and 10th and 9th Streets, and on Market Street at 8th Street.
Airports
Two airports serve Philadelphia: the Philadelphia International Airport (PHL), straddling the southern boundary of the city, and the Northeast Philadelphia Airport (PNE), a general aviation reliever airport in Northeast Philadelphia. Philadelphia International Airport provides scheduled domestic and international air service, while Northeast Philadelphia Airport serves general and corporate aviation. In 2013, Philadelphia International Airport was the 15th busiest airport in the world measured by traffic movements (i.e. takeoffs and landings). It is also the second largest hub and primary international hub for American Airlines.
Roads
William Penn initially planned a Philadelphia with numbered streets running north-south and tree-named streets running east-west, with the two main streets, Broad Street and High Street, converging at Centre Square. The plans have since expanded to include major highways that span other major sections of Philadelphia.
thumb|left|Aerial view showing the major highways circumscribing Philadelphia
Interstate 95 runs through the city along the Delaware River as a main north-south artery known as the Delaware Expressway. The city is also served by the Schuylkill Expressway, a portion of Interstate 76 that runs along the Schuylkill River. It meets the Pennsylvania Turnpike at King of Prussia, Pennsylvania, providing access to Harrisburg, Pennsylvania and points west. Interstate 676, the Vine Street Expressway, was completed in 1991 after years of planning. A link between I-95 and I-76, it runs below street level through Center City, connecting to the Ben Franklin Bridge at its eastern end.
Roosevelt Boulevard and the Roosevelt Expressway (U.S. 1) connect Northeast Philadelphia with Center City. Woodhaven Road (Route 63), built in 1966, and Cottman Avenue (Route 73) serve the neighborhoods of Northeast Philadelphia, running between Interstate 95 and the Roosevelt Boulevard (U.S. 1). The Fort Washington Expressway (Route 309) extends north from the city's northern border, serving Montgomery County and Bucks County. U.S. 30, extending east-west from West Philadelphia to Lancaster, is known as Lancaster Avenue throughout most of the city and through the adjacent Main Line suburbs.
thumb|The Ben Franklin Bridge, viewed at night from Center City toward Camden, New Jersey
Interstate 476, commonly nicknamed the "Blue Route" through Delaware County, bypasses the city to the west, serving the city's western suburbs, as well as providing a link to Allentown and points north. Similarly, Interstate 276, the Pennsylvania Turnpike's Delaware River Extension, acts as a bypass and commuter route to the north of the city as well as a link to the New Jersey Turnpike to New York.
However, other planned freeways have been canceled, such as an Interstate 695 running southwest from downtown; two freeways connecting Interstate 95 to Interstate 76 that would have replaced Girard Avenue and South Street; and a freeway upgrade of Roosevelt Boulevard.
The Delaware River Port Authority operates four bridges in the Philadelphia area across the Delaware River to New Jersey: the Walt Whitman Bridge (I-76), the Benjamin Franklin Bridge (I-676 and US 30), the Betsy Ross Bridge (Route 90), and the Commodore Barry Bridge (US 322). The Tacony-Palmyra Bridge connects PA Route 73 in the Tacony section of Northeast Philadelphia with New Jersey's Route 73 in Palmyra, Camden County, and is maintained by the Burlington County Bridge Commission.
Bus service
Philadelphia is also a major hub for Greyhound Lines, which operates 24-hour service to points east of the Mississippi River. Most of Greyhound's services in Philadelphia operate to/from the Philadelphia Greyhound Terminal, located at 1001 Filbert Street in Center City Philadelphia. In 2006, the Philadelphia Greyhound Terminal was the second busiest Greyhound terminal in the United States, after the Port Authority Bus Terminal in New York. Besides Greyhound, six other bus operators provide service to the Center City Greyhound terminal: Bieber Tourways, Capitol Trailways, Martz Trailways, Peter Pan Bus Lines, Susquehanna Trailways, and the bus division for New Jersey Transit. Other services include Megabus and Bolt Bus.
Rail
thumb|left|Suburban Station with art deco architecture
Since the early days of rail transport in the United States, Philadelphia has served as a hub for several major rail companies, particularly the Pennsylvania Railroad and the Reading Railroad. The Pennsylvania Railroad first operated Broad Street Station, then 30th Street Station and Suburban Station, while the Reading Railroad operated out of Reading Terminal, now part of the Pennsylvania Convention Center. The two companies also operated competing commuter rail systems in the area, known collectively as the Regional Rail system. The two systems, for the most part still intact but now connected, operate today as a single system under the control of SEPTA, the regional transit authority. Additionally, the PATCO Speedline subway system and NJ Transit's Atlantic City Line operate successor services to southern New Jersey.
Philadelphia, once home to more than 4,000 trolleys on 65 lines, is one of the few North American cities to maintain streetcar lines. Today, SEPTA operates five "subway-surface" trolley routes that run on street-level tracks in West Philadelphia and in subway tunnels in Center City. SEPTA has also reintroduced trolley service on the Girard Avenue Line, Route 15.
Today, Philadelphia is a regional hub of the federally owned Amtrak system, with 30th Street Station being a primary stop on the Washington-Boston Northeast Corridor and the Keystone Corridor to Harrisburg and Pittsburgh. 30th Street also serves as a major station for services via the Pennsylvania Railroad's former Pennsylvania Main Line to Chicago. 30th Street is Amtrak's third-busiest station in numbers of passengers as of fiscal year 2013.
Walkability
A 2015 study by Walk Score ranked Philadelphia the fourth most walkable major city in the United States.
Utilities
thumb|Fairmount Water Works, Philadelphia's second municipal waterworks.
Historically, Philadelphia sourced its water from the Fairmount Water Works, the nation's first major urban water supply system. The Water Works was decommissioned in 1909 as the city transitioned to modern sand filtration methods. Today, the Philadelphia Water Department (PWD) provides drinking water, wastewater collection, and stormwater services for Philadelphia as well as surrounding counties. PWD draws about 57 percent of its drinking water from the Delaware River and the balance from the Schuylkill River. The public wastewater system consists of three water pollution control plants, 21 pumping stations, and about 3,657 miles of sewers. A 2007 investigation by the Environmental Protection Agency found elevated levels of Iodine-131 in the city's potable water, and in 2012 EPA readings showed that the city had the highest levels of I-131 in the nation. The city campaigned against an Associated Press report that the high levels of I-131 were the result of local gas drilling in the Upper Delaware River.
PECO Energy Company, founded as the Philadelphia Electric Company in 1881, provides electricity to over 1.6 million customers in southeastern Pennsylvania. The company has over 500 power substations and 29,000 miles of distribution and transmission lines in its service area, making it the largest combination utility in the state.
Philadelphia Gas Works (PGW), overseen by the Pennsylvania Public Utility Commission, is the nation's largest municipally owned natural gas utility. It serves over 500,000 homes and businesses in the Philadelphia area. Founded in 1836, the company came under city ownership in 1987 and has been providing the majority of gas distributed within city limits. In 2014, the Philadelphia City Council refused to conduct hearings on a $1.86 billion sale of PGW, part of a two-year effort that was proposed by the mayor. The refusal led to the prospective buyer terminating its offer.
Southeastern Pennsylvania was assigned the 215 area code in 1947 when the North American Numbering Plan of the "Bell System" went into effect. The geographic area covered by the code was split nearly in half in 1994 when area code 610 was created, with the city and its northern suburbs retaining 215. Overlay area code 267 was added to the 215 service area in 1997, and 484 was added to the 610 area in 1999. A plan in 2001 to introduce a third overlay code to both service areas (area code 445 to 215, area code 835 to 610) was delayed and later rescinded.
An effort was approved in 2005 to provide low-cost, citywide Wi-Fi service. Wireless Philadelphia would have been the first municipal internet utility offering in a large US city, but the plan was abandoned in 2008 as EarthLink pushed back the completion date several times. Mayor Nutter's administration closed the project in 2009 after an attempt to revitalize it failed.
Notable people
Twin towns – Sister cities
thumb|Chinatown Gate at 10th and Arch, a symbol of Philadelphia's friendship with Tianjin.
Philadelphia has eight official sister cities, as designated by Citizen Diplomacy International – Philadelphia:
City | Country | Date
Florence | Italy | 1964
Tel Aviv | Israel | 1966
Toruń | Poland | 1976
Tianjin | China | 1980
Incheon | South Korea | 1984
Douala | Cameroon | 1986
Nizhny Novgorod | Russia | 1992
Frankfurt | Germany | 2015
Philadelphia also has three partnership cities or regions:
City | Country | Date
Kobe | Japan | 1986
Abruzzo | Italy | 1997
Aix-en-Provence | France | 1999
Philadelphia has dedicated landmarks to its sister cities. Dedicated in June 1976, the Sister Cities Plaza, located at 18th Street and the Benjamin Franklin Parkway, honors Philadelphia's relationships with Tel Aviv and Florence, which were its first sister cities. Another landmark, the Toruń Triangle, honoring the sister-city relationship with Toruń, Poland, was constructed in 1976 west of the United Way building at 18th Street and the Benjamin Franklin Parkway; the Triangle also contains the Copernicus monument. Renovations were made to Sister Cities Park in mid-2011, and on May 10, 2012 the park was reopened; it currently features an interactive fountain honoring Philadelphia's ten sister and friendship cities, a café and visitor's center, a children's play area, an outdoor garden, and a boat pond, as well as a pavilion built to environmentally friendly standards.
The Chinatown Gate, erected in 1984 and crafted by artisans of Tianjin, stands astride the intersection of 10th and Arch Streets as an elaborate and colorful symbol of the sister city relationship. The CDI of Philadelphia has participated in the U.S. Department of State's "Partners for Peace" project with Mosul, Iraq,IVC of Philadelphia Partners with Mosul, Iraq in Groundbreaking Program Retrieved January 26, 2011. as well as accepting visiting delegations from dozens of other countries.Inbound delegations visiting Philadelphia Retrieved January 26, 2011.
Image gallery
See also
Largest metropolitan areas in the Americas
List of companies based in the Philadelphia area
List of people from Philadelphia
National Register of Historic Places listings in Philadelphia
United States metropolitan areas
Notes
References
Further reading
Abigail Perkiss, Making Good Neighbors: Civil Rights, Liberalism, and Integration in Postwar Philadelphia. Ithaca, NY: Cornell University Press, 2014.
External links
City of Philadelphia government
Encyclopedia of Greater Philadelphia, Historical Encyclopedia in progress
Historic Philadelphia Photographs
Greater Philadelphia GeoHistory Network – historical maps and atlases of Philadelphia
philly.com – Local news
Visitor Site for Greater Philadelphia
Official Convention & Visitors Site for Philadelphia
Category:1682 establishments in Pennsylvania
Category:Cities in Pennsylvania
Category:Consolidated city-counties in the United States
Category:County seats in Pennsylvania
Category:Former capitals of the United States
Category:Planned cities in the United States
Category:Populated places established in 1682
Category:Populated places on the Schuylkill River
Category:Port cities and towns of the United States Atlantic coast
Category:Ukrainian communities in the United States
Seattle

Seattle is a seaport city on the west coast of the United States and the seat of King County, Washington. With an estimated 684,451 residents, Seattle is the largest city in both the state of Washington and the Pacific Northwest region of North America. In July 2013, it was the fastest-growing major city in the United States, and it remained in the top 5 in May 2015 with an annual growth rate of 2.1%. The city is situated on an isthmus between Puget Sound (an inlet of the Pacific Ocean) and Lake Washington, south of the Canada–United States border. A major gateway for trade with Asia, Seattle is the fourth-largest port in North America in terms of container handling.
The Seattle area was previously inhabited by Native Americans for at least 4,000 years before the first permanent European settlers. Arthur A. Denny and his group of travelers, subsequently known as the Denny Party, arrived from Illinois via Portland, Oregon, on the schooner Exact at Alki Point on November 13, 1851. The settlement was moved to the eastern shore of Elliott Bay and named "Seattle" in 1852, after Chief Si'ahl of the local Duwamish and Suquamish tribes.
Logging was Seattle's first major industry, but by the late-19th century, the city had become a commercial and shipbuilding center as a gateway to Alaska during the Klondike Gold Rush. Growth after World War II was partially due to the local Boeing company, which established Seattle as a center for aircraft manufacturing. The Seattle area developed as a technology center beginning in the 1980s, with companies like Microsoft becoming established in the region. In 1994, Internet retailer Amazon was founded in Seattle. The stream of new software, biotechnology, and Internet companies led to an economic revival, which increased the city's population by almost 50,000 between 1990 and 2000.
Seattle has a noteworthy musical history. From 1918 to 1951, nearly two dozen jazz nightclubs existed along Jackson Street, from the current Chinatown/International District, to the Central District. The jazz scene developed the early careers of Ray Charles, Quincy Jones, Ernestine Anderson, and others. Seattle is also the birthplace of rock musician Jimi Hendrix and the alternative rock subgenre grunge.
History
Founding
Archaeological excavations suggest that Native Americans have inhabited the Seattle area for at least 4,000 years. By the time the first European settlers arrived, the people (subsequently called the Duwamish tribe) occupied at least seventeen villages in the areas around Elliott Bay. (Publication date per "Native Art of the Northwest Coast: Collection Insight")
The first European to visit the Seattle area was George Vancouver, in May 1792 during his 1791–95 expedition to chart the Pacific Northwest.
In 1851, a large party led by Luther Collins settled on land at the mouth of the Duwamish River; they formally claimed it on September 14, 1851. Thirteen days later, members of the Collins Party on the way to their claim passed three scouts of the Denny Party. Members of the Denny Party claimed land on Alki Point on September 28, 1851. The rest of the Denny Party set sail from Portland, Oregon, and landed on Alki Point during a rainstorm on November 13, 1851.
Duwamps 1852–1853
thumb|The Battle of Seattle (1856)
After a difficult winter, most of the Denny Party relocated across Elliott Bay and claimed land a second time at the site of present-day Pioneer Square, naming this new settlement Duwamps. Charles Terry and John Low remained at the original landing location, reestablished their old land claim, and called it "New York", renaming it "New York Alki" in April 1853, after a Chinook word meaning, roughly, "by and by" or "someday". For the next few years, New York Alki and Duwamps competed for dominance, but in time Alki was abandoned and its residents moved across the bay to join the rest of the settlers.
David Swinson "Doc" Maynard, one of the founders of Duwamps, was the primary advocate to name the settlement after Chief Sealth ("Seattle") of the Duwamish and Suquamish tribes. Includes bibliography.
Incorporations
The name "Seattle" appears on official Washington Territory papers dated May 23, 1853, when the first plats for the village were filed. In 1855, nominal land settlements were established. On January 14, 1865, the Legislature of Territorial Washington incorporated the Town of Seattle with a board of trustees managing the city. The town of Seattle was disincorporated January 18, 1867, and remained a mere precinct of King County until late 1869, when a new petition was filed and the city was re-incorporated December 2, 1869, with a Mayor-council government. The corporate seal of the City of Seattle carries the date "1869" and a likeness of Chief Sealth in left profile.
Timber town
thumb|Seattle's first streetcar, at the corner of Occidental and Yesler, 1884. All of the buildings visible in this picture were destroyed by fire five years later.
Seattle has a history of boom-and-bust cycles, like many other cities near areas of extensive natural and mineral resources. Seattle has risen several times economically, then gone into precipitous decline, but it has typically used those periods to rebuild solid infrastructure. Author has granted blanket permission for material from that paper to be reused in Wikipedia. Now at s:Seattle: Booms and Busts.
The first such boom, covering the early years of the city, rode on the lumber industry. (During this period the road now known as Yesler Way won the nickname "Skid Road", supposedly after the timber skidding down the hill to Henry Yesler's sawmill. The later dereliction of the area may be a possible origin for the term which later entered the wider American lexicon as Skid Row.) Like much of the American West, Seattle saw numerous conflicts between labor and management, as well as ethnic tensions that culminated in the anti-Chinese riots of 1885–1886. Kinnear's article, originally appearing in the Seattle Post-Intelligencer, was later privately published in a small volume. This violence originated with unemployed whites who were determined to drive the Chinese from Seattle (anti-Chinese riots also occurred in Tacoma). In 1900, Asians were 4.2% of the population. Authorities declared martial law and federal troops arrived to put down the disorder.
Seattle achieved sufficient economic success that when the Great Seattle Fire of 1889 destroyed the central business district, a far grander city-center rapidly emerged in its place. Finance company Washington Mutual, for example, was founded in the immediate wake of the fire. However, the Panic of 1893 hit Seattle hard.
Gold Rush, World War I, and the Great Depression
thumb|right|The Alaska-Yukon-Pacific Exposition had just over 3.7 million visitors during its 138-day run
The second and most dramatic boom and bust resulted from the Klondike Gold Rush, which ended the depression that had begun with the Panic of 1893; in a short time, Seattle became a major transportation center. On July 14, 1897, the S.S. Portland docked with its famed "ton of gold", and Seattle became the main transport and supply point for the miners in Alaska and the Yukon. Few of those working men found lasting wealth, however; it was Seattle's business of clothing the miners and feeding them salmon that panned out in the long run. Along with Seattle, other cities like Everett, Tacoma, Port Townsend, Bremerton, and Olympia, all in the Puget Sound region, became competitors for exchange, rather than mother lodes for extraction, of precious metals. The boom lasted well into the early part of the 20th century and funded many new Seattle companies and products. In 1907, 19-year-old James E. Casey borrowed $100 from a friend and founded the American Messenger Company (later UPS). Other Seattle companies founded during this period include Nordstrom and Eddie Bauer. Seattle brought in the Olmsted Brothers landscape architecture firm to design a system of parks and boulevards.
thumb|upright|Pioneer Square in 1917 featuring the Smith Tower, the Seattle Hotel and to the left the Pioneer Building
The Gold Rush era culminated in the Alaska-Yukon-Pacific Exposition of 1909, which is largely responsible for the layout of today's University of Washington campus.
A shipbuilding boom in the early part of the 20th century became massive during World War I, making Seattle somewhat of a company town; the subsequent retrenchment led to the Seattle General Strike of 1919, the first general strike in the country. A 1912 city development plan by Virgil Bogue went largely unused. Seattle was mildly prosperous in the 1920s but was particularly hard hit in the Great Depression, experiencing some of the country's harshest labor strife in that era. Violence during the Maritime Strike of 1934 cost Seattle much of its maritime traffic, which was rerouted to the Port of Los Angeles.BOLA Architecture + Planning & Northwest Archaeological Associates, Inc., , Port of Seattle, April 5, 2005, pp. 12–13 (which is pp. 14–15 of the PDF). Retrieved July 25, 2008.
Seattle was also the home base of impresario Alexander Pantages who, starting in 1902, opened a number of theaters in the city exhibiting vaudeville acts and silent movies. His activities soon expanded, and the thrifty Greek went on to become one of America's greatest theater and movie tycoons. Between Pantages and his rival John Considine, Seattle was for a while the western United States' vaudeville mecca. B. Marcus Priteca, the Scottish-born and Seattle-based architect, built several theaters for Pantages, including some in Seattle. The theaters he built for Pantages in Seattle have been either demolished or converted to other uses, but many other theaters survive in other cities of the U.S., often retaining the Pantages name; Seattle's surviving Paramount Theatre, on which he collaborated, was not a Pantages theater.
Post-war years: aircraft and software
thumb|upright|left|Building the Seattle Center Monorail, 1961. Looking north up Fifth Avenue from Virginia Street.
War work again brought local prosperity during World War II, this time centered on Boeing aircraft. The war dispersed the city's numerous Japanese-American businessmen due to the Japanese American internment. After the war, the local economy dipped. It rose again with Boeing's growing dominance in the commercial airliner market. Seattle celebrated its restored prosperity and made a bid for world recognition with the Century 21 Exposition, the 1962 World's Fair. Another major local economic downturn was in the late 1960s and early 1970s, at a time when Boeing was heavily affected by the oil crises, loss of Government contracts, and costs and delays associated with the Boeing 747. Many people left the area to look for work elsewhere, and two local real estate agents put up a billboard reading "Will the last person leaving Seattle – Turn out the lights."The real estate agents were Bob McDonald and Jim Youngren, as cited at Don Duncan, Washington: the First One Hundred Years, 1889–1989 (Seattle: The Seattle Times, 1989), 108, 109–110; The Seattle Times, February 25, 1986, p. A3; Ronald R. Boyce, Seattle–Tacoma and the Southern Sound (Bozeman, Montana: Northwest Panorama Publishing, 1986), 99; Walt Crowley, Rites of Passage: A Memoir of the Sixties in Seattle (Seattle: University of Washington Press, 1995), 297.
Seattle remained the corporate headquarters of Boeing until 2001, when the company separated its headquarters from its major production facilities; the headquarters were moved to Chicago. The Seattle area is still home to Boeing's Renton narrow-body plant (where the 707, 720, 727, and 757 were assembled, and the 737 is assembled today) and Everett wide-body plant (assembly plant for the 747, 767, 777, and 787). The company's credit union for employees, BECU, remains based in the Seattle area, though it is now open to all residents of Washington.
As prosperity began to return in the 1980s, the city was stunned by the Wah Mee massacre in 1983, when 13 people were killed in an illegal gambling club in the International District, Seattle's Chinatown. Beginning with Microsoft's 1979 move from Albuquerque, New Mexico, to nearby Bellevue, Washington, Seattle and its suburbs became home to a number of technology companies including Amazon.com, RealNetworks, Nintendo of America, McCaw Cellular (now part of AT&T Mobility), VoiceStream (now T-Mobile), and biomedical corporations such as HeartStream (later purchased by Philips), Heart Technologies (later purchased by Boston Scientific), Physio-Control (later purchased by Medtronic), ZymoGenetics, ICOS (later purchased by Eli Lilly and Company) and Immunex (later purchased by Amgen). This success brought an influx of new residents with a population increase within city limits of almost 50,000 between 1990 and 2000, and saw Seattle's real estate become some of the most expensive in the country. In 1993, the movie Sleepless in Seattle brought the city further national attention. Many of the Seattle area's tech companies remained relatively strong, but the frenzied dot-com boom years ended in early 2001. Gomes considers the bubble to have ended with the peak of the March 2000 peak of NASDAQ. Ewalt refers to the advertising on Super Bowl XXXIV (January 2000) as "the dot-com bubble's Waterloo".
Seattle in this period attracted widespread attention as home to these many companies, but also by hosting the 1990 Goodwill Games and the APEC leaders conference in 1993, as well as through the worldwide popularity of grunge, a sound that had developed in Seattle's independent music scene. Another bid for worldwide attention—hosting the World Trade Organization Ministerial Conference of 1999—garnered visibility, but not in the way its sponsors desired, as related protest activity and police reactions to those protests overshadowed the conference itself. The city was further shaken by the Mardi Gras Riots in 2001, and then literally shaken the following day by the Nisqually earthquake.
Yet another boom began as the city emerged from the Great Recession. Amazon.com moved its headquarters from North Beacon Hill to South Lake Union and began a rapid expansion. For the five years beginning in 2010, Seattle gained an average of 14,511 residents per year, with the growth strongly skewed toward the center of the city, as unemployment dropped from roughly 9 percent to 3.6 percent. The city has found itself "bursting at the seams", with over 45,000 households spending more than half their income on housing and at least 2,800 people homeless, and with the country's sixth-worst rush hour traffic.
Geography
With a land area of 83.9 square miles (217.3 km²), Seattle is the northernmost city with at least 500,000 people in the United States, farther north than Canadian cities such as Toronto, Ottawa, and Montreal, at about the same latitude as Salzburg, Austria.
The topography of Seattle is hilly. The city lies on several hills, including Capitol Hill, First Hill, West Seattle, Beacon Hill, Magnolia, Denny Hill, and Queen Anne. The Kitsap and the Olympic peninsulas along with the Olympic mountains lie to the west of Puget Sound, while the Cascade Range and Lake Sammamish lie to the east of Lake Washington. The city has over of parkland.
Cityscape
Topography
thumb|upright=1.3|Treemap comparing the volume of earth moved by the megaprojects that transformed the landscape in and around Seattle. The Denny and other regrades moved a combined total of more than 35 million cubic yards of earth. Creating Harbor Island involved 7 million cubic yards, while the Ballard Locks project moved 1.6 million, twice that of the Alaskan Way Viaduct replacement tunnel. Straightening the Duwamish River and filling its tideflats was the largest single project, at nearly 22 million cubic yards.
Seattle is located between the saltwater Puget Sound (an arm of the Pacific Ocean) to the west and Lake Washington to the east. The city's chief harbor, Elliott Bay, is part of Puget Sound, which makes the city an oceanic port. To the west, beyond Puget Sound, are the Kitsap Peninsula and Olympic Mountains on the Olympic Peninsula; to the east, beyond Lake Washington and the Eastside suburbs, are Lake Sammamish and the Cascade Range. Lake Washington's waters flow to Puget Sound through the Lake Washington Ship Canal (consisting of two man-made canals, Lake Union, and the Hiram M. Chittenden Locks at Salmon Bay, ending in Shilshole Bay on Puget Sound).
thumb|alt=Aerial view of downtown Seattle.|Downtown Seattle is bounded by Elliott Bay (lower left), Broadway (from upper left to lower right), South Dearborn Street (lower right), and Denny Way (upper left, obscured by clouds).
The sea, rivers, forests, lakes, and fields surrounding Seattle were once rich enough to support one of the world's few sedentary hunter-gatherer societies. The surrounding area lends itself well to sailing, skiing, bicycling, camping, and hiking year-round.
The city itself is hilly, though not uniformly so. Like Rome, the city is said to lie on seven hills; the lists vary but typically include Capitol Hill, First Hill, West Seattle, Beacon Hill, Queen Anne, Magnolia, and the former Denny Hill. The Wallingford, Mount Baker, and Crown Hill neighborhoods are technically located on hills as well. Many of the hilliest areas are near the city center, with Capitol Hill, First Hill, and Beacon Hill collectively constituting something of a ridge along an isthmus between Elliott Bay and Lake Washington. The break in the ridge between First Hill and Beacon Hill is man-made, the result of two of the many regrading projects that reshaped the topography of the city center.Peterson, Lorin & Davenport, Noah C. (1950), Living in Seattle, Seattle: Seattle Public Schools, p. 44. The topography of the city center was also changed by the construction of a seawall and the artificial Harbor Island (completed 1909) at the mouth of the city's industrial Duwamish Waterway, the terminus of the Green River. The highest point within city limits is at High Point in West Seattle, which is roughly located near 35th Ave SW and SW Myrtle St. Other notable hills include Crown Hill, View Ridge/Wedgwood/Bryant, Maple Leaf, Phinney Ridge, Mt. Baker Ridge, and Highlands/Carkeek/Bitterlake.
thumb|alt=Aerial view of Lake Union on July 4, 2011, with numerous boats gathered for the July 4th fireworks show.|Boats gather on Lake Union in preparation for the July 4 fireworks show.
North of the city center, Lake Washington Ship Canal connects Puget Sound to Lake Washington. It incorporates four natural bodies of water: Lake Union, Salmon Bay, Portage Bay, and Union Bay.
Due to its location in the Pacific Ring of Fire, Seattle is in a major earthquake zone. On February 28, 2001, the magnitude 6.8 Nisqually earthquake did significant architectural damage, especially in the Pioneer Square area (built on reclaimed land, as are the Industrial District and part of the city center), but caused only one fatality.
Other strong quakes occurred on January 26, 1700 (estimated at 9 magnitude), December 14, 1872 (7.3 or 7.4), April 13, 1949 (7.1), and April 29, 1965 (6.5). The 1965 quake caused three deaths in Seattle directly and one more by heart failure. Although the Seattle Fault passes just south of the city center, neither it nor the Cascadia subduction zone has caused an earthquake since the city's founding. The Cascadia subduction zone poses the threat of an earthquake of magnitude 9.0 or greater, capable of seriously damaging the city and collapsing many buildings, especially in zones built on fill.
According to the United States Census Bureau, the city has a total area of , of which is land and , water (41.16% of the total area).
Climate
245px|thumb|alt=View of the downtown Seattle skyline, on the waterfront, with the Seattle Aquarium on the left and the Seattle Great Wheel on the right.|Downtown Seattle averages 71 completely sunny days a year, with most of those days occurring between May and September
Seattle's climate is classified as oceanic or temperate marine, with cool, wet winters and mild, relatively dry summers.
The city and environs are part of USDA hardiness zone 8b, with isolated coastal pockets falling under 9a.
Temperature extremes are moderated by the adjacent Puget Sound, greater Pacific Ocean, and Lake Washington. Thus extreme heat waves are rare in the Seattle area, as are very cold temperatures (below about 15 °F). The Seattle area is the cloudiest region of the United States, due in part to frequent storms and lows moving in from the adjacent Pacific Ocean. Despite its reputation for frequent rain, Seattle receives less total precipitation than many other US cities, such as Chicago or New York City. However, Seattle has many more days with measurable precipitation than those cities, much of it falling as very light drizzle.
In an average year, at least of precipitation falls on 150 days, more than nearly all U.S. cities east of the Rocky Mountains. It is cloudy 201 days out of the year and partly cloudy 93 days. Official weather and climatic data is collected at Seattle–Tacoma International Airport, located about south of downtown in the city of SeaTac, which is at a higher elevation, and records more cloudy days and fewer partly cloudy days per year.
Hot temperature extremes are enhanced by dry air that compresses and warms as it descends the western slopes of the Cascades, while cold temperatures arrive mainly via outflow from the Fraser Valley in British Columbia.
From 1981 to 2010, the average annual precipitation measured at Seattle–Tacoma International Airport was 37.49 inches (952 mm). Annual precipitation has ranged from in 1952 to in 1950; for water year (October 1 – September 30) precipitation, the range is in 1976–77 to in 1996–97. Due to local variations in microclimate, Seattle also receives significantly lower precipitation than some other locations west of the Cascades. Around to the west, the Hoh Rain Forest in Olympic National Park on the western flank of the Olympic Mountains receives an annual average precipitation of . Sixty miles (95 km) to the south of Seattle, the state capital Olympia, which is out of the Olympic Mountains' rain shadow, receives an annual average precipitation of . The city of Bremerton, about west of downtown Seattle on the other side of the Puget Sound, receives of precipitation annually.
Conversely, the northeastern portion of the Olympic Peninsula, which lies east of the Olympic Mountains, sits within the Olympic rain shadow and receives significantly less precipitation than the surrounding areas. Prevailing airflow from the west is forced upward when it meets the mountain range; as the air rises it cools, producing heavy precipitation on the mountains and their western slopes. Once the airflow reaches the leeward side of the mountains, it descends and warms, resulting in significantly drier air. Sequim, Washington, nicknamed "Sunny Sequim", lies approximately 40 miles northwest of downtown Seattle and receives just 16.51 inches of annual precipitation, comparable to that of Los Angeles. An area free of cloud cover can often be seen extending over Puget Sound to the north and east of Sequim. On average, Sequim observes 127 sunny days per year in addition to 127 days with partial cloud cover. Other areas influenced by the Olympic rain shadow include Port Angeles and Port Townsend, with the effect extending as far north as Victoria, British Columbia.
In November, Seattle averages more rainfall than any other U.S. city of more than 250,000 people; it also ranks highly in winter precipitation. Conversely, the city receives some of the lowest precipitation amounts of any large city from June to September. Seattle is one of the five rainiest major U.S. cities as measured by the number of days with precipitation, and it receives some of the lowest amounts of annual sunshine among major cities in the lower 48 states, along with some cities in the Northeast, Ohio and Michigan. Thunderstorms are rare, as the city reports thunder on just seven days per year. By comparison, Fort Myers, Florida, reports thunder on 93 days per year, Kansas City on 52, and New York City on 25.
Seattle experiences its heaviest rainfall during the months of November, December, and January, receiving roughly half of its annual rainfall (by volume) during this period. In late fall and early winter, atmospheric rivers (also known as "Pineapple Express" systems), strong frontal systems, and Pacific low pressure systems are common. Light rain and drizzle are the predominant forms of precipitation during the remainder of the year; on average, less than of rain falls in July and August combined, when rain is rare. On occasion, Seattle experiences somewhat more significant weather events. One such event occurred on December 2–4, 2007, when sustained hurricane-force winds and widespread heavy rainfall associated with a strong Pineapple Express event occurred in the greater Puget Sound area and the western parts of Washington and Oregon. Precipitation totals exceeded in some areas, with winds topping out at along coastal Oregon. It became the second wettest event in Seattle history when a little over of rain fell on Seattle in a 24-hour period. Lack of adaptation to the heavy rain contributed to five deaths and widespread flooding and damage.
Autumn, winter, and early spring are frequently characterized by rain. Winters are cool and wet; December, the coolest month, averages , with 28 annual days with lows that reach the freezing mark and 2.0 days where the temperature stays at or below freezing all day; the temperature rarely lowers to . Summers are sunny, dry, and warm; August, the warmest month, has high temperatures averaging , reaching on 3.1 days per year. In 2015 the city recorded 13 days over 90 °F. The hottest officially recorded temperature was on July 29, 2009;Because of its proximity to the sea, Seattle generally remains milder than its outlying suburbs. the coldest recorded temperature was on January 31, 1950; the record cold daily maximum is on January 14, 1950, while, conversely, the record warm daily minimum is the day the official record high was set. The average window for freezing temperatures is November 16 through March 10, allowing a growing season of 250 days.
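The roughly 250-day growing season quoted above follows directly from the freeze window given in the text; as a rough illustration only (not part of the original article, with the calendar year chosen arbitrarily), a short Python sketch confirms the date arithmetic:

    from datetime import date

    # Average freeze window given in the text: November 16 through March 10.
    # The growing season is the frost-free span from the average last freeze
    # (March 10) to the average first freeze of the following autumn (November 16).
    last_freeze = date(2021, 3, 10)    # year is arbitrary; only the span matters
    first_freeze = date(2021, 11, 16)

    growing_season_days = (first_freeze - last_freeze).days
    print(growing_season_days)  # 251, consistent with the ~250-day figure cited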
Seattle typically receives some snowfall on an annual basis but heavy snow is rare. Average annual snowfall, as measured at Sea-Tac Airport, is . Single calendar-day snowfall of six inches (15 cm) or greater has occurred on only 15 days since 1948, and only once since February 17, 1990, when of snow officially fell at Sea-Tac airport on January 18, 2012. This moderate snow event was officially the 12th snowiest calendar day at the airport since 1948 and snowiest since November 1985. Much of the city of Seattle proper received somewhat lesser snowfall accumulations. Locations to the south of Seattle received more, with Olympia and Chehalis receiving . Another moderate snow event occurred from December 12–25, 2008, when over one foot (30 cm) of snow fell and stuck on much of the roads over those two weeks, when temperatures remained below , causing widespread difficulties in a city not equipped for clearing snow. The largest documented snowstorm occurred from January 5–9, 1880, with snow drifting to in places at the end of the snow event. From January 31 to February 2, 1916, another heavy snow event occurred with of snow on the ground by the time the event was over. With official records dating to 1948, the largest single-day snowfall is on January 13, 1950. Seasonal snowfall has ranged from zero in 1991–92 to in 1968–69, with trace amounts having occurred as recently as 2009–10. The month of January 1950 was particularly severe, bringing of snow, the most of any month along with the aforementioned record cold.
The Puget Sound Convergence Zone is an important feature of Seattle's weather. In the convergence zone, air arriving from the north meets air flowing in from the south. Both streams of air originate over the Pacific Ocean; airflow is split by the Olympic Mountains to Seattle's west, then reunited to the east. When the air currents meet, they are forced upward, resulting in convection. Thunderstorms caused by this activity are usually weak and can occur north and south of town, but Seattle itself rarely receives more than occasional thunder and small hail showers. The Hanukkah Eve Wind Storm in December 2006 is an exception that brought heavy rain and winds gusting up to , an event that was not caused by the Puget Sound Convergence Zone and was widespread across the Pacific Northwest.
One of many exceptions to Seattle's reputation as a damp location occurs in El Niño years, when marine weather systems track as far south as California and little precipitation falls in the Puget Sound area. Since the region's water comes from mountain snow packs during the dry summer months, El Niño winters can not only produce substandard skiing but can result in water rationing and a shortage of hydroelectric power the following summer.
Demographics
Racial composition | 2010 | 1990 | 1970 | 1940
White | 69.5% | 75.3% | 87.4% | 96.1%
White, non-Hispanic | 66.3% | 73.7% | 85.3% (from 15% sample) | n/a
Black or African American | 7.9% | 10.1% | 7.1% | 1.0%
Hispanic or Latino (of any race) | 6.6% | 3.6% | 2.0% | n/a
Asian | 13.8% | 11.8% | 4.2% | 2.8%
Other race | 2.4% | n/a | n/a | n/a
Two or more races | 5.1% | n/a | n/a | n/a
According to the 2010 United States Census, Seattle had a population of 608,660 with a racial and ethnic composition as follows:Race, Hispanic or Latino, Age, and Housing Occupancy: 2010 more information 2010 Census Redistricting Data (Public Law 94-171) Summary File . Factfinder2census.gov. (2010). Retrieved December 30, 2011.
White: 69.5% (Non-Hispanic Whites: 66.3%)
Asian: 13.8% (4.1% Chinese, 2.6% Filipino, 2.2% Vietnamese, 1.3% Japanese, 1.1% Korean, 0.8% Indian, 0.3% Cambodian, 0.3% Laotian, 0.2% Pakistanis, 0.2% Indonesian, 0.2% Thai)
Black or African American: 7.9%
Hispanic or Latino (of any race): 6.6% (4.1% Mexican, 0.3% Puerto Rican, 0.2% Guatemalan, 0.2% Salvadoran, 0.2% Cuban)
American Indian and Alaska Native: 0.8%
Native Hawaiian and Other Pacific Islander: 0.4%
Other race: 2.4%
Two or more races: 5.1%
Seattle's population historically has been predominantly white. The 2010 census showed that Seattle was one of the whitest big cities in the country, although its proportion of white residents has been gradually declining. In 1960, whites comprised 91.6% of the city's population, while in 2010 they comprised 69.5%. According to the 2006–2008 American Community Survey, approximately 78.9% of residents over the age of five spoke only English at home. Those who spoke Asian languages other than Indo-European languages made up 10.2% of the population, Spanish was spoken by 4.5% of the population, speakers of other Indo-European languages made up 3.9%, and speakers of other languages made up 2.5%.
Seattle's foreign-born population grew 40% between the 1990 and 2000 censuses. The Chinese population in the Seattle area has origins in mainland China, Hong Kong, Southeast Asia, and Taiwan. The earliest Chinese-Americans that came in the late 19th and early 20th centuries were almost entirely from Guangdong Province. The Seattle area is also home to a large Vietnamese population of more than 55,000 residents, as well as over 30,000 Somali immigrants. The Seattle-Tacoma area is also home to one of the largest Cambodian communities in the United States, numbering about 19,000 Cambodian Americans, and one of the largest Samoan communities in the mainland U.S., with over 15,000 people having Samoan ancestry. Additionally, the Seattle area had the highest percentage of self-identified mixed-race people of any large metropolitan area in the United States, according to the 2000 United States Census Bureau. According to a 2012 HistoryLink study, Seattle's 98118 ZIP code (in the Columbia City neighborhood) was one of the most diverse ZIP Code Tabulation Areas in the United States.
In 1999, the median income of a city household was $45,736, and the median income for a family was $62,195. Males had a median income of $40,929 versus $35,134 for females. The per capita income for the city was $30,306. 11.8% of the population and 6.9% of families were below the poverty line. Of people living in poverty, 13.8% were under the age of 18 and 10.2% were 65 or older.
It is estimated that King County has 8,000 homeless people on any given night, and many of those live in Seattle. In September 2005, King County adopted a "Ten-Year Plan to End Homelessness", one of the near-term results of which is a shift of funding from homeless shelter beds to permanent housing.
In recent years, the city has experienced steady population growth, and has been faced with the issue of accommodating more residents. In 2006, after growing by 4,000 citizens per year for the previous 16 years, regional planners expected the population of Seattle to grow by 200,000 people by 2040. However, former mayor Greg Nickels supported plans that would increase the population by 60%, or 350,000 people, by 2040 and worked on ways to accommodate this growth while keeping Seattle's single-family housing zoning laws. The Seattle City Council later voted to relax height limits on buildings in the greater part of Downtown, partly with the aim to increase residential density in the city center. As a sign of increasing inner-city growth, the downtown population crested to over 60,000 in 2009, up 77% since 1990.
Seattle also has large lesbian, gay, bisexual, and transgender populations. According to a 2006 study by UCLA, 12.9% of city residents polled identified as gay, lesbian, or bisexual. This was the second-highest proportion of any major U.S. city, behind San Francisco. Greater Seattle also ranked second among major U.S. metropolitan areas, with 6.5% of the population identifying as gay, lesbian, or bisexual. According to 2012 estimates from the United States Census Bureau, Seattle has the highest percentage of same-sex households in the United States, at 2.6 percent, surpassing San Francisco.
In addition, Seattle has a relatively high number of people living alone. According to the 2000 U.S. Census interim measurements of 2004, Seattle has the fifth highest proportion of single-person households nationwide among cities of 100,000 or more residents, at 40.8%.
Economy
thumb|right|Washington Mutual's last headquarters, the WaMu Center (now the Russell Investments Center, center left), and its previous headquarters, the Washington Mutual Tower (now the 1201 Third Avenue Tower, center right).
Seattle's economy is driven by a mix of older industrial companies, and "new economy" Internet and technology companies, service, design and clean technology companies. The city's gross metropolitan product was $231 billion in 2010, making it the 11th largest metropolitan economy in the United States. The Port of Seattle, which also operates Seattle–Tacoma International Airport, is a major gateway for trade with Asia and cruises to Alaska, and is the 8th largest port in the United States in terms of container capacity. Though it was affected by the Great Recession, Seattle has retained a comparatively strong economy, and remains a hotbed for start-up businesses, especially in green building and clean technologies: it was ranked as America's No. 1 "smarter city" based on its government policies and green economy.. Smartercities.nrdc.org In February 2010, the city government committed Seattle to becoming North America's first "climate neutral" city, with a goal of reaching zero net per capita greenhouse gas emissions by 2030.
thumb|upright|Amazon headquarters building in South Lake Union
Still, very large companies dominate the business landscape. Four companies on the 2013 Fortune 500 list of the United States' largest companies, based on total revenue, are headquartered in Seattle: Internet retailer Amazon.com (#49), coffee chain Starbucks (#208), department store Nordstrom (#227), and freight forwarder Expeditors International of Washington (#428). Other Fortune 500 companies popularly associated with Seattle are based in nearby Puget Sound cities. Warehouse club chain Costco (#22), the largest retail company in Washington, is based in Issaquah. Microsoft (#35) is located in Redmond. Weyerhaeuser, the forest products company (#363), is based in Federal Way. Finally, Bellevue is home to truck manufacturer Paccar (#168). Other major companies in the area include Nintendo of America in Redmond, T-Mobile US in Bellevue, Expedia Inc. in Bellevue and Providence Health & Services — the state's largest health care system and fifth largest employer — in Renton. The city has a reputation for heavy coffee consumption; coffee companies founded or based in Seattle include Starbucks, Seattle's Best Coffee, and Tully's. There are also many successful independent artisanal espresso roasters and cafés.
Prior to moving its headquarters to Chicago, aerospace manufacturer Boeing (#30) was the largest company based in Seattle. Its largest division is still headquartered in nearby Renton, and the company has large aircraft manufacturing plants in Everett and Renton, so it remains the largest private employer in the Seattle metropolitan area. Former Seattle Mayor Greg Nickels announced a desire to spark a new economic boom driven by the biotechnology industry in 2006. Major redevelopment of the South Lake Union neighborhood is underway, in an effort to attract new and established biotech companies to the city, joining biotech companies Corixa (acquired by GlaxoSmithKline), Immunex (now part of Amgen), Trubion, and ZymoGenetics. Vulcan Inc., the holding company of billionaire Paul Allen, is behind most of the development projects in the region. While some see the new development as an economic boon, others have criticized Nickels and the Seattle City Council for pandering to Allen's interests at taxpayers' expense. Also in 2006, Expansion Magazine ranked Seattle among the top 10 metropolitan areas in the nation for climates favorable to business expansion. In 2005, Forbes ranked Seattle as the most expensive American city for buying a house based on the local income levels. In 2013, however, the magazine ranked Seattle No. 9 on its list of the Best Places for Business and Careers.
Alaska Airlines, operating a hub at Seattle–Tacoma International Airport, maintains its headquarters in the city of SeaTac, next to the airport.
Seattle is a hub for global health with the headquarters of the Bill & Melinda Gates Foundation, PATH, Infectious Disease Research Institute, Fred Hutchinson Cancer Research Center and the Institute for Health Metrics and Evaluation. In 2015, the Washington Global Health Alliance counted 168 global health organizations in Washington state, many are headquartered in Seattle.
Culture
thumb|upright|Seattle Central Library
Nicknames
From 1869 until 1982, Seattle was known as the "Queen City". Seattle's current official nickname is the "Emerald City", the result of a contest held in 1981; the reference is to the lush evergreen forests of the area. Seattle is also referred to informally as the "Gateway to Alaska" for being the nearest major city in the contiguous US to Alaska, "Rain City" for its frequent cloudy and rainy weather, and "Jet City" from the local influence of Boeing. The city has two official slogans or mottos: "The City of Flowers", meant to encourage the planting of flowers to beautify the city, and "The City of Goodwill", adopted prior to the 1990 Goodwill Games. Seattle residents are known as Seattleites.
Performing arts
thumb|left|The façade of Marion Oliver McCaw Hall at Seattle Center, seen from Kreielsheimer Promenade, with the Space Needle in the background
Seattle has been a regional center for the performing arts for many years. The century-old Seattle Symphony Orchestra is among the world's most recorded and performs primarily at Benaroya Hall. The Seattle Opera and Pacific Northwest Ballet, which perform at McCaw Hall (opened 2003 on the site of the former Seattle Opera House at Seattle Center), are comparably distinguished, This press release from New York's Metropolitan Opera describes the Seattle Opera as "one of the leading opera companies in the United States... recognized internationally..." with the Opera being particularly known for its performances of the works of Richard Wagner and the PNB School (founded in 1974) ranking as one of the top three ballet training institutions in the United States. The Seattle Youth Symphony Orchestras (SYSO) is the largest symphonic youth organization in the United States. The city also boasts lauded summer and winter chamber music festivals organized by the Seattle Chamber Music Society.Hahn, Sumi Seattle Chamber Music Society's summer festivals: for newbies and longtime fans. The Seattle Times, July 6, 2008. Retrieved December 30, 2011.
The 5th Avenue Theatre, built in 1926, stages Broadway-style musical shows featuring both local talent and international stars; local performers such as Billy Joe Huels (lead singer of the Dusty 45s) in Buddy – The Buddy Holly Story and Sarah Rudinoff in Wonderful Town have shared its stage with national-level stars such as Stephen Lynch in The Wedding Singer, which went on to Broadway, and Cathy Rigby in Peter Pan. Seattle has around 100 theatrical production companies, twenty-eight of which hold some form of Actors' Equity contract, and over two dozen live theatre venues, many of them associated with fringe theatre; one weekly listing counted 23 distinct venues hosting live theater in the narrow sense, plus seven hosting burlesque or cabaret and three hosting improv, though in any given week some theaters are "dark". Seattle is probably second only to New York for its number of equity theaters.
In addition, the 900-seat Romanesque Revival Town Hall on First Hill hosts numerous cultural events, especially lectures and recitals.
thumb|upright|right|Seattle Symphony Orchestra on stage in Benaroya Hall in Downtown Seattle. Benaroya has been the symphony's home since 1998.
Between 1918 and 1951, there were nearly two dozen jazz nightclubs along Jackson Street, running from the current Chinatown/International District to the Central District. The jazz scene developed the early careers of Ray Charles, Quincy Jones, Bumps Blackwell, Ernestine Anderson, and others.
Early popular musical acts from the Seattle/Puget Sound area include the collegiate folk group The Brothers Four, vocal group The Fleetwoods, 1960s garage rockers The Wailers and The Sonics, and instrumental surf group The Ventures, some of whom are still active.
Seattle is considered the home of grunge music, having produced artists such as Nirvana, Soundgarden, Alice in Chains, Pearl Jam, and Mudhoney, all of whom reached international audiences in the early 1990s. The city is also home to such varied artists as avant-garde jazz musicians Bill Frisell and Wayne Horvitz, hot jazz musician Glenn Crytzer, hip hop artists Sir Mix-a-Lot, Macklemore, Blue Scholars, and Shabazz Palaces, smooth jazz saxophonist Kenny G, classic rock staples Heart and Queensrÿche, and alternative rock bands such as Foo Fighters, Harvey Danger, The Presidents of the United States of America, The Posies, Modest Mouse, Band of Horses, Death Cab for Cutie, and Fleet Foxes. Rock musicians such as Jimi Hendrix, Duff McKagan, and Nikki Sixx spent their formative years in Seattle.
The Seattle-based Sub Pop record company continues to be one of the world's best-known independent/alternative music labels.
Over the years, a number of songs have been written about Seattle.
Seattle annually sends a team of spoken word slammers to the National Poetry Slam and considers itself home to such performance poets as Buddy Wakefield, two-time Individual World Poetry Slam Champ; Anis Mojgani, two-time National Poetry Slam Champ; and Danny Sherrard, 2007 National Poetry Slam Champ and 2008 Individual World Poetry Slam Champ. Seattle also hosted the 2001 national Poetry Slam Tournament. The Seattle Poetry Festival is a biennial poetry festival that (launched first as the Poetry Circus in 1997) has featured local, regional, national, and international names in poetry.
The city also has movie houses showing both Hollywood productions and works by independent filmmakers. Among these, the Seattle Cinerama stands out as one of only three movie theaters in the world still capable of showing three-panel Cinerama films.
Tourism
thumb|210 cruise ship visits brought 886,039 passengers to Seattle in 2008.
Among Seattle's prominent annual fairs and festivals are the 24-day Seattle International Film Festival, Northwest Folklife over the Memorial Day weekend, numerous Seafair events throughout July and August (ranging from a Bon Odori celebration to the Seafair Cup hydroplane races), the Bite of Seattle, one of the largest Gay Pride festivals in the United States, and the art and music festival Bumbershoot, which programs music as well as other art and entertainment over the Labor Day weekend. All are typically attended by 100,000 people annually, as are the Seattle Hempfest and two separate Independence Day celebrations.
Other significant events include numerous Native American pow-wows, a Greek Festival hosted by St. Demetrios Greek Orthodox Church in Montlake, and numerous ethnic festivals (many associated with Festál at Seattle Center).
There are other annual events, ranging from the Seattle Antiquarian Book Fair & Book Arts Show and an anime convention, Sakura-Con, to Penny Arcade Expo, a gaming convention; the two-day, 9,000-rider Seattle to Portland Bicycle Classic; and specialized film festivals, such as the Maelstrom International Fantastic Film Festival, the Seattle Asian American Film Festival (formerly known as the Northwest Asian American Film Festival), Children's Film Festival Seattle, Translation: the Seattle Transgender Film Festival, the Seattle Gay and Lesbian Film Festival, Seattle Latino Film Festival, and the Seattle Polish Film Festival.
The Henry Art Gallery opened in 1927, the first public art museum in Washington. The Seattle Art Museum (SAM) opened in 1933; SAM opened a museum downtown in 1991 (expanded and reopened 2007); since 1991, the 1933 building has been SAM's Seattle Asian Art Museum (SAAM). SAM also operates the Olympic Sculpture Park (opened 2007) on the waterfront north of the downtown piers. The Frye Art Museum is a free museum on First Hill.
Regional history collections are at the Loghouse Museum in Alki, Klondike Gold Rush National Historical Park, the Museum of History and Industry, and the Burke Museum of Natural History and Culture. Industry collections are at the Center for Wooden Boats and the adjacent Northwest Seaport, the Seattle Metropolitan Police Museum, and the Museum of Flight. Regional ethnic collections include the Nordic Heritage Museum, the Wing Luke Asian Museum, and the Northwest African American Museum. Seattle has artist-run galleries, including ten-year veteran Soil Art Gallery, and the newer Crawl Space Gallery.
thumb|Seattle Great Wheel
The Seattle Great Wheel, one of the largest Ferris wheels in the US, opened in June 2012 as a new, permanent attraction on the city's waterfront, at Pier 57, next to Downtown Seattle. The city also has many community centers for recreation, including Rainier Beach, Van Asselt, Rainier, and Jefferson south of the Ship Canal, and Green Lake, Laurelhurst, Loyal Heights, and Meadowbrook north of the Canal.
Woodland Park Zoo opened as a private menagerie in 1889 but was sold to the city in 1899. The Seattle Aquarium has been open on the downtown waterfront since 1977 (it underwent a renovation in 2006). The Seattle Underground Tour is an exhibit of places that existed before the Great Fire.
Since the middle 1990s, Seattle has experienced significant growth in the cruise industry, especially as a departure point for Alaska cruises. In 2008, a record total of 886,039 cruise passengers passed through the city, surpassing the number for Vancouver, BC, the other major departure point for Alaska cruises.
right|thumb|CenturyLink Field, home of the Seattle Seahawks and Seattle Sounders FC
Professional sports
thumb|Safeco Field, home of the Seattle Mariners.
Club | Sport | League | Venue (capacity) | Founded | Titles | Record attendance
Seattle Seahawks | American football | NFL | CenturyLink Field (69,000) | 1976 | 1 | 69,005
Seattle Mariners | Baseball | MLB | Safeco Field (47,574) | 1977 | 0 | 46,596
Seattle Sounders FC | Soccer | MLS | CenturyLink Field (38,300 for Sounders FC matches) | 2007 | 1 | 67,385
Seattle Storm | Basketball | WNBA | KeyArena (17,072) | 2000 | 2 | 7,486
Seattle Reign FC | Soccer | NWSL | Memorial Stadium (12,000; capped at 6,000 for most matches) | 2012 | 0 | 6,303
Seattle has three major men's professional sports teams: the National Football League (NFL)'s Seattle Seahawks, Major League Baseball (MLB)'s Seattle Mariners, and Major League Soccer (MLS)'s Seattle Sounders FC. Other professional sports teams include the Women's National Basketball Association (WNBA)'s Seattle Storm, who won the WNBA championship in 2004 and 2010, and the Seattle Reign of the National Women's Soccer League.
The Seahawks' CenturyLink Field has hosted NFL playoff games in 2006, 2008, 2011, 2014, 2015, and 2017. The Seahawks have advanced to the Super Bowl three times: 2005, 2013, and 2014. They defeated the Denver Broncos 43–8 to win their first Super Bowl championship in Super Bowl XLVIII, but lost 28–24 to the New England Patriots in Super Bowl XLIX. The Seahawks also hosted NFL playoff games at the Kingdome in 1983, 1984, and 2000. The 2000 playoff game was the last football game of any kind, and the last game of any sport, played at the Kingdome.
Seattle Sounders FC has played in Major League Soccer since 2009, sharing CenturyLink Field with the Seahawks, as a continuation of earlier teams in the lower divisions of American soccer. The Sounders won the MLS Supporters' Shield in 2014 and the Lamar Hunt U.S. Open Cup on four occasions: 2009, 2010, 2011, and 2014. The Sounders won their first MLS Cup after defeating Toronto FC 5–4 in penalty kicks in MLS Cup 2016. With the Sounders' first MLS Cup championship in franchise history, the Mariners are now the only major men's professional sports team in the city that has never won a championship, or even appeared in a championship series.
thumbnail|left|CenturyLink Field
Seattle's professional sports history began at the start of the 20th century with the PCHA's Seattle Metropolitans, which in 1917 became the first American hockey team to win the Stanley Cup.
Seattle was also home to a previous Major League Baseball franchise in 1969: the Seattle Pilots. The Pilots relocated to Milwaukee, Wisconsin and became the Milwaukee Brewers for the 1970 season.
From 1967 to 2008, Seattle was also home to a National Basketball Association (NBA) franchise: the Seattle SuperSonics, who were the 1978–79 NBA champions. The SuperSonics relocated to Oklahoma City and became the Oklahoma City Thunder for the 2008–09 season.
The Major League Baseball All-Star Game was held in Seattle twice, first at the Kingdome in 1979 and again at Safeco Field in 2001. That same year, the Seattle Mariners tied the all-time single regular season wins record with 116 wins. The NBA All-Star Game was also held in Seattle twice: the first in 1974 at the Seattle Center Coliseum and the second in 1987 at the Kingdome.
The Seattle Thunderbirds hockey team plays in the Canadian major-junior Western Hockey League and are based in the Seattle suburb of Kent.
Seattle also boasts a strong history in collegiate sports. The University of Washington and Seattle University are NCAA Division I schools. The University of Washington's athletic program, nicknamed the Huskies, competes in the Pac-12 Conference, and Seattle University's athletic program, nicknamed the Redhawks, competes in the Western Athletic Conference.
Parks and recreation
thumb|left|Lake Union Park, South Lake Union and downtown Seattle
thumb|right|An attraction of Green Lake Park is a trail around the lake.
Seattle's mild, temperate, marine climate allows year-round outdoor recreation, including walking, cycling, hiking, skiing, snowboarding, kayaking, rock climbing, motor boating, sailing, team sports, and swimming.
In town, many people walk around Green Lake, through the forests and along the bluffs and beaches of Discovery Park (the largest park in the city) in Magnolia, along the shores of Myrtle Edwards Park on the Downtown waterfront, along the shoreline of Lake Washington at Seward Park, along Alki Beach in West Seattle, or along the Burke-Gilman Trail.
thumb|left|Downtown Seattle from Gas Works Park
Gas Works Park features the majestic preserved superstructure of a coal gasification plant closed in 1956. Located across Lake Union from downtown, the park provides panoramic views of the Seattle skyline.
Also popular are hikes and skiing in the nearby Cascade or Olympic Mountains and kayaking and sailing in the waters of Puget Sound, the Strait of Juan de Fuca, and the Strait of Georgia. In 2005, Men's Fitness magazine named Seattle the fittest city in the United States.
In its 2013 ParkScore ranking, the Trust for Public Land reported that Seattle had the tenth best park system among the 50 most populous US cities.Van Sant, Ashley "Seattle parks ranked 10th best in US". June 6, 2013. Q13 Fox News. Retrieved July 18, 2013. ParkScore ranks city park systems by a formula that analyzes acreage, access, and service and investment.
Government and politics
thumb|upright=1.0|The city council consists of two at-large positions and seven district seats representing the areas shown.
Seattle is a charter city, with a mayor–council form of government. From 1911 to 2013, Seattle's nine city councillors were elected at large, rather than by geographic subdivisions. For the 2015 election, this changed to a hybrid system of seven district members and two at-large members as a result of a ballot measure passed on November 5, 2013. The only other elected offices are the city attorney and Municipal Court judges. All city offices are officially non-partisan.
Like some other parts of the United States, government and laws are also run by a series of ballot initiatives (allowing citizens to pass or reject laws), referenda (allowing citizens to approve or reject legislation already passed), and propositions (allowing specific government agencies to propose new laws or tax increases directly to the people). Federally, Seattle is part of Washington's 7th congressional district, represented by Democrat Jim McDermott, elected in 1988 and one of Congress's liberal members. Ed Murray is currently serving as mayor.
Seattle's political culture is very liberal and progressive for the United States, with over 80% of the population voting for the Democratic Party. All precincts in Seattle voted for Democratic Party candidate Barack Obama in the 2012 presidential election. In partisan elections for the Washington State Legislature and United States Congress, nearly all elections are won by Democrats. Seattle is considered the first major American city to elect a female mayor, Bertha Knight Landes. It has also elected an openly gay mayor, Ed Murray, and a socialist councillor, Kshama Sawant. For the first time in United States history, an openly gay black woman was elected to public office when Sherry Harris was elected as a Seattle city councillor in 1991. The majority of the current city council is female, while white men comprise a minority.
Seattle is widely considered one of the most liberal cities in the United States, even surpassing its neighbor, Portland, Oregon. Support for issues such as same-sex marriage and reproductive rights are largely taken for granted in local politics. In the 2012 U.S. general election, an overwhelming majority of Seattleites voted to approve Referendum 74 and legalize gay marriage in Washington state. In the same election, an overwhelming majority of Seattleites also voted to approve the legalization of the recreational use of cannabis in the state. Like much of the Pacific Northwest (which has the lowest rate of church attendance in the United States and consistently reports the highest percentage of atheism), church attendance, religious belief, and political influence of religious leaders are much lower than in other parts of America.Religious identification in the U.S. Religioustolerance.org. Retrieved December 30, 2011.
Seattle also has a thriving alternative press, with the Web-based daily Seattle Post-Intelligencer, several other online dailies (including Publicola and Crosscut), The Stranger (an alternative, left-leaning weekly), Seattle Weekly, and a number of issue-focused publications, including the nation's two largest online environmental magazines, Worldchanging and Grist.org.
In July 2012, Seattle banned plastic shopping bags. In June 2014, the city passed a local ordinance to increase the minimum wage to $15 an hour on a staged basis from 2015 to 2021. When fully implemented, the $15 hourly rate will be the highest minimum wage in the nation.
On October 6, 2014, Seattle officially replaced Columbus Day with Indigenous Peoples' Day, honoring Seattle's Native American community and controversies surrounding the legacy of Christopher Columbus.
Education
Of the city's population over the age of 25, 53.8% (vs. a national average of 27.4%) hold a bachelor's degree or higher, and 91.9% (vs. 84.5% nationally) have a high school diploma or equivalent. A 2008 United States Census Bureau survey showed that Seattle had the highest percentage of college and university graduates of any major U.S. city. The city was listed as the most literate of the country's 69 largest cities in 2005 and 2006, the second most literate in 2007 and the most literate in 2008 in studies conducted by Central Connecticut State University.
thumb|University of Washington Quad in spring
Seattle Public Schools desegregated without a court order but continue to struggle to achieve racial balance in a somewhat ethnically divided city (the south part of town having more ethnic minorities than the north). In 2007, Seattle's racial tie-breaking system was struck down by the United States Supreme Court, but the ruling left the door open for desegregation formulae based on other indicators (e.g., income or socioeconomic class).
The public school system is supplemented by a moderate number of private schools: five of the private high schools are Catholic, one is Lutheran, and six are secular.
Seattle is home to the University of Washington, as well as the institution's professional and continuing education unit, the University of Washington Educational Outreach. A study by Newsweek International in 2006 cited the University of Washington as the twenty-second best university in the world. Seattle also has a number of smaller private universities including Seattle University and Seattle Pacific University, the former a Jesuit Catholic institution, the latter Free Methodist; universities aimed at the working adult, like City University and Antioch University; colleges within the Seattle Colleges District system, comprising North, Central, and South; seminaries, including Western Seminary; and a number of arts colleges, such as Cornish College of the Arts, Pratt Fine Arts Center, and The Art Institute of Seattle. In 2001, Time magazine selected Seattle Central Community College as community college of the year, stating the school "pushes diverse students to work together in small teams".
Media
Seattle has one major daily newspaper, The Seattle Times. The Seattle Post-Intelligencer, known as the P-I, published a print daily from 1863 to March 17, 2009, before switching to a strictly online publication. There is also the Seattle Daily Journal of Commerce, and the University of Washington publishes The Daily, a student-run publication, when school is in session. The most prominent weeklies are the Seattle Weekly and The Stranger; both consider themselves "alternative" papers. The weekly LGBT newspaper is the Seattle Gay News. Real Change is a weekly street newspaper that is sold mainly by homeless persons as an alternative to panhandling. There are also several ethnic newspapers, including The Facts, Northwest Asian Weekly and the International Examiner, and numerous neighborhood newspapers.
Seattle is also well served by television and radio, with all major U.S. networks represented, along with at least five other English-language stations and two Spanish-language stations. Seattle cable viewers also receive CBUT 2 (CBC) from Vancouver, British Columbia.
Non-commercial radio stations include NPR affiliates KUOW-FM 94.9 and KNKX 88.5 (Tacoma), as well as classical music station KING-FM 98.1. Other non-commercial stations include KEXP-FM 90.3 (affiliated with the UW), community radio KBCS-FM 91.3 (affiliated with Bellevue College), and high school radio KNHC-FM 89.5, which broadcasts an electronic dance music radio format and is owned by the public school system and operated by students of Nathan Hale High School. Many Seattle radio stations are also available through Internet radio, with KEXP in particular being a pioneer of Internet radio. Seattle also has numerous commercial radio stations. In a March 2012 report by the consumer research firm Arbitron, the top FM stations were KRWM (adult contemporary format), KIRO-FM (news/talk), and KISW (active rock) while the top AM stations were KOMO (AM) (all news), KJR (AM) (all sports), KIRO (AM) (all sports).
Seattle-based online magazines Worldchanging and Grist.org were two of the "Top Green Websites" in 2007 according to TIME.
Seattle also has many online news media websites. The two largest are The Seattle Times and the Seattle Post-Intelligencer.
Infrastructure
Health systems
The University of Washington is consistently ranked among the country's leading institutions in medical research, earning special merit for its programs in neurology and neurosurgery. Seattle has seen local developments of modern paramedic services with the establishment of Medic One in 1970. In 1974, a 60 Minutes story on the success of the then four-year-old Medic One paramedic system called Seattle "the best place in the world to have a heart attack".
Three of Seattle's largest medical centers are located on First Hill. Harborview Medical Center, the public county hospital, is the only Level I trauma hospital in a region that includes Washington, Alaska, Montana, and Idaho. Virginia Mason Medical Center, including Virginia Mason Hospital, and the two largest campuses of Swedish Medical Center are also located in this part of Seattle. This concentration of hospitals resulted in the neighborhood's nickname "Pill Hill".
Located in the Laurelhurst neighborhood, Seattle Children's, formerly Children's Hospital and Regional Medical Center, is the pediatric referral center for Washington, Alaska, Montana, and Idaho. The Fred Hutchinson Cancer Research Center has a campus in the Eastlake neighborhood. The University District is home to the University of Washington Medical Center which, along with Harborview, is operated by the University of Washington. Seattle is also served by a Veterans Affairs hospital on Beacon Hill, a third campus of Swedish in Ballard, and Northwest Hospital and Medical Center near Northgate Mall.
Transportation
thumb|Interstate 5 in Washington as it passes through downtown Seattle
The first streetcars appeared in 1889 and were instrumental in the creation of a relatively well-defined downtown and strong neighborhoods at the end of their lines. The advent of the automobile sounded the death knell for rail in Seattle. Tacoma–Seattle railway service ended in 1929 and the Everett–Seattle service came to an end in 1939, replaced by inexpensive automobiles running on the recently developed highway system. Rails on city streets were paved over or removed, and the opening of the Seattle trolleybus system brought the end of streetcars in Seattle in 1941. This left an extensive network of privately owned buses (later public) as the only mass transit within the city and throughout the region.
thumb|King County Water Taxi and downtown Seattle
King County Metro provides frequent stop bus service within the city and surrounding county, as well as a South Lake Union Streetcar line between the South Lake Union neighborhood and Westlake Center in downtown. Seattle is one of the few cities in North America whose bus fleet includes electric trolleybuses. Sound Transit currently provides an express bus service within the metropolitan area, two Sounder commuter rail lines between the suburbs and downtown, and its Central Link light rail line between the University of Washington and Sea-Tac Airport. Washington State Ferries, which manages the largest network of ferries in the United States and third largest in the world, connects Seattle to Bainbridge and Vashon Islands in Puget Sound and to Bremerton and Southworth on the Kitsap Peninsula.
thumb|left|Central Link light rail trains in the Downtown Seattle Transit Tunnel at the University Street Station
According to the 2007 American Community Survey, 18.6% of Seattle residents used one of the three public transit systems that serve the city, giving it the highest transit ridership of all major cities without heavy or light rail prior to the completion of Sound Transit's Central Link line. The city has also been described by Bert Sperling as the fourth most walkable U.S. city and by Walk Score as the sixth most walkable of the fifty largest U.S. cities.
Seattle–Tacoma International Airport, locally known as Sea-Tac Airport and located just south in the neighboring city of SeaTac, is operated by the Port of Seattle and provides commercial air service to destinations throughout the world. Closer to downtown, Boeing Field is used for general aviation, cargo flights, and testing/delivery of Boeing airliners.
thumb|Alaskan Way Viaduct, port of Seattle on the right, stadium in the background
Most travel in the city, however, still relies on Seattle's streets, which are laid out in a cardinal directions grid pattern, except in the central business district, where early city leaders Arthur Denny and Carson Boren insisted on orienting their plats relative to the shoreline rather than to true north. Only two roads, Interstate 5 and State Route 99 (both limited-access highways), run uninterrupted through the city from north to south. State Route 99 runs through downtown Seattle on the Alaskan Way Viaduct, which was built in 1953. However, due to damage sustained during the 2001 Nisqually earthquake, the viaduct will be replaced by a tunnel. The Alaskan Way Viaduct replacement tunnel was originally scheduled to be completed in December 2015 at a cost of US$4.25 billion. Due to issues with the world's largest tunnel boring machine (TBM), nicknamed "Bertha", the projected date of completion has been pushed back to 2017. Seattle has the 8th worst traffic congestion of all American cities, and is 10th among all North American cities.
The city has started moving away from the automobile and towards mass transit. From 2004 to 2009, the annual number of unlinked public transportation trips increased by approximately 21%. In 2006, voters in King County passed Proposition 2 (Transit Now), which increased bus service hours on high ridership routes and paid for five bus rapid transit lines called RapidRide. After rejecting a roads and transit measure in 2007, Seattle-area voters passed a transit-only measure in 2008 to increase ST Express bus service, extend the Link light rail system, and expand and improve Sounder commuter rail service. A light rail line from downtown heading south to Sea-Tac Airport began service on December 19, 2009, giving the city its first rapid transit line with intermediate stations within the city limits. An extension north to the University of Washington opened on March 19, 2016; and further extensions are planned to reach Lynnwood to the north, Des Moines to the south, and Bellevue and Redmond to the east by 2023.Regional Transit System Plan . (PDF). soundtransit.org. Retrieved December 30, 2011. Voters in the Puget Sound region approved an additional tax increase in November 2016 to expand light rail to West Seattle and Ballard as well as Tacoma, Everett, and Issaquah.
Utilities
Water and electric power are municipal services, provided by Seattle Public Utilities and Seattle City Light respectively. Other utility companies serving Seattle include Puget Sound Energy (natural gas, electricity); Seattle Steam Company (steam); Waste Management, Inc. and CleanScapes, Inc. (curbside recycling and solid waste removal); CenturyLink, Frontier Communications, Wave Broadband, and Comcast (telecommunications and television).
About 90% of Seattle's electricity is produced using hydropower. Less than 2% of electricity is produced using fossil fuels.
Notable people
Sister cities
Seattle is partnered with:
Beersheba, Israel (since 1977)
Bergen, Norway (since 1967)
Cebu City, Philippines (since 1991)
Chongqing, China (since 1983)
Christchurch, New Zealand (since 1981)
Daejeon, South Korea (since 1989)
Galway, Ireland (since 1986)
Gdynia, Poland (since 1993)
Haiphong, Vietnam (since 1996)
Kaohsiung, Taiwan (since 1991)
Kobe, Japan (since 1957)
Limbe, Cameroon (since 1984)
Mazatlán, Mexico (since 1979)
Mombasa, Kenya (since 1981)
Nantes, France (since 1980)
Pécs, Hungary (since 1991)
Perugia, Italy (since 1993)
Reykjavík, Iceland (since 1986)
Sihanoukville, Cambodia (since 1999)
Surabaya, Indonesia (since 1992)
Tashkent, Uzbekistan (since 1973)
See also
National Register of Historic Places listings in Seattle, Washington
Seattle Freeze
Seattle process
Seattle tugboats
Tillicum Village
References
Footnotes
Citations
Bibliography
Further reading
Sanders, Jeffrey Craig. Seattle and the Roots of Urban Sustainability: Inventing Ecotopia (University of Pittsburgh Press; 2010) 288 pages; the rise of environmental activism
External links
Historylink.org, history of Seattle and Washington
Seattle Photographs from the University of Washington Digital Collections
Seattle Historic Photograph Collection from the Seattle Public Library
Seattle Civil Rights and Labor History Project
Seattle, a National Park Service Discover Our Shared Heritage Travel Itinerary
Category:1853 establishments in Oregon Territory
Category:Cities in the Seattle metropolitan area
Category:Cities in Washington (state)
Category:County seats in Washington (state)
Category:Isthmuses of the United States
Category:Populated places established in 1853
Category:Cities in King County, Washington
Category:Populated places on Puget Sound
Category:Port settlements in Washington (state)
Glass
thumb|300px|The joining of two tubes made of lead glass during glass welding
Glass is a non-crystalline amorphous solid that is often transparent and has widespread practical, technological, and decorative usage in, for example, window panes, tableware, and optoelectronics. The most familiar, and historically the oldest, types of glass are "silicate glasses" based on the chemical compound silica (silicon dioxide, or quartz), the primary constituent of sand. The term glass, in popular usage, is often used to refer only to this type of material, which is familiar from use as window glass and in glass bottles. Of the many silica-based glasses that exist, ordinary glazing and container glass is formed from a specific type called soda-lime glass, composed of approximately 75% silicon dioxide (SiO2), sodium oxide (Na2O) from sodium carbonate (Na2CO3), calcium oxide, also called lime (CaO), and several minor additives.
Many applications of silicate glasses derive from their optical transparency, giving rise to their primary use as window panes. Glass will transmit, reflect and refract light; these qualities can be enhanced by cutting and polishing to make optical lenses, prisms, fine glassware, and optical fibers for high speed data transmission by light. Glass can be coloured by adding metallic salts, and can also be painted and printed with vitreous enamels. These qualities have led to the extensive use of glass in the manufacture of art objects and in particular, stained glass windows. Although brittle, silicate glass is extremely durable, and many examples of glass fragments exist from early glass-making cultures. Because glass can be formed or moulded into any shape, it has been traditionally used for vessels: bowls, vases, bottles, jars and drinking glasses. In its most solid forms it has also been used for paperweights, marbles, and beads. When extruded as glass fiber and matted as glass wool in a way to trap air, it becomes a thermal insulating material, and when these glass fibers are embedded into an organic polymer plastic, they are a key structural reinforcement part of the composite material fiberglass. Some objects historically were so commonly made of silicate glass that they are simply called by the name of the material, such as drinking glasses and reading glasses.
Scientifically, the term "glass" is often defined in a broader sense, encompassing every solid that possesses a non-crystalline (that is, amorphous) structure at the atomic-scale and that exhibits a glass transition when heated towards the liquid state. Porcelains and many polymer thermoplastics familiar from everyday use are glasses. These sorts of glasses can be made of quite different kinds of materials than silica: metallic alloys, ionic melts, aqueous solutions, molecular liquids, and polymers. For many applications, like glass bottles or eyewear, polymer glasses (acrylic glass, polycarbonate or polyethylene terephthalate) are a lighter alternative than traditional glass.
Silicate glass
Ingredients
Silica (the chemical compound SiO2) is a common fundamental constituent of glass. In nature, vitrification of quartz occurs when lightning strikes sand, forming hollow, branching rootlike structures called fulgurite.
Fused quartz is a glass made from chemically pure SiO2 (silica). It has excellent thermal shock characteristics, being able to survive immersion in water while red hot. However, its high melting temperature (1723 °C) and viscosity make it difficult to work with. Normally, other substances are added to simplify processing. One is sodium carbonate (Na2CO3, "soda"), which lowers the glass transition temperature. The soda makes the glass water-soluble, which is usually undesirable, so lime (calcium oxide [CaO], generally obtained from limestone), some magnesium oxide (MgO) and aluminium oxide (Al2O3) are added to provide for better chemical durability. The resulting glass contains about 70 to 74% silica by weight and is called a soda-lime glass.B. H. W. S. de Jong, "Glass"; in "Ullmann's Encyclopedia of Industrial Chemistry"; 5th edition, vol. A12, VCH Publishers, Weinheim, Germany, 1989, ISBN 978-3-527-20112-9, pp. 365–432. Soda-lime glasses account for about 90% of manufactured glass.
Most common glass contains other ingredients to change its properties. Lead glass or flint glass is more 'brilliant' because the increased refractive index causes noticeably more specular reflection and increased optical dispersion. Adding barium also increases the refractive index. Thorium oxide gives glass a high refractive index and low dispersion and was formerly used in producing high-quality lenses, but due to its radioactivity has been replaced by lanthanum oxide in modern eyeglasses. Iron can be incorporated into glass to absorb infrared energy, for example in heat absorbing filters for movie projectors, while cerium(IV) oxide can be used for glass that absorbs UV wavelengths.
The following is a list of the more common types of silicate glasses, and their ingredients, properties, and applications:
Fused quartz, also called fused silica glass, vitreous silica glass: silica (SiO2) in vitreous or glass form (i.e., its molecules are disordered and random, without crystalline structure). It has very low thermal expansion, is very hard, and resists high temperatures (1000–1500 °C). It is also the most resistant against weathering (caused in other glasses by alkali ions leaching out of the glass, while staining it). Fused quartz is used for high temperature applications such as furnace tubes, lighting tubes, melting crucibles, etc.
Soda-lime-silica glass, window glass: silica + sodium oxide (Na2O) + lime (CaO) + magnesia (MgO) + alumina (Al2O3). Is transparent, easily formed and most suitable for window glass (see flat glass). It has a high thermal expansion and poor resistance to heat (500–600 °C). It is used for windows, some low temperature incandescent light bulbs, and tableware. Container glass is a soda-lime glass that is a slight variation on flat glass, which uses more alumina and calcium, and less sodium and magnesium which are more water-soluble. This makes it less susceptible to water erosion.
Sodium borosilicate glass, Pyrex: silica + boron trioxide (B2O3) + soda (Na2O) + alumina (Al2O3). Withstands thermal shock much better than window glass. Used for chemical glassware, cooking glass, car head lamps, etc. Borosilicate glasses (e.g. Pyrex) have as main constituents silica and boron trioxide. They have fairly low coefficients of thermal expansion (7740 Pyrex CTE is 3.25×10^-6/°CCorning, Inc. Pyrex data sheet. (PDF). Retrieved 2012-05-15. as compared to about 9×10^-6/°C for a typical soda-lime glassAR-GLAS Schott, N.A., Inc data sheet), making them more dimensionally stable. The lower coefficient of thermal expansion (CTE) also makes them less subject to stress caused by thermal expansion, thus less vulnerable to cracking from thermal shock (a short illustrative estimate follows this list). They are commonly used for reagent bottles, optical components and household cookware.
Lead-oxide glass, crystal glass: silica + lead oxide (PbO) + potassium oxide (K2O) + soda (Na2O) + zinc oxide (ZnO) + alumina. Because of its high density (resulting in a high electron density) it has a high refractive index, making the look of glassware more brilliant (called "crystal", though of course it is a glass and not a crystal). It also has a high elasticity, making glassware "ring". It is also more workable in the factory, but cannot stand heating very well.
Aluminosilicate glass: silica + alumina + lime + magnesia + barium oxide (BaO) + boric oxide (B2O3). Extensively used for fiberglass, used for making glass-reinforced plastics (boats, fishing rods, etc.) and for halogen bulb glass.
Germanium oxide glass: alumina + germanium dioxide (GeO2). Extremely clear glass, used for fiber-optic waveguides in communication networks. Light loses only 5% of its intensity through 1 km of glass fiber.Mining the sea sand. Seafriends.org.nz (1994-02-08). Retrieved 2012-05-15.
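The practical effect of the expansion coefficients quoted in the list above can be illustrated with a rough thermal-stress estimate. The following is a minimal sketch, assuming a fully constrained plate and typical textbook values for the Young's modulus and Poisson's ratio of silicate glass; those elastic constants are assumptions, not figures taken from this article.

```python
# Rough thermal-stress comparison for a fully constrained glass plate,
# using sigma = E * alpha * dT / (1 - nu).
# E and nu below are assumed typical values for silicate glass.

E = 70e9      # Pa, assumed Young's modulus
nu = 0.2      # assumed Poisson's ratio
dT = 100.0    # K, example temperature difference across the glass

for name, alpha in [("soda-lime glass", 9e-6), ("borosilicate (Pyrex 7740)", 3.25e-6)]:
    sigma = E * alpha * dT / (1 - nu)   # biaxial stress in a constrained plate
    print(f"{name}: ~{sigma / 1e6:.0f} MPa")

# soda-lime glass:           ~79 MPa
# borosilicate (Pyrex 7740): ~28 MPa
```

Under these assumptions the stress scales directly with the CTE, so the borosilicate plate sees roughly a third of the thermal stress of the soda-lime plate for the same temperature difference, consistent with its better resistance to thermal shock.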
Another common glass ingredient is crushed recycled glass, known as "cullet". Recycled glass saves on raw materials and energy. Impurities in the cullet can lead to product and equipment failure. Fining agents such as sodium sulfate, sodium chloride, or antimony oxide may be added to reduce the number of air bubbles in the glass mixture. Glass batch calculation is the method by which the correct raw material mixture is determined to achieve the desired glass composition.
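A minimal sketch of such a batch calculation is given below, assuming the batch is made only from quartz sand, soda ash (Na2CO3) and limestone (CaCO3) and ignoring cullet, fining agents and volatilisation losses; the 74/16/10 target composition is an invented example in the soda-lime range, not a specific commercial recipe.

```python
# Minimal glass batch calculation: raw-material masses per 100 kg of glass.
# The target composition is illustrative; real batches include more oxides,
# cullet and fining agents.

target_oxides = {"SiO2": 74.0, "Na2O": 16.0, "CaO": 10.0}  # weight % in the glass

# kg of raw material needed per kg of oxide retained in the glass
# (the carbonates lose CO2 when they decompose in the melt)
factor = {
    "SiO2": 1.0,                # quartz sand delivers SiO2 directly
    "Na2O": 105.99 / 61.98,     # Na2CO3 -> Na2O + CO2
    "CaO":  100.09 / 56.08,     # CaCO3  -> CaO  + CO2
}
raw_material = {"SiO2": "sand", "Na2O": "soda ash", "CaO": "limestone"}

glass_mass = 100.0  # kg of finished glass
batch = {raw_material[ox]: glass_mass * wt / 100.0 * factor[ox]
         for ox, wt in target_oxides.items()}

for name, kg in batch.items():
    print(f"{name}: {kg:.1f} kg")
print(f"total batch: {sum(batch.values()):.1f} kg; "
      f"the ~{sum(batch.values()) - glass_mass:.0f} kg difference leaves the melt as CO2")
```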
Physical properties
Optical properties
Glass is in widespread use largely due to the production of glass compositions that are transparent to visible light. In contrast, polycrystalline materials do not generally transmit visible light. The individual crystallites may be transparent, but their facets (grain boundaries) reflect or scatter light, resulting in diffuse reflection. Glass does not contain the internal subdivisions associated with grain boundaries in polycrystals and hence does not scatter light in the same manner as a polycrystalline material. The surface of a glass is often smooth since during glass formation the molecules of the supercooled liquid are not forced to arrange in rigid crystal geometries and can follow surface tension, which imposes a microscopically smooth surface. These properties, which give glass its clearness, can be retained even if glass is partially light-absorbing—i.e., colored.
Glass has the ability to refract, reflect, and transmit light following geometrical optics, without scattering it. It is used in the manufacture of lenses and windows. Common glass has a refractive index around 1.5. This may be modified by adding low-density materials such as boron, which lowers the index of refraction (see crown glass), or increased (to as much as 1.8) with high-density materials such as (classically) lead oxide (see flint glass and lead glass), or in modern uses, less toxic oxides of zirconium, titanium, or barium. These high-index glasses (inaccurately known as "crystal" when used in glass vessels) cause more chromatic dispersion of light, and are prized for their diamond-like optical properties.
According to Fresnel equations, the reflectivity of a sheet of glass is about 4% per surface (at normal incidence in air), and the transmissivity of one element (two surfaces) is about 90%. Glass with high germanium oxide content also finds application in optoelectronics—e.g., for light-transmitting optical fibers.
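These round figures follow from the normal-incidence Fresnel formula, and the 5% per kilometre loss quoted above for germanium-oxide fiber can likewise be restated in decibels per kilometre. The short check below is elementary arithmetic on those stated inputs, not additional measured data.

```python
import math

# Normal-incidence Fresnel reflectance at an air/glass interface:
# R = ((n1 - n2) / (n1 + n2))**2, with n ~ 1.5 for common glass.
n_air, n_glass = 1.0, 1.5
R = ((n_air - n_glass) / (n_air + n_glass)) ** 2
T_sheet = (1 - R) ** 2          # two surfaces, ignoring absorption and multiple reflections

print(f"reflectance per surface: {R:.1%}")              # ~4.0%
print(f"transmission through a sheet: {T_sheet:.0%}")   # ~92%, i.e. roughly 90%

# "Light loses only 5% of its intensity through 1 km of glass fiber",
# expressed as an attenuation coefficient in dB/km:
attenuation_db_per_km = -10 * math.log10(1 - 0.05)
print(f"attenuation: {attenuation_db_per_km:.2f} dB/km")  # ~0.22 dB/km
```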
Other properties
In the process of manufacture, silicate glass can be poured, formed, extruded and molded into forms ranging from flat sheets to highly intricate shapes. The finished product is brittle and will fracture, unless laminated or specially treated, but is extremely durable under most conditions. It erodes very slowly and can withstand the action of water. It is resilient to chemical attack and is an ideal material for the manufacture of containers for foodstuffs and most chemicals.
Contemporary production
Following the glass batch preparation and mixing, the raw materials are transported to the furnace. Soda-lime glass for mass production is melted in gas fired units. Smaller scale furnaces for specialty glasses include electric melters, pot furnaces, and day tanks.
After melting, homogenization and refining (removal of bubbles), the glass is formed. Flat glass for windows and similar applications is formed by the float glass process, developed between 1953 and 1957 by Sir Alastair Pilkington and Kenneth Bickerstaff of the UK's Pilkington Brothers, who created a continuous ribbon of glass using a molten tin bath on which the molten glass flows unhindered under the influence of gravity. The top surface of the glass is subjected to nitrogen under pressure to obtain a polished finish.
Container glass for common bottles and jars is formed by blowing and pressing methods. This glass is often slightly modified chemically (with more alumina and calcium oxide) for greater water resistance. Further glass forming techniques are summarized in the table Glass forming techniques.
Once the desired form is obtained, glass is usually annealed for the removal of stresses.
Surface treatments, coatings or lamination may follow to improve the chemical durability (glass container coatings, glass container internal treatment), strength (toughened glass, bulletproof glass, windshields), or optical properties (insulated glazing, anti-reflective coating).
Color
Color in glass may be obtained by addition of electrically charged ions (or color centers) that are homogeneously distributed, and by precipitation of finely dispersed particles (such as in photochromic glasses).
Ordinary soda-lime glass appears colorless to the naked eye when it is thin, although iron(II) oxide (FeO) impurities of up to 0.1 wt% produce a green tint, which can be viewed in thick pieces or with the aid of scientific instruments. Further FeO and Cr2O3 additions may be used for the production of green bottles. Sulfur, together with carbon and iron salts, is used to form iron polysulfides and produce amber glass ranging from yellowish to almost black.David M Issitt. Substances Used in the Making of Coloured Glass 1st.glassman.com. A glass melt can also acquire an amber color from a reducing combustion atmosphere. Manganese dioxide can be added in small amounts to remove the green tint given by iron(II) oxide. Art glass and studio glass are colored using closely guarded recipes that involve specific combinations of metal oxides, melting temperatures and "cook" times. Most colored glass used in the art market is manufactured in volume by vendors who serve this market, although there are some glassmakers with the ability to make their own color from raw materials.
History of silicate glass
thumb|upright|left|Bohemian flashed and engraved ruby glass (19th-century)
thumb|upright|Wine goblet, mid-19th century. Qajar dynasty. Brooklyn Museum.
thumb|right|upright|Roman cage cup from the 4th century CE
right|thumb|upright|Studio glass. Multiple colors within a single object increase the difficulty of production, as glasses of different colors have different chemical and physical properties when molten.
Naturally occurring glass, especially the volcanic glass obsidian, has been used by many Stone Age societies across the globe for the production of sharp cutting tools and, due to its limited source areas, was extensively traded. But in general, archaeological evidence suggests that the first true glass was made in coastal north Syria, Mesopotamia or ancient Egypt. The earliest known glass objects, of the mid third millennium BCE, were beads, perhaps initially created as accidental by-products of metal-working (slags) or during the production of faience, a pre-glass vitreous material made by a process similar to glazing.True glazing over a ceramic body was not used until many centuries after the production of the first glass.
Glass remained a luxury material, and the disasters that overtook Late Bronze Age civilizations seem to have brought glass-making to a halt. Indigenous development of glass technology in South Asia may have begun in 1730 BCE. In ancient China, though, glassmaking seems to have a late start, compared to ceramics and metal work. The term glass developed in the late Roman Empire. It was in the Roman glassmaking center at Trier, now in modern Germany, that the late-Latin term glesum originated, probably from a Germanic word for a transparent, lustrous substance. Glass objects have been recovered across the Roman empire in domestic, industrial and funerary contexts.
Glass was used extensively during the Middle Ages. Anglo-Saxon glass has been found across England during archaeological excavations of both settlement and cemetery sites. Glass in the Anglo-Saxon period was used in the manufacture of a range of objects including vessels, beads and windows, and was also used in jewelry. From the 10th century onwards, glass was employed in stained glass windows of churches and cathedrals, with famous examples at Chartres Cathedral and the Basilica of Saint Denis. By the 14th century, architects were designing buildings with walls of stained glass such as Sainte-Chapelle, Paris, (1203–1248)Rene Hughe, Byzantine and Medieval Art, Paul Hamlyn, (1963) and the East end of Gloucester Cathedral.John Harvey, English Cathedrals, Batsford, (1961) Stained glass had a major revival with Gothic Revival architecture in the 19th century.
With the Renaissance, and a change in architectural style, the use of large stained glass windows became less prevalent. The use of domestic stained glass increased until most substantial houses had glass windows. These were initially small panes leaded together, but with the changes in technology, glass could be manufactured relatively cheaply in increasingly larger sheets. This led to larger window panes, and, in the 20th century, to much larger windows in ordinary domestic and commercial buildings.
In the 20th century, new types of glass such as laminated glass, reinforced glass and glass bricks have increased the use of glass as a building material and resulted in new applications of glass. Multi-storey buildings are frequently constructed with curtain walls made almost entirely of glass. Similarly, laminated glass has been widely applied to vehicles for windscreens. While glass containers have always been used for storage and are valued for their hygienic properties, glass has been utilized increasingly in industry. Optical glass for spectacles has been used since the late Middle Ages. The production of lenses has become increasingly proficient, aiding astronomers as well as having other application in medicine and science. Glass is also employed as the aperture cover in many solar energy systems.
From the 19th century, there was a revival in many ancient glass-making techniques including cameo glass, achieved for the first time since the Roman Empire and initially mostly used for pieces in a neo-classical style. The Art Nouveau movement made great use of glass, with René Lalique, Émile Gallé, and Daum of Nancy producing colored vases and similar pieces, often in cameo glass, and also using luster techniques. Louis Comfort Tiffany in America specialized in stained glass, both secular and religious, and his famous lamps. The early 20th century saw the large-scale factory production of glass art by firms such as Waterford and Lalique. From about 1960 onwards there have been an increasing number of small studios hand-producing glass artworks, and glass artists began to class themselves as in effect sculptors working in glass, and their works as part of fine art.
In the 21st century, scientists observing the properties of ancient stained glass windows, in which suspended nanoparticles prevent UV light from causing chemical reactions that change image colors, are developing photographic techniques that use similar stained glass to capture true color images of Mars for the 2019 ESA Mars Rover mission.
Chronology of advances in architectural glass
1226: "Broad Sheet" first produced in Sussex.
1330: "Crown glass" for art work and vessels first produced in Rouen, France. "Broad Sheet" also produced. Both were also supplied for export.
1500s: A method of making mirrors out of plate glass was developed by Venetian glassmakers on the island of Murano, who covered the back of the glass with a mercury-tin amalgam, obtaining near-perfect and undistorted reflection.
1620: "Blown Plate" first produced in London. Used for mirrors and coach plates.
1678: "Crown Glass" first produced in London. This process dominated until the 19th century.
1843: An early form of "Float Glass" invented by Henry Bessemer, pouring glass onto liquid tin. Expensive and not a commercial success.
1874: Tempered glass is developed by Francois Barthelemy Alfred Royer de la Bastie (1830-1901) of Paris, France by quenching almost molten glass in a heated bath of oil or grease.
1888: "Machine Rolled" glass introduced allowing patterns to be introduced.
1898: "Wired Cast" glass invented by Pilkington for use where safety or security was an issue.
1959: "Float Glass" launched in UK. Invented by Sir Alastair Pilkington.History of Glass Manufacture: London Crown Glass co.
Other types of glass
New chemical glass compositions or new treatment techniques can be initially investigated in small-scale laboratory experiments. The raw materials for laboratory-scale glass melts are often different from those used in mass production because the cost factor has a low priority. In the laboratory mostly pure chemicals are used. Care must be taken that the raw materials have not reacted with moisture or other chemicals in the environment (such as alkali or alkaline earth metal oxides and hydroxides, or boron oxide), or that the impurities are quantified (loss on ignition). Evaporation losses during glass melting should be considered during the selection of the raw materials, e.g., sodium selenite may be preferred over easily evaporating SeO2. Also, more readily reacting raw materials may be preferred over relatively inert ones, such as Al(OH)3 over Al2O3. Usually, the melts are carried out in platinum crucibles to reduce contamination from the crucible material. Glass homogeneity is achieved by homogenizing the raw materials mixture (glass batch), by stirring the melt, and by crushing and re-melting the first melt. The obtained glass is usually annealed to prevent breakage during processing.
To make glass from materials with poor glass forming tendencies, novel techniques are used to increase cooling rate, or reduce crystal nucleation triggers. Examples of these techniques include aerodynamic levitation (cooling the melt whilst it floats on a gas stream), splat quenching (pressing the melt between two metal anvils) and roller quenching (pouring the melt through rollers).
Network glasses
thumb|right|A CD-RW (CD). Chalcogenide glasses form the basis of rewritable CD and DVD solid-state memory technology.
Some glasses that do not include silica as a major constituent may have physico-chemical properties useful for their application in fiber optics and other specialized technical applications. These include fluoride glasses, aluminosilicates, phosphate glasses, borate glasses, and chalcogenide glasses.
There are three classes of components for oxide glasses: network formers, intermediates, and modifiers. The network formers (silicon, boron, germanium) form a highly cross-linked network of chemical bonds. The intermediates (titanium, aluminium, zirconium, beryllium, magnesium, zinc) can act as both network formers and modifiers, according to the glass composition. The modifiers (calcium, lead, lithium, sodium, potassium) alter the network structure; they are usually present as ions, compensated by nearby non-bridging oxygen atoms, bound by one covalent bond to the glass network and holding one negative charge to compensate for the positive ion nearby. Some elements can play multiple roles; e.g. lead can act both as a network former (Pb4+ replacing Si4+), or as a modifier.
The presence of non-bridging oxygens lowers the relative number of strong bonds in the material and disrupts the network, decreasing the viscosity of the melt and lowering the melting temperature.
The alkali metal ions are small and mobile; their presence in glass allows a degree of electrical conductivity, especially in molten state or at high temperature. Their mobility decreases the chemical resistance of the glass, allowing leaching by water and facilitating corrosion. Alkaline earth ions, with their two positive charges and requirement for two non-bridging oxygen ions to compensate for their charge, are much less mobile themselves and also hinder diffusion of other ions, especially the alkalis. The most common commercial glasses contain both alkali and alkaline earth ions (usually sodium and calcium), for easier processing and satisfying corrosion resistance. Corrosion resistance of glass can be achieved by dealkalization, removal of the alkali ions from the glass surface by reaction with e.g. sulfur or fluorine compounds. The presence of alkali metal ions also has a detrimental effect on the loss tangent of the glass and on its electrical resistance; glasses for electronics (sealing, vacuum tubes, lamps, etc.) have to take this into account.
Addition of lead(II) oxide lowers melting point, lowers viscosity of the melt, and increases refractive index. Lead oxide also facilitates solubility of other metal oxides and is used in colored glasses. The viscosity decrease of lead glass melt is very significant (roughly 100 times in comparison with soda glasses); this allows easier removal of bubbles and working at lower temperatures, hence its frequent use as an additive in vitreous enamels and glass solders. The high ionic radius of the Pb2+ ion renders it highly immobile in the matrix and hinders the movement of other ions; lead glasses therefore have high electrical resistance, about two orders of magnitude higher than soda-lime glass (10^8.5 vs 10^6.5 Ohm·cm, DC at 250 °C). For more details, see lead glass.
Addition of fluorine lowers the dielectric constant of glass. Fluorine is highly electronegative and attracts the electrons in the lattice, lowering the polarizability of the material. Such silicon dioxide-fluoride is used in manufacture of integrated circuits as an insulator. High levels of fluorine doping lead to formation of volatile SiF2O and such glass is then thermally unstable. Stable layers were achieved with dielectric constant down to about 3.5–3.7.
Amorphous metals
thumb|Samples of amorphous metal, with millimeter scale
In the past, small batches of amorphous metals with high surface area configurations (ribbons, wires, films, etc.) have been produced through the implementation of extremely rapid rates of cooling. This was initially termed "splat cooling" by doctoral student W. Klement at Caltech, who showed that cooling rates on the order of millions of degrees per second are sufficient to impede the formation of crystals, and the metallic atoms become "locked into" a glassy state. Amorphous metal wires have been produced by sputtering molten metal onto a spinning metal disk. More recently a number of alloys have been produced in layers with thickness exceeding 1 millimeter. These are known as bulk metallic glasses (BMG). Liquidmetal Technologies sell a number of zirconium-based BMGs. Batches of amorphous steel have also been produced that demonstrate mechanical properties far exceeding those found in conventional steel alloys.
In 2004, NIST researchers presented evidence that an isotropic non-crystalline metallic phase (dubbed "q-glass") could be grown from the melt. This phase is the first phase, or "primary phase", to form in the Al-Fe-Si system during rapid cooling. Interestingly, experimental evidence indicates that this phase forms by a first-order transition. Transmission electron microscopy (TEM) images show that the q-glass nucleates from the melt as discrete particles, which grow spherically with a uniform growth rate in all directions. The diffraction pattern shows it to be an isotropic glassy phase. Yet there is a nucleation barrier, which implies an interfacial discontinuity (or internal surface) between the glass and the melt.
Electrolytes
Electrolytes or molten salts are mixtures of different ions. In a mixture of three or more ionic species of dissimilar size and shape, crystallization can be so difficult that the liquid can easily be supercooled into a glass.
The best-studied example is Ca0.4K0.6(NO3)1.4.
Aqueous solutions
Some aqueous solutions can be supercooled into a glassy state, for instance LiCl:RH2O in the composition range 4<R<8.
Molecular liquids
A molecular liquid is composed of molecules that do not form a covalent network but interact only through weak van der Waals forces or through transient hydrogen bonds.
Many molecular liquids can be supercooled into a glass; some are excellent glass formers that normally do not crystallize.
A widely known example is sugar glass.
Under extremes of pressure and temperature, solids may exhibit large structural and physical changes that can lead to polyamorphic phase transitions. In 2006 Italian scientists created an amorphous phase of carbon dioxide using extreme pressure. The substance was named amorphous carbonia (a-CO2) and exhibits an atomic structure resembling that of silica.Carbon dioxide glass created in the lab. NewScientist. 15 June 2006.
Polymers
Important polymer glasses include amorphous and glassy pharmaceutical compounds. These are useful because the solubility of the compound is greatly increased when it is amorphous compared to the same crystalline composition. Many emerging pharmaceuticals are practically insoluble in their crystalline forms.Understanding polymer glasses
Colloidal glasses
Concentrated colloidal suspensions may exhibit a distinct glass transition as function of particle concentration or density.
In cell biology there is recent evidence suggesting that the cytoplasm behaves like a colloidal glass approaching the liquid-glass transition.http://www.cell.com/abstract/S0092-8674%2813%2901479-7 During periods of low metabolic activity, as in dormancy, the cytoplasm vitrifies and prohibits the movement of larger cytoplasmic particles while allowing the diffusion of smaller ones throughout the cell.
Glass-ceramics
right|thumb|A high-strength glass-ceramic cooktop with negligible thermal expansion.
Glass-ceramic materials share many properties with both non-crystalline glass and crystalline ceramics. They are formed as a glass, and then partially crystallized by heat treatment. For example, the microstructure of whiteware ceramics frequently contains both amorphous and crystalline phases. Crystalline grains are often embedded within a non-crystalline intergranular phase of grain boundaries. When applied to whiteware ceramics, vitreous means the material has an extremely low permeability to liquids, often but not always water, when determined by a specified test regime.
The term mainly refers to a mix of lithium and aluminosilicates that yields an array of materials with interesting thermomechanical properties. The most commercially important of these have the distinction of being impervious to thermal shock. Thus, glass-ceramics have become extremely useful for countertop cooking. The negative thermal expansion coefficient (CTE) of the crystalline ceramic phase can be balanced with the positive CTE of the glassy phase. At a certain point (~70% crystalline) the glass-ceramic has a net CTE near zero. This type of glass-ceramic exhibits excellent mechanical properties and can sustain repeated and quick temperature changes up to 1000 °C.
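The balancing act described above can be illustrated with a simple rule-of-mixtures estimate. The sketch below assumes linear, volume-weighted mixing of the two phases, and the two CTE values are illustrative assumptions for a lithium-aluminosilicate system rather than data taken from this article.

```python
# Rule-of-mixtures sketch for the net CTE of a glass-ceramic:
#   alpha_net = f * alpha_crystal + (1 - f) * alpha_glass
# Both CTE values below are illustrative assumptions.

alpha_crystal = -1.0e-6   # /K, negative-CTE crystalline phase
alpha_glass   = +3.0e-6   # /K, residual glassy phase

def net_cte(f_crystal: float) -> float:
    """Linear (volume-weighted) mixing of the two phases."""
    return f_crystal * alpha_crystal + (1 - f_crystal) * alpha_glass

# Crystalline fraction at which the net CTE crosses zero:
f_zero = alpha_glass / (alpha_glass - alpha_crystal)
print(f"zero-CTE crystalline fraction: {f_zero:.0%}")                   # 75% with these inputs
print(f"net CTE at 70% crystalline: {net_cte(0.70) * 1e6:+.1f} ppm/K")  # +0.2 ppm/K
```

With these illustrative inputs the zero crossing falls near 70–75% crystalline, in line with the approximate figure quoted above; the exact crossover depends on the phases actually present and on the heat treatment.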
Structure
As in other amorphous solids, the atomic structure of a glass lacks the long-range periodicity observed in crystalline solids. Due to chemical bonding characteristics glasses do possess a high degree of short-range order with respect to local atomic polyhedra.
thumb|left|The amorphous structure of glassy silica (SiO2) in two dimensions. No long-range order is present, although there is local ordering with respect to the tetrahedral arrangement of oxygen (O) atoms around the silicon (Si) atoms.
Formation from a supercooled liquid
In physics, the standard definition of a glass (or vitreous solid) is a solid formed by rapid melt quenching, ASTM definition of glass from 1945; also: DIN 1259, Glas – Begriffe für Glasarten und Glasgruppen, September 1986 although the term glass is often used to describe any amorphous solid that exhibits a glass transition temperature Tg. For melt quenching, if the cooling is sufficiently rapid (relative to the characteristic crystallization time) then crystallization is prevented and instead the disordered atomic configuration of the supercooled liquid is frozen into the solid state at Tg. The tendency for a material to form a glass while quenched is called glass-forming ability. This ability can be predicted by the rigidity theory. Generally, a glass exists in a structurally metastable state with respect to its crystalline form, although in certain circumstances, for example in atactic polymers, there is no crystalline analogue of the amorphous phase.
Some people consider glass to be a liquid due to its lack of a first-order phase transition where certain thermodynamic variables such as volume, entropy and enthalpy are discontinuous through the glass transition range. The glass transition may be described as analogous to a second-order phase transition where the intensive thermodynamic variables such as the thermal expansivity and heat capacity are discontinuous. Nonetheless, the equilibrium theory of phase transformations does not entirely hold for glass, and hence the glass transition cannot be classed as one of the classical equilibrium phase transformations in solids.
Glass is an amorphous solid. It exhibits an atomic structure close to that observed in the supercooled liquid phase but displays all the mechanical properties of a solid."Philip Gibbs" Glass Worldwide, (May/June 2007), pp. 14–18 The notion that glass flows to an appreciable extent over extended periods of time is not supported by empirical research or theoretical analysis (see viscosity of amorphous materials). Laboratory measurements of room temperature glass flow do show a motion consistent with a material viscosity on the order of 10^17–10^18 Pa·s.
Although the atomic structure of glass shares characteristics of the structure in a supercooled liquid, glass tends to behave as a solid below its glass transition temperature. A supercooled liquid behaves as a liquid, but it is below the freezing point of the material, and in some cases will crystallize almost instantly if a crystal is added as a core. The change in heat capacity at a glass transition and a melting transition of comparable materials are typically of the same order of magnitude, indicating that the change in active degrees of freedom is comparable as well. Both in a glass and in a crystal it is mostly only the vibrational degrees of freedom that remain active, whereas rotational and translational motion is arrested. This helps to explain why both crystalline and non-crystalline solids exhibit rigidity on most experimental time scales.
Behavior of antique glass
The observation that old windows are sometimes found to be thicker at the bottom than at the top is often offered as supporting evidence for the view that glass flows over a timescale of centuries, the assumption being that the glass has exhibited the liquid property of flowing from one shape to another. This assumption is incorrect, as once solidified, glass stops flowing. The reason for the observation is that in the past, when panes of glass were commonly made by glassblowers, the technique used was to spin molten glass so as to create a round, mostly flat and even plate (the crown glass process, described above). This plate was then cut to fit a window. The pieces were not absolutely flat; the edges of the disk became a different thickness as the glass spun. When installed in a window frame, the glass would be placed with the thicker side down both for the sake of stability and to prevent water accumulating in the lead cames at the bottom of the window. Occasionally such glass has been found installed with the thicker side at the top, left or right.
Mass production of glass window panes in the early twentieth century caused a similar effect. In glass factories, molten glass was poured onto a large cooling table and allowed to spread. The resulting glass is thicker at the location of the pour, located at the center of the large sheet. These sheets were cut into smaller window panes with nonuniform thickness, typically with the location of the pour centered in one of the panes (known as "bull's-eyes") for decorative effect. Modern glass intended for windows is produced as float glass and is very uniform in thickness.
Several other points can be considered that contradict the "cathedral glass flow" theory:
Writing in the American Journal of Physics, the materials engineer Edgar D. Zanotto states "…the predicted relaxation time for GeO2 at room temperature is 10^32 years. Hence, the relaxation period (characteristic flow time) of cathedral glasses would be even longer." (10^32 years is many times longer than the estimated age of the universe.)
If medieval glass has flowed perceptibly, then ancient Roman and Egyptian objects should have flowed proportionately more—but this is not observed. Similarly, prehistoric obsidian blades should have lost their edge; this is not observed either (although obsidian may have a different viscosity from window glass).
If glass flows at a rate that allows changes to be seen with the naked eye after centuries, then the effect should be noticeable in antique telescopes. Any slight deformation in the antique telescopic lenses would lead to a dramatic decrease in optical performance, a phenomenon that is not observed.
Gallery
See also
Caneworking
Fabrication and testing of optical components
Fire glass
Knitted glass
Glass recycling
Kimberley points
Murrine
Optical lens design
Prince Rupert's Drop
Superglass
Tektite
Vitrified sand
Toughened glass
Low-iron glass
Mirror
Stained glass
References
Further reading
Noel C. Stokes; The Glass and Glazing Handbook; Standards Australia; SAA HB125–1998
Stookey, S. Donald. Explorations in Glass: An Autobiography. Wiley, 2000. ISBN 978-1-57498-124-7
Vogel, Werner. Chemistry of Glass. Wiley, 1985. ISBN 978-0-916094-73-7
External links
The Story of Glass Making in Canada from The Canadian Museum of Civilization.
"How Your Glass Ware Is Made" by George W. Waltz, February 1951, Popular Science.
All About Glass from the Corning Museum of Glass: a collection of articles, multimedia, and virtual books all about glass, including the Glass Dictionary.
Glass Encyclopedia from 20th Century Glass: a comprehensive guide to all types of antique and collectible glass, with information, pictures and references.
National Glass Association the largest trade association representing the flat (architectural), auto glass, and window & door industries
Category:Dielectrics
Category:Packaging materials
Category:Sculpture materials
Sanskrit
Sanskrit (written in the Devanagari script; originally "refined speech") is the primary sacred language of Hinduism and Mahāyāna Buddhism, a philosophical language in Hinduism, Jainism, Buddhism and Sikhism. It was also a literary language that was in use as a lingua franca in ancient and medieval South Asia. Quote: "Sanskrit was another important lingua franca in the ancient world that was widely used in South Asia and in the context of Hindu and Buddhist religions in neighboring areas as well. (...) The spread of South Asian cultural influence to Southeast Asia, Central Asia and East Asia meant that Sanskrit was also used in these areas, especially in a religious context and political elites."; Quote: "Sanskrit served as the lingua franca of ancient India, just as Latin did in medieval Europe" It is a standardised dialect of Old Indo-Aryan, originating as Vedic Sanskrit and tracing its linguistic ancestry back to Proto-Indo-Iranian and Proto-Indo-European.Burrow, T. (2001). The Sanskrit Language. Faber: Chicago p. v & ch. 1 Today it is listed as one of the 22 scheduled languages of India and is an official language of the state of Uttarakhand. As one of the oldest Indo-European languages for which substantial written documentation exists, Sanskrit holds a prominent position in Indo-European studies.
The body of Sanskrit literature encompasses a rich tradition of poetry and drama as well as scientific, technical, philosophical and religious texts. Sanskrit continues to be widely used as a ceremonial language in Hindu religious rituals and Buddhist practice in the form of hymns and chants. Spoken Sanskrit has been revived in some villages with traditional institutions, and there are attempts to enhance its popularity.
Name
right|thumb|Ancient Sanskrit on hemp-based paper. Hemp fiber was commonly used in the production of paper from 200 BCE to the late 1800s.
The Sanskrit verbal adjective may be translated as "put together, constructed, well or completely formed; refined, adorned, highly elaborated". It is derived from the root word "to put together, compose, arrange, prepare".
As a term for "refined or elaborated speech" the adjective appears only in Epic and Classical Sanskrit in the Manusmṛti and the Mahabharata. The language referred to as "the cultured language" has by definition always been a "sacred" and "sophisticated" language, used for religious and learned discourse in ancient India, in contrast to the language spoken by the people, "natural, artless, normal, ordinary".
Variants
The pre-Classical form of Sanskrit is known as Vedic Sanskrit, with the language of the Rigveda being the oldest and most archaic stage preserved, dating back to the early second millennium BCE.
Classical Sanskrit is the standard register as laid out in the grammar of , around the fourth century BCE. Its position in the cultures of Greater India is akin to that of Latin and Ancient Greek in Europe and it has significantly influenced most modern languages of the Indian subcontinent, particularly in India, Bangladesh, Pakistan, Sri Lanka and Nepal.
Vedic Sanskrit
thumb|300px|Rigveda (padapatha) manuscript in Devanagari, early 19th century
Sanskrit, as defined by , evolved out of the earlier Vedic form. The present form of Vedic Sanskrit can be traced back to as early as the second millennium BCE (for Rig-vedic). Scholars often distinguish Vedic Sanskrit and Classical or "Pāṇinian" Sanskrit as separate dialects. Although they are quite similar, they differ in a number of essential points of phonology, vocabulary, grammar and syntax. Vedic Sanskrit is the language of the Vedas, a large collection of hymns, incantations (Samhitas) and theological and religio-philosophical discussions in the Brahmanas and Upanishads. Modern linguists consider the metrical hymns of the Rigveda Samhita to be the earliest, composed by many authors over several centuries of oral tradition. The end of the Vedic period is marked by the composition of the Upanishads, which form the concluding part of the traditional Vedic corpus; however, the early Sutras are Vedic, too, both in language and content.
Classical Sanskrit
For nearly 2000 years, Sanskrit was the language of a cultural order that exerted influence across South Asia, Inner Asia, Southeast Asia, and to a certain extent East Asia. A significant form of post-Vedic Sanskrit is found in the Sanskrit of Indian epic poetry—the Ramayana and Mahabharata. The deviations from in the epics are generally considered to be on account of interference from Prakrits, or innovations, and not because they are pre-Paninian. Traditional Sanskrit scholars call such deviations ārṣa (आर्ष), meaning 'of the ṛṣis', the traditional title for the ancient authors. In some contexts, there are also more "prakritisms" (borrowings from common speech) than in Classical Sanskrit proper. Buddhist Hybrid Sanskrit is a literary language heavily influenced by the Middle Indo-Aryan languages, based on early Buddhist Prakrit texts which subsequently assimilated to the Classical Sanskrit standard in varying degrees.
There were four principal dialects of classical Sanskrit: (Northwestern, also called Northern or Western), (lit., middle country), (Eastern) and (Southern, arose in the Classical period). The predecessors of the first three dialects are attested in Vedic Brāhmaṇas, of which the first one was regarded as the purest ().
Contemporary usage
As a spoken language
In the 2001 census of India, Sanskrit was reported by 14,135 people as their native language, by 1,234,931 people as a second language, and by 3,742,223 people as a third language. Since the 1990s, movements to spread spoken Sanskrit have been increasing. Organisations like Samskrita Bharati conduct Speak Sanskrit workshops to popularise the language.
Indian newspapers have published reports about several villages, where, as a result of recent revival attempts, large parts of the population, including children, are learning Sanskrit and are even using it to some extent in everyday communication:
Mattur, Shimoga district, Karnataka
Jhiri, Rajgarh district, Madhya Pradesh
Ganoda, Banswara district, Rajasthan
Shyamsundarpur, Kendujhar district, Odisha
According to the 2011 national census of Nepal, 1,669 people use Sanskrit as their native language.
In official use
In India, Sanskrit is among the 14 original languages of the Eighth Schedule to the Constitution. The state of Uttarakhand in India has ruled Sanskrit as its second official language. In October 2012 social activist Hemant Goswami filed a writ petition in the Punjab and Haryana High Court for declaring Sanskrit as a 'minority' language.
Contemporary literature and patronage
More than 3,000 Sanskrit works have been composed since India's independence in 1947. Much of this work has been judged of high quality, in comparison to both classical Sanskrit literature and modern literature in other Indian languages.
The Sahitya Akademi has given an award for the best creative work in Sanskrit every year since 1967. In 2009, Satya Vrat Shastri became the first Sanskrit author to win the Jnanpith Award, India's highest literary award.
In music
Sanskrit is used extensively in the Carnatic and Hindustani branches of classical music. Kirtanas, bhajans, stotras, and shlokas of Sanskrit are popular throughout India. The Samaveda uses musical notation in several of its recensions.
In Mainland China, musicians such as Sa Dingding have written pop songs in Sanskrit.
In mass media
Over 90 weeklies, fortnightlies and quarterlies are published in Sanskrit. Sudharma, a daily newspaper in Sanskrit, has been published out of Mysore, India, since 1970, while Sanskrit Vartman Patram and Vishwasya Vrittantam started in Gujarat during the last five years.
Since 1974, there has been a short daily news broadcast on state-run All India Radio. These broadcasts are also made available on the internet on AIR's website. Sanskrit news is broadcast on TV and on the internet through the DD National channel at 6:55 AM IST.
In liturgy
Sanskrit is the sacred language of various Hindu, Buddhist, and Jain traditions. It is used during worship in Hindu temples throughout the world. In Newar Buddhism, it is used in all monasteries, while Mahayana and Tibetan Buddhist religious texts and sutras are in Sanskrit as well as vernacular languages. Jain texts are written in Sanskrit, including the Tattvartha sutra, the Ratnakaranda śrāvakācāra, the Bhaktamara Stotra and the Agamas.
right|thumb|400px|Devi Mahatmya palm-leaf manuscript in an early Bhujimol script, Bihar or Nepal, 11th century
It is also popular amongst the many practitioners of yoga in the West, who find the language helpful for understanding texts such as the Yoga Sutras of Patanjali.
Symbolic usage
In Nepal, India and Indonesia, Sanskrit phrases are widely used as mottoes for various national, educational and social organisations:
India: Satyameva Jayate meaning: Truth alone triumphs.
Nepal: Janani Janmabhoomischa Swargadapi Gariyasi meaning: Mother and motherland are superior to heaven.
Indonesia: In Indonesia, Sanskrit terms and phrases are widely used as mottoes of the armed forces and other national organizations (see Indonesian Armed Forces mottoes). Rastra Sewakottama (राष्ट्र सेवकोत्तम; "People's Main Servants") is the official motto of the Indonesian National Police, Tri Dharma Eka Karma (त्रीधर्म एक कर्म) is the official motto of the Indonesian Military, Kartika Eka Paksi (कार्तिक एक पक्षी; "Unmatchable Bird with Noble Goals") is the official motto of the Indonesian Army, Adhitakarya Mahatvavirya Nagarabhakti (अधीतकार्य महत्ववीर्य नगरभक्ती; "Hard-working Knights Serving Bravery as Nation's Hero") is the official motto of the Indonesian Military Academy, Upakriya Labdha Prayojana Balottama (उपकृया लब्ध प्रयोजन बालोत्तम; "Purpose of The Unit is to Give The Best Service to The Nation by Finding The Perfect Soldier") is the official motto of the Army Psychological Corps, Karmanye Vadikaraste Mafalesu Kadachana (कर्मायने वदीकरस्ते माफलेशु कदाचन; "Working Without Counting The Profit and Loss") is the official motto of the Air-Force Special Forces (Paskhas), and Jalesu Bhumyamcha Jayamahe (जलेशु भूम्यं च जयमहे; "On The Sea and Land We Are Glorious") is the official motto of the Indonesian Marine Corps. Many other units and organizations in Indonesia, both in the armed forces and civil, also use Sanskrit for their mottoes and other purposes. Although Indonesia is a Muslim-majority country, it retains major Hindu and Indian cultural and traditional influence, dating from prehistoric times, especially on the islands of Java and Bali.
Many of India's and Nepal's scientific and administrative terms are named in Sanskrit. The Indian guided-missile program, commenced in 1983 by the Defence Research and Development Organisation, named the five missiles (ballistic and others) that it developed Prithvi, Agni, Akash, Nag, and Trishul. India's first modern fighter aircraft is named HAL Tejas.
Historical usage
Origin and development
Sanskrit is a member of the Indo-Iranian subfamily of the Indo-European family of languages. Its closest ancient relatives are the Iranian languages Avestan and Old Persian.
In order to explain the common features shared by Sanskrit and other Indo-European languages, the Indo-Aryan migration theory states that the original speakers of what became Sanskrit arrived in the Indian subcontinent from the north-west some time during the early second millennium BCE. Evidence for such a theory includes the close relationship between the Indo-Iranian tongues and the Baltic and Slavic languages, vocabulary exchange with the non-Indo-European Uralic languages, and the nature of the attested Indo-European words for flora and fauna.
The earliest attested Sanskrit texts are religious texts of the Rigveda, from the mid-to-late second millennium BCE. No written records from such an early period survive, if they ever existed. However, scholars are confident that the oral transmission of the texts is reliable: they were ceremonial literature whose correct pronunciation was considered crucial to its religious efficacy.
From the Rigveda until the time of Pāṇini (fourth century BCE) the development of the early Vedic language can be observed in other Vedic texts: the Samaveda, Yajurveda, Atharvaveda, Brahmanas, and Upanishads. During this time, the prestige of the language, its use for sacred purposes, and the importance attached to its correct enunciation all served as powerful conservative forces resisting the normal processes of linguistic change. However, there is a clear, five-level linguistic development of Vedic from the Rigveda to the language of the Upanishads and the earliest sutras such as the Baudhayana sutras.
Standardisation by Panini
The oldest surviving Sanskrit grammar is Pāṇini's Aṣṭādhyāyī ("Eight-Chapter Grammar"). It is essentially a prescriptive grammar, i.e., an authority that defines Sanskrit, although it contains descriptive parts, mostly to account for some Vedic forms that had become rare in Pāṇini's time. Classical Sanskrit became fixed with the grammar of Pāṇini (roughly 500 BCE), and remains in use as a learned language through the present day.
Coexistence with vernacular languages
Sanskrit linguist Madhav Deshpande says that when the term "Sanskrit" arose it was not thought of as a specific language set apart from other languages, but rather as a particularly refined or perfected manner of speaking. Knowledge of Sanskrit was a marker of social class and educational attainment in ancient India, and the language was taught mainly to members of the higher castes through the close analysis of Vyākaraṇins such as Pāṇini and Patanjali, who exhorted the use of proper Sanskrit at all times, especially during ritual. Sanskrit, as the learned language of Ancient India, thus existed alongside the vernacular Prakrits, which were Middle Indo-Aryan languages. However, linguistic change led to an eventual loss of mutual intelligibility.
Many Sanskrit dramas also indicate that the language coexisted with Prakrits, spoken by multilingual speakers with a more extensive education. Sanskrit speakers were almost always multilingual. In the medieval era, Sanskrit continued to be spoken and written, particularly by learned Brahmins for scholarly communication. This was a thin layer of Indian society, but covered a wide geography. Centres like Varanasi, Paithan, Pune and Kanchipuram had a strong presence of teaching and debating institutions, and high classical Sanskrit was maintained until British times.
Decline
A number of sociolinguistic studies of spoken Sanskrit strongly suggest that its oral use today is limited, its development having ceased sometime in the past.
Sheldon Pollock argues that "most observers would agree that, in some crucial way, Sanskrit is dead". Pollock has further argued that, while Sanskrit continued to be used in literary cultures in India, it was never adapted to express the changing forms of subjectivity and sociality as embodied and conceptualised in the modern age. Instead, it was reduced to "reinscription and restatements" of ideas already explored, and any creativity was restricted to hymns and verses. A notable exception is the military references of Nīlakaṇṭha Caturdhara's 17th-century commentary on the Mahābhārata.
Hatcher argues that modern works continue to be produced in Sanskrit. Hanneder, however, has argued that modern works in Sanskrit are either ignored or have their "modernity" contested.
When the British imposed a Western-style education system in India in the 19th century, knowledge of Sanskrit and ancient literature continued to flourish as the study of Sanskrit changed from a more traditional style into a form of analytical and comparative scholarship mirroring that of Europe.
Public education and popularisation
Adult and continuing education
Attempts at reviving the Sanskrit language have been undertaken in the Republic of India since its foundation in 1947 (it was included in the 14 original languages of the Eighth Schedule to the Constitution).
Samskrita Bharati is an organisation working for Sanskrit revival. The "All-India Sanskrit Festival" (since 2002) holds composition contests. The 1991 Indian census reported 49,736 fluent speakers of Sanskrit. Sanskrit learning programmes also feature on the lists of most AIR broadcasting centres. The Mattur village in central Karnataka claims to have native speakers of Sanskrit among its population. Inhabitants of all castes learn Sanskrit starting in childhood and converse in the language. Even the local Muslims converse in Sanskrit. Historically, the village was given by king Krishnadevaraya of the Vijayanagara Empire to Vedic scholars and their families, while people in his kingdom spoke Kannada and Telugu. Another effort concentrates on preserving and passing along the oral tradition of the Vedas; one such organisation, based out of Hyderabad, has been digitising the Vedas by recording recitations of Vedic Pandits.
School curricula
thumb|Sanskrit festival at Pramati Hillview Academy, Mysore, India.
The Central Board of Secondary Education of India (CBSE), along with several other state education boards, has made Sanskrit an alternative option to the state's own official language as a second or third language choice in the schools it governs. In such schools, learning Sanskrit is an option for grades 5 to 8 (Classes V to VIII). This is true of most schools affiliated with the Indian Certificate of Secondary Education (ICSE) board, especially in states where the official language is Hindi. Sanskrit is also taught in traditional gurukulas throughout India.
In the West
St James Junior School in London, England, offers Sanskrit as part of the curriculum. In the United States, since September 2009, high school students have been able to receive credits as Independent Study or toward Foreign Language requirements by studying Sanskrit, as part of the "SAFL: Samskritam as a Foreign Language" program coordinated by Samskrita Bharati. In Australia, the Sydney private boys' high school Sydney Grammar School offers Sanskrit from years 7 through to 12, including for the Higher School Certificate.
Universities
A list of Sanskrit universities is given below in chronological order of establishment:
1791: Sampurnanand Sanskrit University, Varanasi
1824: Sanskrit College, Kolkata
1876: Sadvidya Pathashala, Mysore
1961: Kameshwar Singh Darbhanga Sanskrit University, Darbhanga
1962: Rashtriya Sanskrit Vidyapeetha, Tirupati
1962: Shri Lal Bahadur Shastri Rashtriya Sanskrit Vidyapeetha, New Delhi
1970: Rashtriya Sanskrit Sansthan, New Delhi
1981: Shri Jagannath Sanskrit University, Puri
1986: Nepal Sanskrit University, Nepal
1993: Sree Sankaracharya University of Sanskrit, Kalady, Kerala
1997: Kavikulaguru Kalidas Sanskrit University, Ramtek
2001: Jagadguru Ramanandacharya Rajasthan Sanskrit University, Jaipur
2005: Uttarakhand Sanskrit University, Haridwar
2005: Shree Somnath Sanskrit University, Somnath-Veraval
2008: Maharshi Panini Sanskrit Evam Vedic Vishwavidyalaya, Ujjain
2011: Karnataka Samskrit University, Bangalore
Many universities throughout the world train and employ Sanskrit scholars, either within a separate Sanskrit department or as part of a broader focus area, such as South Asian studies or Linguistics. For example, Delhi University has about 400 Sanskrit students, about half of whom are in post-graduate programmes.
European scholarship
thumb|A poem by the ancient Indian poet Vallana (ca. 900 – 1100 CE) on the side wall of a building at the Haagweg 14 in Leiden, Netherlands
European scholarship in Sanskrit, begun by Heinrich Roth (1620–1668) and Johann Ernst Hanxleden (1681–1731), is considered responsible for the discovery of an Indo-European language family by Sir William Jones (1746–1794). This research played an important role in the development of Western philology, or historical linguistics.
Sir William Jones was one of the most influential philologists of his time. He told The Asiatic Society in Calcutta on 2 February 1786:
The Sanskrit language, whatever be its antiquity, is of a wonderful structure; more perfect than the Greek, more copious than the Latin, and more exquisitely refined than either, yet bearing to both of them a stronger affinity, both in the roots of verbs and in the forms of grammar, than could have been produced by accident; so strong, indeed, that no philologer could examine them all three, without believing them to have sprung from some common source, which, perhaps, no longer exists.
British attitudes
Orientalist scholars of the 18th century like Sir William Jones marked a wave of enthusiasm for Indian culture and for Sanskrit. According to Thomas Trautmann, after this period of "Indomania", a certain hostility to Sanskrit and to Indian culture in general began to assert itself in early 19th century Britain, manifested by a neglect of Sanskrit in British academia. This was the beginning of a general push in favour of the idea that India should be culturally, religiously and linguistically assimilated to Britain as far as possible. Trautmann considers two separate and logically opposite sources for the growing hostility: one was "British Indophobia", which he calls essentially a developmentalist, progressivist, liberal, and non-racial-essentialist critique of Hindu civilisation as an aid for the improvement of India along European lines; the other was scientific racism, a theory of the English "common-sense view" that Indians constituted a "separate, inferior and unimprovable race".
Phonology
Classical Sanskrit distinguishes about 36 phonemes; the presence of allophony leads the writing systems to generally distinguish 48 phones, or sounds. The sounds are traditionally listed in the order vowels (Ac), diphthongs (Hal), anusvara and visarga, plosives (Sparśa), nasals, and finally the liquids and fricatives, and are written in the International Alphabet of Sanskrit Transliteration (IAST).
Writing system
thumb|Kashmir Shaiva manuscript in the Śāradā script (c. 17th century)
Sanskrit originated in an oral society, and the oral tradition was maintained through the development of early classical Sanskrit literature. Some scholars, such as Jack Goody, suggest that the Vedic Sanskrit texts are not the product of a purely oral society, basing this view on a comparison of inconsistencies in the transmitted versions of literature from various oral societies, such as those of Greece, Serbia and other cultures, and noting that the Vedic literature is too consistent and vast to have been composed and transmitted orally across generations without being written down. These scholars add that the Vedic texts likely involved both a written and an oral tradition, calling them "parallel products of a literate society".
Sanskrit has no native script of its own, and historical evidence suggests that it has been written in various scripts on a variety of media such as palm leaves, cloth, paper, rock and metal sheets, at least by the time of Alexander the Great's arrival in the northwestern Indian subcontinent in the 1st millennium BCE.
thumb|Illustration of Devanagari as used for writing Sanskrit
The earliest known rock inscriptions in Sanskrit date to the mid second century CE (c. 150, the Junagadh rock inscription of Rudradaman). They are in the Brāhmī script, which was originally used for Prakrit, not Sanskrit. It has been described as a paradox that the first evidence of written Sanskrit occurs centuries later than that of the Prakrit languages which are its linguistic descendants. In northern India, there are Brāhmī inscriptions dating from the third century BCE onwards, the oldest appearing on the famous Prakrit pillar inscriptions of king Ashoka. The earliest South Indian inscriptions in Tamil Brahmi, written in early Tamil, belong to the same period. When Sanskrit was written down, it was first used for texts of an administrative, literary or scientific nature. The sacred hymns and verse were preserved orally, and were set down in writing "reluctantly" (according to one commentator), and at a comparatively late date.
thumb|300px|Sanskrit in modern Indian and other Brahmi scripts: May Śiva bless those who take delight in the language of the gods. (Kālidāsa)
Brahmi evolved into a multiplicity of Brahmic scripts, many of which were used to write Sanskrit. Roughly contemporary with the Brahmi, Kharosthi was used in the northwest of the subcontinent. Sometime between the fourth and eighth centuries, the Gupta script, derived from Brahmi, became prevalent. Around the eighth century, the Śāradā script evolved out of the Gupta script. The latter was displaced in its turn by Devanagari in the 11th or 12th century, with intermediary stages such as the Siddhaṃ script. In East India, the Bengali alphabet, and, later, the Odia alphabet, were used.
In the south, where Dravidian languages predominate, scripts used for Sanskrit include the Tamil, Kannada, Telugu, the Malayalam and Grantha alphabets.
Romanisation
Since the late 18th century, Sanskrit has been transliterated using the Latin alphabet. The system most commonly used today is the IAST (International Alphabet of Sanskrit Transliteration), which has been the academic standard since 1888. ASCII-based transliteration schemes have also evolved because of difficulties representing Sanskrit characters in computer systems. These include Harvard-Kyoto and ITRANS, a transliteration scheme that is used widely on the Internet, especially in Usenet and in email, for considerations of speed of entry as well as rendering issues. With the wide availability of Unicode-aware web browsers, IAST has become common online. It is also possible to type using an alphanumeric keyboard and transliterate to Devanagari using software like Mac OS X's international support.
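As a concrete illustration of how the ASCII-based schemes relate to IAST, the short Python sketch below converts Harvard-Kyoto input to IAST using a greedy longest-match over a partial mapping table. It is only a rough sketch under simplifying assumptions: the table covers just a subset of the scheme, and the function and table names are invented for this example rather than taken from any standard tool.

# Illustrative Harvard-Kyoto -> IAST converter (hypothetical helper,
# not part of any standard library). Greedy longest-match over a
# partial mapping; characters not in the table pass through unchanged.

HK_TO_IAST = {
    "A": "ā", "I": "ī", "U": "ū",
    "RR": "ṝ", "R": "ṛ",
    "G": "ṅ", "J": "ñ",
    "T": "ṭ", "Th": "ṭh", "D": "ḍ", "Dh": "ḍh", "N": "ṇ",
    "z": "ś", "S": "ṣ",
    "M": "ṃ", "H": "ḥ",
}

def hk_to_iast(text):
    keys = sorted(HK_TO_IAST, key=len, reverse=True)  # try longer keys first
    out, i = [], 0
    while i < len(text):
        for key in keys:
            if text.startswith(key, i):
                out.append(HK_TO_IAST[key])
                i += len(key)
                break
        else:
            out.append(text[i])  # letters shared by both schemes (a, k, t, ...)
            i += 1
    return "".join(out)

print(hk_to_iast("saMskRtam"))     # saṃskṛtam
print(hk_to_iast("bhagavadgItA"))  # bhagavadgītā

Schemes such as ITRANS differ in detail (for instance in how long vowels and retroflexes are keyed), but the same table-plus-longest-match idea underlies most practical converters.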
European scholars in the 19th century generally preferred Devanagari for the transcription and reproduction of whole texts and lengthy excerpts. However, references to individual words and names in texts composed in European Languages were usually represented with Roman transliteration. From the 20th century onwards, because of production costs, textual editions edited by Western scholars have mostly been in Romanised transliteration.
Grammar
The Sanskrit grammatical tradition, Vyākaraṇa, one of the six Vedangas, began in the late Vedic period and culminated in the Aṣṭādhyāyī of Pāṇini, which consists of 3990 sutras (ca. fifth century BCE). About a century after Pāṇini (around 400 BCE), Kātyāyana composed Vārtikas on the Pāṇini sūtras. Patanjali, who lived three centuries after Pāṇini, wrote the Mahābhāṣya, the "Great Commentary" on the Aṣṭādhyāyī and Vārtikas. Because of these three ancient Vyākaraṇins (grammarians), this grammar is called Trimuni Vyākarana. To understand the meaning of the sutras, Jayaditya and Vāmana wrote a commentary, the Kāsikā, in 600 CE. Pāṇinian grammar is based on 14 Shiva sutras (aphorisms), where the whole mātrika (alphabet) is abbreviated. This abbreviation is called the Pratyāhara.
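The pratyāhāra device can be thought of as a compact span-encoding over the fourteen Shiva sutras: an abbreviation names a starting sound and a marker consonant, and denotes every sound from that start through the sutra closed by that marker. The Python sketch below illustrates this idea in a simplified form; the sound lists follow the traditional Shiva sutras, but the function and variable names are invented for this example, and the well-known ambiguities caused by repeated markers are ignored.

# Illustrative expansion of a pratyāhāra from the fourteen Shiva sutras.
# Each sutra is a list of sounds plus a final marker consonant ("it").

SHIVA_SUTRAS = [
    (["a", "i", "u"], "ṇ"),
    (["ṛ", "ḷ"], "k"),
    (["e", "o"], "ṅ"),
    (["ai", "au"], "c"),
    (["ha", "ya", "va", "ra"], "ṭ"),
    (["la"], "ṇ"),
    (["ña", "ma", "ṅa", "ṇa", "na"], "m"),
    (["jha", "bha"], "ñ"),
    (["gha", "ḍha", "dha"], "ṣ"),
    (["ja", "ba", "ga", "ḍa", "da"], "ś"),
    (["kha", "pha", "cha", "ṭha", "tha", "ca", "ṭa", "ta"], "v"),
    (["ka", "pa"], "y"),
    (["śa", "ṣa", "sa"], "r"),
    (["ha"], "l"),
]

def expand(start, marker):
    """Collect sounds from `start` up to the sutra ending in `marker`."""
    result, collecting = [], False
    for sounds, it in SHIVA_SUTRAS:
        for s in sounds:
            if s == start:
                collecting = True
            if collecting:
                result.append(s)
        if collecting and it == marker:
            return result
    return result

print(expand("a", "c"))   # "ac": the vowels a, i, u, ṛ, ḷ, e, o, ai, au
print(expand("i", "k"))   # "ik": i, u, ṛ, ḷ

In this way a two-character label stands for a whole class of sounds, which is what lets the Aṣṭādhyāyī state rules over sound classes so compactly.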
Sanskrit verbs are categorized into ten classes, which can be conjugated to form the present, imperfect, imperative, optative, perfect, aorist, future, and conditional moods and tenses. Before Classical Sanskrit, older forms also included a subjunctive mood. Each conjugational ending conveys person, number, and voice.
Nouns are highly inflected, including three grammatical genders, three numbers, and eight cases. Nominal compounds are common, and can include over 10 word stems.
Word order is free, though there is a strong tendency toward subject–object–verb, the original system of Vedic prose.
Influence on other languages
Indic languages
Sanskrit has greatly influenced the languages of India that grew from its vocabulary and grammatical base; for instance, Hindi is a "Sanskritised register" of the Khariboli dialect. All modern Indo-Aryan languages, as well as Munda and Dravidian languages, have borrowed many words either directly from Sanskrit (tatsama words), or indirectly via middle Indo-Aryan languages (tadbhava words). Words originating in Sanskrit are estimated at roughly fifty percent of the vocabulary of modern Indo-Aryan languages, as well as the literary forms of Malayalam and Kannada. Literary texts in Telugu are lexically Sanskrit or Sanskritised to an enormous extent, perhaps seventy percent or more. Marathi, another prominent language of western India, derives most of its words and its grammar from Sanskrit.Sugam Marathi Vyakaran & Lekhana. 2007. Nitin publications. Author: M.R.Walimbe Sanskrit words are often preferred over corresponding colloquial words in Marathi literary texts.Carey, William (1805). A Grammar of the Marathi Language. Serampur [sic]: Serampore Mission Press. ISBN 9781108056311.
Interaction with other languages
Sanskrit has also influenced Sino-Tibetan languages through the spread of Buddhist texts in translation. Buddhism was spread to China by Mahayana missionaries sent by Ashoka, mostly through translations of Buddhist Hybrid Sanskrit. Many terms were transliterated directly and added to the Chinese vocabulary. Chinese words like 剎那 chànà (Devanagari: क्षण 'instantaneous period') were borrowed from Sanskrit. Many Sanskrit texts survive only in Tibetan collections of commentaries to the Buddhist teachings, the Tengyur.
Sanskrit was a language for religious purposes and for the political elite in parts of medieval era Southeast Asia, Central Asia and East Asia. In Southeast Asia, languages such as Thai and Lao contain many loanwords from Sanskrit, as does Khmer. For example, in Thai, Ravana, the emperor of Lanka, is called Thosakanth, a derivation of his Sanskrit name Dāśakaṇṭha "having ten necks".
Many Sanskrit loanwords are also found in Austronesian languages, such as Javanese, particularly the older form in which nearly half the vocabulary is borrowed. Other Austronesian languages, such as traditional Malay and modern Indonesian, also derive much of their vocabulary from Sanskrit. Similarly, Philippine languages such as Tagalog have some Sanskrit loanwords, although more are derived from Spanish. A Sanskrit loanword encountered in many Southeast Asian languages is the word bhāṣā, or spoken language, which is used to refer to the names of many languages. English also has words of Sanskrit origin.
In popular culture
Satyagraha, an opera by Philip Glass, uses texts from the Bhagavad Gita, sung in Sanskrit. The closing credits of The Matrix Revolutions contain a prayer from the Brihadaranyaka Upanishad. The song "Cyber-raga" from Madonna's album Music includes Sanskrit chants, and Shanti/Ashtangi from her 1998 album Ray of Light, which won a Grammy, is the ashtanga vinyasa yoga chant. The lyrics include the mantra Om shanti. Composer John Williams featured choirs singing in Sanskrit for Indiana Jones and the Temple of Doom and in Star Wars: Episode I – The Phantom Menace. The theme song of Battlestar Galactica (2004) is the Gayatri Mantra, taken from the Rigveda. The lyrics of "The Child In Us" by Enigma also contain Sanskrit verses.
See also
Devanagari
Sanskrit numerals
References and notes
Further reading
External links
Sanskrit Lessons (free online from the Linguistics Research Center at UT Austin)
Samskrita Bharati, organisation supporting the usage of Sanskrit
Sanskrit Documents—Documents in ITX format of Upanishads, Stotras etc.
Sanskrit texts at Sacred Text Archive
Sanskrit Manuscripts in Cambridge Digital Library
Category:Ancient languages
Category:Classical Language in India
Category:Indo-Aryan languages
Category:Languages written in Devanagari
Category:Subject–object–verb languages
Category:Official languages of India
Iran

Iran, also known as Persia, officially the Islamic Republic of Iran, is a sovereign state in Western Asia. It is bordered to the northwest by Armenia, the de facto Nagorno-Karabakh Republic, and Azerbaijan; to the north by the Caspian Sea; to the northeast by Turkmenistan; to the east by Afghanistan and Pakistan; to the south by the Persian Gulf and the Gulf of Oman; and to the west by Turkey and Iraq. By land area, it is the second-largest country in the Middle East and the 18th-largest in the world. With 82.8 million inhabitants, Iran is the world's 17th-most-populous country. It is the only country with both a Caspian Sea and an Indian Ocean coastline. The country's central location in Eurasia and Western Asia, and its proximity to the Strait of Hormuz, make it of great geostrategic importance. Tehran is the country's capital and largest city, as well as its leading economic center.
Iran is heir to one of the world's oldest civilizations,Christopher A Whatley (2001). Bought and Sold for English Gold: The Union of 1707 (Tuckwell Press, 2001) beginning with the formation of the Proto-Elamite and Elamite kingdoms in 3200–2800 BC. The area was first unified by the Iranian Medes in 625 BC, who became the dominant cultural and political power in the region. Iran reached its greatest geographic extent during the Achaemenid Empire founded by Cyrus the Great in 550 BC, which at one time stretched from parts of Eastern Europe in the west, to the Indus Valley in the east, making it the largest empire the world had yet seen. The empire collapsed in 330 BC following the conquests of Alexander the Great, but reemerged shortly after as the Parthian Empire. Under the Sassanid Dynasty, Iran again became one of the leading powers in the world for the next four centuries.Norman A. Stillman The Jews of Arab Lands p. 22 Jewish Publication Society, 1979 ISBN 0827611552International Congress of Byzantine Studies Proceedings of the 21st International Congress of Byzantine Studies, London, 21–26 August 2006, Volumes 1–3 p. 29. Ashgate Pub Co, 30 sep. 2006 ISBN 075465740X
Beginning in 633 AD, Arabs conquered Iran and largely displaced the indigenous faiths of Manichaeism and Zoroastrianism by Islam. Iran became a major contributor to the Islamic Golden Age that followed, producing many influential scientists, scholars, artists, and thinkers. The rise of the Safavid Dynasty in 1501 led to the establishment of Twelver Shia Islam as the official religion of Iran, marking one of the most important turning points in Iranian and Muslim history.R.M. Savory, Safavids, Encyclopaedia of Islam, 2nd edition During the 18th century, Iran reached its greatest territorial extent since the Sassanid Empire, and under Nader Shah briefly possessed what was arguably the most powerful empire at the time. Through the late 18th and 19th centuries, a series of conflicts with Russia led to significant territorial losses and the erosion of sovereignty.Timothy C. Dowling Russia at War: From the Mongol Conquest to Afghanistan, Chechnya, and Beyond pp. 728–730 ABC-CLIO, 2 dec. 2014 ISBN 1598849484 Popular unrest culminated in the Persian Constitutional Revolution of 1906, which established a constitutional monarchy and the country's first legislative body, the Majles. Following a coup d'état instigated by the U.K. and the U.S. in 1953, Iran gradually became closely aligned with the West but grew increasingly autocratic.Anthony H. Cordesman "Iran's Military Forces in Transition: Conventional Threats and Weapons of Mass Destruction" p 22 Growing dissent against foreign influence and political repression led to the 1979 Revolution and the establishment of an Islamic republic.
Iran is a major regional and middle power, and its large reserves of fossil fuels — which include the largest natural gas supply in the world and the fourth-largest proven oil reserves — exert considerable influence in international energy security and the world economy. Iran's rich cultural legacy is reflected in part by its 21 UNESCO World Heritage Sites, the third-largest number in Asia and 11th-largest in the world.World Heritage List, UNESCO World Heritage Sites official sites. http://whc.unesco.org/en/list/
Iran is a founding member of the UN, ECO, NAM, OIC, and OPEC. Its political system is based on the 1979 Constitution which combines elements of a parliamentary democracy with a theocracy governed by Islamic jurists under the concept of a Supreme Leadership. A multicultural country comprising numerous ethnic and linguistic groups, most inhabitants are Shia Muslims and Persian is the official language.
Etymology
The term Iran derives directly from Middle Persian Ērān, first attested in a 3rd-century inscription at Rustam Relief, with the accompanying Parthian inscription using the term Aryān, in reference to Iranians. The Middle Iranian ērān and aryān are oblique plural forms of gentilic ēr- (Middle Persian) and ary- (Parthian), both deriving from Proto-Iranian *arya- (meaning "Aryan", i.e. "of the Iranians"), argued to descend from Proto-Indo-European , meaning "skillful assembler".Laroche. 1957. Proto-Iranian *arya- descends from Proto-Indo-European (PIE) , a yo-adjective to a root "to assemble skillfully", present in Greek harma "chariot", Greek aristos, (as in "aristocracy"), Latin ars "art", etc. In the Iranian languages, the gentilic is attested as a self-identifier included in ancient inscriptions and the literature of Avesta, and remains also in other Iranian ethnic names such as Alans (Ossetic: Ир – Ir) and Iron (Ossetic: Ирон – Iron).
Historically, Iran has been referred to as Persia by the West, due mainly to the writings of Greek historians who called Iran Persis, meaning "land of the Persians".Persia, Encyclopædia Britannica, "The term Persia was used for centuries ... [because] use of the name was gradually extended by the ancient Greeks and other peoples to apply to the whole Iranian plateau." As the most extensive interactions the Ancient Greeks had with any outsider were with the Persians, the term persisted, even long after the Persian rule in Greece. However, Persis (Old Persian: Pārśa; Modern Persian: Pārse) originally referred to a region settled by Persians on the west shore of Lake Urmia, in the 9th century BC. The settlement was then shifted to the southern end of the Zagros Mountains, and is today defined as Fars Province.
In 1935, Reza Shah requested the international community to refer to the country by its native name, Iran. As the New York Times explained at the time, "At the suggestion of the Persian Legation in Berlin, the Tehran government, on the Persian New Year, Nowruz, March 21, 1935, substituted Iran for Persia as the official name of the country." Opposition to the name change led to the reversal of the decision, and Professor Ehsan Yarshater, editor of Encyclopædia Iranica, propagated a move to use Persia and Iran interchangeably. Today, both Persia and Iran are used in cultural contexts; although, Iran is the name used officially in political contexts.
Historical and cultural usage of the word Iran is not restricted to the modern state proper. "Greater Iran" (Irānzamīn or Irān e Bozorg) corresponds to the territories of the Iranian cultural and linguistic zones. In addition to modern Iran, it includes portions of the Caucasus, Mesopotamia, Anatolia, and Central Asia.Farrokh, Kaveh. Shadows in the Desert: Ancient Persia at War. ISBN 1846031087
History
Prehistory
thumb|left|A cave painting in Doushe cave, Lorestan, from the 8th millennium BC.
The earliest archaeological artifacts in Iran, like those excavated at the Kashafrud and Ganj Par sites, attest to a human presence in Iran since the Lower Paleolithic era, c. 800,000–200,000 BC. Iran's Neanderthal artifacts from the Middle Paleolithic period, c. 200,000–40,000 BC, have been found mainly in the Zagros region, at sites such as Warwasi and Yafteh Cave. Around 10th to 8th millennium BC, early agricultural communities such as Chogha Golan and Chogha Bonut began to flourish in Iran,"Emergence of Agriculture in the Foothills of the Zagros Mountains of Iran", by Simone Riehl, Mohsen Zeidi, Nicholas J. Conard – University of Tübingen, publication 10 May 2013 as well as Susa and Chogha Mish developing in and around the Zagros region.
The emergence of Susa as a city, as determined by radiocarbon dating, dates back to as early as 4,395 BC. There are dozens of prehistoric sites across the Iranian plateau, pointing to the existence of ancient cultures and urban settlements in the 4th millennium BC.Iranian.ws, "Archaeologists: Modern civilization began in Iran based on new evidence", 12 August 2007. Retrieved 1 October 2007. During the Bronze Age, Iran was home to several civilizations including Elam, Jiroft, and Zayande River. Elam, the most prominent of these civilizations, developed in the southwest of Iran, alongside those in Mesopotamia. The emergence of writing in Elam paralleled that of Sumer, and the Elamite cuneiform was developed from the 3rd millennium BC.
The Elamite Kingdom continued its existence until the emergence of the Median and Achaemenid empires. From 3400 BC until about 2000 BC, northwestern Iran was part of the Kura-Araxes culture, which stretched into the neighbouring regions of the Caucasus and Anatolia. From the early 2nd millennium BC, Assyrians settled in swaths of western Iran and incorporated the region into their territories.
Classical antiquity
thumb|A depiction of the united Medes and Persians at the Apadana, Persepolis.
thumb|right|Modern impression of an Achaemenid cylinder seal from the 5th century BC, depicting a winged solar disc legitimizing the conquering Persian king who subdues two rampant Mesopotamian lamassu figures.
During the 2nd millennium BC, Proto-Iranian tribes arrived in Iran from the Eurasian steppes, rivaling the native settlers of the country. As these tribes dispersed into the wider area of Greater Iran and beyond, the boundaries of modern Iran were dominated by the Persian, Median and Parthian tribes.
From the late 10th to late 7th centuries BC, the Iranian peoples, together with the pre-Iranian kingdoms, fell under the domination of the Assyrian Empire, based in northern Mesopotamia.Georges Roux – Ancient Iraq Under king Cyaxares, the Medes and Persians entered into an alliance with Nabopolassar of Babylon, as well as with the Scythians and the Cimmerians, and together they attacked the Assyrian Empire. A civil war ravaged the Assyrian Empire between 616 BC and 605 BC, freeing their respective peoples from three centuries of Assyrian rule. The unification of the Median tribes under a single ruler in 728 BC led to the foundation of the Median Empire which, by 612 BC, controlled the whole of Iran and eastern Anatolia. This marked the end of the Kingdom of Urartu as well, which was subsequently conquered and dissolved.
thumb|left|Tomb of Cyrus the Great, the founder of the Achaemenid Empire, Pasargadae.
thumb|left|Ruins of the Gate of All Nations, Persepolis.
In 550 BC, Cyrus the Great, son of Mandane and Cambyses I, took over the Median Empire, and founded the Achaemenid Empire by unifying other city states. The conquest of Media was a result of what is called the Persian Revolt. The revolt was initially triggered by the actions of the Median ruler Astyages, and quickly spread to other provinces as they allied with the Persians. Later conquests under Cyrus and his successors expanded the empire to include Lydia, Babylon, Egypt, parts of the Balkans and Eastern Europe proper, as well as the lands to the west of the Indus and Oxus rivers.
In 539 BC, Persian forces defeated the Babylonian army at Opis, marking the end of around four centuries of Mesopotamian domination of the region and the transition from the Neo-Babylonian Period to the Achaemenid Period. Cyrus entered Babylon and presented himself as a traditional Mesopotamian monarch. Subsequent Achaemenid art and iconography reflect the influence of the new political reality in Mesopotamia.
thumb|Achaemenid Empire around the time of Darius I and Xerxes I.
At its greatest extent, the Achaemenid Empire included the modern territories of Iran, Azerbaijan, Armenia, Georgia, Turkey, much of the Black Sea coastal regions, northeastern Greece and southern Bulgaria (Thrace), northern Greece and Macedonia (Paeonia and Ancient Macedon), Iraq, Syria, Lebanon, Jordan, Israel, Palestine, all significant ancient population centers of ancient Egypt as far west as Libya, Kuwait, northern Saudi Arabia, parts of the UAE and Oman, Pakistan, Afghanistan, and much of Central Asia, making it the first world government and the largest empire the world had yet seen.
It is estimated that in 480 BC, 50 million people lived in the Achaemenid Empire.Yarshater (1996, p. 47)While estimates for the Achaemenid Empire range from 10–80+ million, most prefer 50 million. Prevas (2009, p. 14) estimates 10 million. Strauss (2004, p. 37) estimates about 20 million. Ward (2009, p. 16) estimates at 20 million. Scheidel (2009, p. 99) estimates 35 million. Daniel (2001, p. 41) estimates at 50 million. Meyer and Andreades (2004, p. 58) estimates to 50 million. Jones (2004, p. 8) estimates over 50 million. Richard (2008, p. 34) estimates nearly 70 million. Hanson (2001, p. 32) estimates almost 75 million. Cowley (1999 and 2001, p. 17) estimates possibly 80 million. The empire at its peak ruled over 44% of the world's population, the highest such figure for any empire in history. In Greek history, the Achaemenid Empire is considered the antagonist of the Greek city states; it is also noted for the emancipation of slaves, including the Jewish exiles in Babylon, for building infrastructure such as road and postal systems, and for the use of an official language, Imperial Aramaic, throughout its territories. The empire had a centralized, bureaucratic administration under the emperor, a large professional army, and civil services, inspiring similar developments in later empires.Schmitt Achaemenid dynasty (i. The clan and dynasty) Furthermore, one of the Seven Wonders of the Ancient World, the Mausoleum at Halicarnassus, was built in the empire between 353 and 350 BC.
Eventual conflict on the western borders began with the Ionian Revolt which erupted into the Greco-Persian Wars, and continued through the first half of the 5th century BC, and ended with the Persian withdrawal from all of their European territories in the Balkans and Eastern Europe proper.
In 334 BC, Alexander the Great invaded the Achaemenid Empire, defeating the last Achaemenid emperor, Darius III, at the Battle of Issus. Following the premature death of Alexander, Iran came under the control of the Hellenistic Seleucid Empire. In the middle of the 2nd century BC, the Parthian Empire rose to become the main power in Iran, and the centuries-long geopolitical arch-rivalry between the Romans and the Parthians began, culminating in the Roman–Parthian Wars. The Parthian Empire continued as a feudal monarchy for nearly five centuries, until 224 CE, when it was succeeded by the Sassanid Empire. Together with their neighboring arch-rival, the Roman and later Byzantine Empire, they made up the world's two most dominant powers of the time for over four centuries.
thumb|left|Sassanid reliefs at Taq Bostan.
The Sassanids established an empire within the frontiers achieved by the Achaemenids, with their capital at Ctesiphon. The Sassanid Empire of the Late Antiquity is considered as one of the most influential periods of Iran, as Iran influenced the culture of ancient Rome (and through that as far as Western Europe),J. B. Bury, p. 109.Will Durant, Age of Faith, (Simon and Schuster, 1950), 150; "Repaying its debt, Sasanian art exported its forms and motives eastward into India, Turkestan, and China, westward into Syria, Asia Minor, Constantinople, the Balkans, Egypt, and Spain." Africa, China, and India,Sarfaraz, pp. 329–330 and played a prominent role in the formation of both European and Asian medieval art.
thumb|A bas-relief at Naqsh-e Rostam, depicting the victory of Shapur I over Valerian following the Battle of Edessa.
Most of the era of both the Parthian and the Sassanid empires was overshadowed by the Roman-Persian Wars, which raged on their western borders in Anatolia, the western Caucasus, Mesopotamia, and the Levant for over 700 years. These wars exhausted both the Romans and the Sassanids, and led to the defeat of both at the hands of the invading Muslim Arabs.
Several offshoots of the Achaemenids, Parthians, and Sassanids, established eponymous dynasties and branches in Anatolia and the Caucasus, including the Kingdom of Pontus, the Mihranids, and the Arsacid dynasties of Armenia, Iberia (Georgia), and Caucasian Albania (present-day Azerbaijan and southern Dagestan).
Medieval period
The prolonged Byzantine-Sassanid Wars, most importantly the climactic Byzantine-Sassanid War of 602-628, as well as the social conflict within the Sassanid Empire, opened the way for an Arab invasion of Iran in the 7th century. Initially defeated by the Arab Rashidun Caliphate, Iran came under the rule of the Arab caliphates of Umayyad and Abbasid. The prolonged and gradual process of the Islamization of Iran began following the conquest. Under the new Arab elite of the Rashidun and later the Umayyad caliphates, both converted (mawali) and non-converted (dhimmi) Iranians were discriminated against, being excluded from the government and military, and having to pay a special tax called Jizya. Gunde Shapur, home of the Academy of Gunde Shapur, which was the most important medical center of the world at the time, survived the invasion but became known as an Islamic institute thereafter.
thumb|Tomb of Hafez, the popular Iranian poet whose works are regarded as a pinnacle in Persian literature and have left a considerable mark on later Western writers, most notably Goethe, Thoreau and Emerson.
In 750, the Abbasids overthrew the Umayyads, due mainly to the support from the mawali Iranians. The mawali formed the majority of the rebel army, which was led by the Iranian general Abu Muslim. The arrival of the Abbasid Caliphs saw a revival of Iranian culture and influence, and a move away from the imposed Arabic customs. The role of the old Arab aristocracy was gradually replaced by an Iranian bureaucracy.
After two centuries of Arab rule, semi-independent and independent Iranian kingdoms such as the Tahirids, Saffarids, Samanids, and Buyids began to appear on the fringes of the declining Abbasid Caliphate. By the Samanid era in the 9th and 10th centuries, the efforts of Iranians to regain their independence had been well solidified.
The blossoming literature, philosophy, medicine, and art of Iran became major elements in the formation of a new age for the Iranian civilization, during the period known as the Islamic Golden Age. The Islamic Golden Age reached its peak by the 10th and 11th centuries, during which Iran was the main theater of scientific activity. After the 10th century, Persian, alongside Arabic, was used for scientific, philosophical, historical, musical, and medical works, and important Iranian writers, such as Tusi, Avicenna, Qotb od Din Shirazi, and Biruni, made major contributions to scientific writing.
The cultural revival that began in the Abbasid period led to a resurfacing of the Iranian national identity, and so earlier attempts of Arabization never succeeded in Iran. The Iranian Shuubiyah movement became a catalyst for Iranians to regain independence in their relations with the Arab invaders. The most notable effect of this movement was the continuation of the Persian language, attested in the works of the epic poet Ferdowsi, now regarded as the most important figure in Iranian literature.
thumb|left|Tuğrul Tower, a 12th-century monument in Rey, Iran.
The 10th century saw a mass migration of Turkic tribes from Central Asia into the Iranian plateau. Turkic tribesmen were first used in the Abbasid army as mamluks (slave-warriors), replacing Iranian and Arab elements within the army. As a result, the mamluks gained significant political power. In 999, large portions of Iran came briefly under the rule of the Ghaznavids, whose rulers were of mamluk Turk origin, and subsequently, for a longer period, under the Turkish Seljuk and Khwarezmian empires. These Turks had been Persianized and had adopted Persian models of administration and rulership. The Seljuks subsequently gave rise to the Sultanate of Rum in Anatolia, while taking their thoroughly Persianized identity with them.Sigfried J. de Laet. History of Humanity: From the seventh to the sixteenth century UNESCO, 1994. ISBN 9231028138 p 734Gábor Ágoston, Bruce Alan Masters. Encyclopedia of the Ottoman Empire Infobase Publishing, 1 jan. 2009 ISBN 1438110251 p 322 The result of the adoption and patronage of Persian culture by Turkish rulers was the development of a distinct Turko-Persian tradition.
In 1219–21 the Khwarezmian Empire suffered a devastating invasion by the Mongol army of Genghis Khan. According to Steven R. Ward, "Mongol violence and depredations killed up to three-fourths of the population of the Iranian Plateau, possibly 10 to 15 million people. Some historians have estimated that Iran's population did not again reach its pre-Mongol levels until the mid-20th century."
Following the fracture of the Mongol Empire in 1256, Hulagu Khan, grandson of Genghis Khan, established the Ilkhanate in Iran. In 1370, yet another conqueror, Timur, followed the example of Hulagu, establishing the Timurid Empire which lasted for another 156 years. In 1387, Timur ordered the complete massacre of Isfahan, reportedly killing 70,000 citizens. The Ilkhans and the Timurids soon came to adopt the ways and customs of the Iranians, choosing to surround themselves with a culture that was distinctively Iranian.
Early modern period
thumb|right|upright|A Venetian portrait of Ismail I, the founder of the Safavid dynasty. – The Uffizi
In the early 1500s, Ismail I of Ardabil established the Safavid dynasty, with Tabriz as the capital. Beginning with Azerbaijan, he subsequently extended his authority over all of the Iranian territories, and established an intermittent Iranian hegemony over the vast surrounding regions, reasserting the Iranian identity within large parts of Greater Iran.Why is there such confusion about the origins of this important dynasty, which reasserted Iranian identity and established an independent Iranian state after eight and a half centuries of rule by foreign dynasties? RM Savory, Iran under the Safavids (Cambridge University Press, Cambridge, 1980), p. 3. Iran was predominantly Sunni, but Ismail instigated a forced conversion to the Shia branch of Islam, by which Shia Islam spread throughout the Safavid territories in the Caucasus, Iran, Anatolia, and Mesopotamia. As a result, modern-day Iran is the only official Shia nation in the world, and Shia Muslims hold an absolute majority in both Iran and the Republic of Azerbaijan, the countries with the first- and second-highest percentages of Shia inhabitants in the world.Juan Eduardo Campo, Encyclopedia of Islam, p.625
The centuries-long geopolitical and ideological rivalry between Safavid Iran and the neighboring Ottoman Empire led to numerous Ottoman–Persian Wars. The Safavid era peaked in the reign of Abbas the Great (1587–1629), during which Iran surpassed its Ottoman archrivals in strength and the empire became a leading hub in Western Eurasia for the sciences and arts. The Safavid era saw the start of the mass integration of Caucasian populations into new layers of Iranian society, as well as their mass resettlement within the heartlands of Iran, playing a pivotal role in the history of Iran for centuries onwards. Following a gradual decline in the late 1600s and early 1700s, caused by internal conflicts, continuous wars with the Ottomans, and foreign interference (most notably Russian interference), Safavid rule was ended by the Pashtun rebels who besieged Isfahan and defeated Soltan Hosein in 1722.
thumb|left|Statue of Nader Shah, the founder of the Afsharid dynasty. – Naderi Museum
In 1729, Nader Shah, a chieftain and military genius from Khorasan, successfully drove out and defeated the Pashtun invaders. He subsequently took back the annexed Caucasian territories, which had been divided among the Ottoman and Russian authorities during the ongoing chaos in Iran. During the reign of Nader Shah, Iran reached its greatest extent since the Sassanid Empire, reestablishing Iranian hegemony over the Caucasus, as well as other major parts of western and central Asia, and briefly possessing what was arguably the most powerful empire of the time.
Nader Shah invaded India and sacked far off Delhi by the late 1730s. His territorial expansion, as well as his military successes, went into a decline following the final campaigns in the Northern Caucasus. The assassination of Nader Shah sparked a brief period of civil war and turmoil, after which Karim Khan of the Zand dynasty came to power in 1750, bringing a period of relative peace and prosperity.
The geopolitical reach of the Zand dynasty was limited, compared to its preceding dynasties. Many of the Iranian territories in the Caucasus gained de facto independence and were locally ruled through various Caucasian khanates. However, despite their self-rule, they all remained subjects and vassals of the Zand king.Encyclopedia of Soviet law By Ferdinand Joseph Maria Feldbrugge, Gerard Pieter van den Berg, William B. Simons, Page 457 The khanates exercised control over their affairs via international trade routes between Central Asia and the West.
Another civil war ensued after the death of Karim Khan in 1779, out of which Aqa Mohammad Khan emerged, founding the Qajar dynasty in 1794. In 1795, following the disobedience of the Georgian subjects and their alliance with the Russians, the Qajars captured Tbilisi in the Battle of Krtsanisi and drove the Russians out of the entire Caucasus, reestablishing Iranian suzerainty over the region.
From the 1800s to the 1940s
thumb|left|A map showing the northwestern borders of Iran in the 19th century, comprising modern-day Eastern Georgia, Dagestan, Armenia and Azerbaijan, before being ceded to neighboring Imperial Russia by the Russo-Persian Wars.
The Russo-Persian wars of 1804–1813 and 1826–1828 resulted in large irrevocable territorial losses for Iran in the Caucasus, comprising all of Transcaucasia and Dagestan, which had been part of the very concept of Iran for centuries, and thus substantial gains for the neighboring Russian Empire.
As a result of the 19th century Russo-Persian wars, the Russians took over the Caucasus, and Iran irrevocably lost control over its integral territories in the region (comprising modern-day Dagestan, Georgia, Armenia, and Azerbaijan), a loss confirmed by the treaties of Gulistan and Turkmenchay.Farrokh, Kaveh. Iran at War: 1500–1988. ISBN 1780962215 The area to the north of the river Aras, comprising the contemporary Republic of Azerbaijan, eastern Georgia, Dagestan, and Armenia, remained Iranian territory until it was occupied by Russia in the course of the 19th century.
As Iran shrank, many Transcaucasian and North Caucasian Muslims moved towards Iran,А. Г. Булатова. Лакцы (XIX — нач. XX вв.). Историко-этнографические очерки. — Махачкала, 2000. especially in the aftermath of the Caucasian War and in the decades that followed, while Iran's Armenians were encouraged to settle in the newly incorporated Russian territories,"Griboedov not only extended protection to those Caucasian captives who sought to go home but actively promoted the return of even those who did not volunteer. Large numbers of Georgian and Armenian captives had lived in Iran since 1804 or as far back as 1795." Fisher, William Bayne; Avery, Peter; Gershevitch, Ilya; Hambly, Gavin; Melville, Charles. The Cambridge History of Iran, Cambridge University Press – 1991. p. 339 A. S. Griboyedov. "Записка о переселеніи армянъ изъ Персіи въ наши области", Фундаментальная Электронная БиблиотекаBournoutian. Armenian People, p. 105 causing significant demographic shifts.
Around 1.5 million people—20 to 25% of the population of Iran—died as a result of the Great Famine of 1870–1871.
thumb|The first national parliament of Iran, established in 1906.
Between 1872 and 1905, a series of protests took place in response to the sale of concessions to foreigners by Qajar monarchs Nasser ed Din and Mozaffar ed Din, and led to the Iranian Constitutional Revolution. The first Iranian Constitution and the first national parliament of Iran were founded in 1906, through the ongoing revolution. The Constitution included the official recognition of Iran's three religious minorities, namely Christians, Zoroastrians, and Jews,Colin Brock,Lila Zia Levers. Aspects of Education in the Middle East and Africa Symposium Books Ltd., 7 mei 2007 ISBN 1873927215 p 99 which has remained a basis in the legislation of Iran since then.
The struggle related to the constitutional movement continued until 1911, when Mohammad Ali Shah was defeated and forced to abdicate. On the pretext of restoring order, the Russians occupied northern Iran in 1911, and maintained a military presence in the region for years to come. During World War I, the British occupied much of the territory of western Iran, and fully withdrew in 1921. The Persian Campaign also commenced during World War I in northwestern Iran, after an Ottoman invasion, as part of the Middle Eastern theatre of World War I. As a result of Ottoman hostilities across the border, large numbers of the Assyrians of Iran were massacred by the Ottoman armies, notably in and around Urmia.Richard G. Hovannisian. The Armenian Genocide: Cultural and Ethical Legacies. pp. 270–271. Transaction Publishers, 31 dec. 2011 ISBN 1412835925Alexander Laban Hinton,Thomas La Pointe,Douglas Irvin-Erickson. Hidden Genocides: Power, Knowledge, Memory. p. 117. Rutgers University Press, 18 dec. 2013 ISBN 0813561647 Apart from the rule of Aqa Mohammad Khan, the Qajar rule is characterized as a century of misrule.
The Persian Cossack Brigade, which was the most effective military force available to the crown, began a military coup supported by the British in February 1921. The Qajar dynasty was subsequently overthrown, and Reza Khan, the former general of the Cossack Brigade, became the new Prime Minister of Iran. Eventually, he was declared the new monarch in 1925—thence known as Reza Shah—establishing the Pahlavi dynasty.
In 1941, in the midst of World War II, Nazi Germany launched Operation Barbarossa and invaded the Soviet Union, breaking the Molotov-Ribbentrop Pact. This had a major impact on Iran, which had declared neutrality in the conflict. Later that year, following an Anglo-Soviet invasion of Iran, Reza Shah was forced to abdicate in favor of his son, Mohammad Reza Pahlavi. Subsequently, Iran became a major conduit for British and American aid to the Soviet Union until the end of the war.
thumb|left|The Allied "Big Three" at the 1943 Tehran Conference.
At the 1943 Tehran Conference, the Allied "Big Three" (Joseph Stalin, Franklin D. Roosevelt, and Winston Churchill) issued the Tehran Declaration to guarantee the post-war independence and boundaries of Iran. However, at the end of the war, Soviet troops remained in Iran and local pro-Soviet groups established two puppet states in northwestern Iran, namely the People's Government of Azerbaijan and the Republic of Mahabad. Receiving a promise of oil concessions, the Soviets withdrew from Iran proper in May 1946. The two puppet states were soon overthrown following the Iran crisis of 1946, and the oil concessions were revoked.Louise Fawcett, "Revisiting the Iranian Crisis of 1946: How Much More Do We Know?." Iranian Studies 47#3 (2014): 379-399.Gary R. Hess, "the Iranian Crisis of 1945-46 and the Cold War." Political Science Quarterly 89#1 (1974): 117-146. online
Contemporary era
thumb|upright|Mohammad Mosaddegh, Iranian democracy advocate and deposed Prime Minister.
In 1951, Mohammad Mosaddegh was elected as the prime minister. He became enormously popular in Iran, after he nationalized Iran's petroleum industry and oil reserves. He was deposed in the 1953 Iranian coup d'état, an Anglo-American covert operation that marked the first time the US had overthrown a foreign government during the Cold War.
thumb|left|Mohammad Reza Pahlavi and the Imperial Family during the coronation ceremony of the Shah of Iran in 1967.
After the coup, the Shah became increasingly autocratic and sultanistic, and Iran entered a phase of decades-long, controversially close relations with the United States and some other foreign governments.Nikki R. Keddie, Rudolph P Matthee. Iran and the Surrounding World: Interactions in Culture and Cultural Politics University of Washington Press, 2002 p 366 While the Shah increasingly modernized Iran and claimed to maintain it as a fully secular state, arbitrary arrests and torture by his secret police, the SAVAK, were used to crush all forms of political opposition.
Ruhollah Khomeini, a radical Muslim cleric, became an active critic of the Shah's far-reaching series of reforms known as the White Revolution. Khomeini publicly denounced the government, and was arrested and imprisoned for 18 months. After his release in 1964, he refused to apologize, and was eventually sent into exile.
Due to the 1973 spike in oil prices, the economy of Iran was flooded with foreign currency, which caused inflation. By 1974, the economy of Iran was experiencing double-digit inflation, and despite the many large projects to modernize the country, corruption was rampant and caused large amounts of waste. By 1975 and 1976, an economic recession led to increased unemployment, especially among the millions of youth who had migrated to the cities of Iran looking for construction jobs during the boom years of the early 1970s. By the late 1970s, many of these people opposed the Shah's regime and began to organize and join the protests against it.
thumb|upright|right|Ruhollah Khomeini returning to Iran from exile, on February 1, 1979.
The 1979 Revolution, later known as the Islamic Revolution,Fereydoun Hoveyda, The Shah and the Ayatollah: Iranian Mythology and Islamic Revolution ISBN 0-275-97858-3, Praeger Publishers began in January 1978 with the first major demonstrations against the Shah. After a year of strikes and demonstrations paralyzing the country and its economy, Mohammad Reza Pahlavi fled the country and Ruhollah Khomeini returned from exile to Tehran in February 1979, forming a new government. After holding a referendum, Iran officially became an Islamic republic in April 1979. A second referendum in December 1979 approved a theocratic constitution.
Nationwide uprisings against the new government began immediately, with the 1979 Kurdish rebellion and the Khuzestan uprisings, along with uprisings in Sistan and Baluchestan Province and other areas. Over the next several years, these uprisings were violently suppressed by the new Islamic government. The new government began purging itself of the non-Islamist political opposition, as well as of those Islamists who were not considered radical enough. Although both nationalists and Marxists had initially joined with Islamists to overthrow the Shah, tens of thousands were executed by the new regime afterwards.
On November 4, 1979, a group of Muslim students seized the United States Embassy and took 52 of its personnel and citizens hostage, after the United States refused to return Mohammad Reza Pahlavi to Iran to face trial in the court of the new regime and all but certain execution. Attempts by the Jimmy Carter administration to negotiate for the release of the hostages, and a failed rescue attempt, helped force Carter out of office and brought Ronald Reagan to power. On Jimmy Carter's final day in office, the last hostages were finally set free as a result of the Algiers Accords.
The Cultural Revolution began in 1980, with an initial closure of universities for three years, in order to inspect and overhaul the cultural policy of the education and training system.Supreme Cultural Revolution Council GlobalSecurity.org
On September 22, 1980, the Iraqi army invaded the western Iranian province of Khuzestan, launching the Iran–Iraq War. Although the forces of Saddam Hussein made several early advances, by mid-1982, the Iranian forces successfully managed to drive the Iraqi army back into Iraq. In July 1982, with Iraq thrown on the defensive, Iran took the decision to invade Iraq and conducted numerous offensives in a bid to conquer Iraqi territory and capture cities, such as Basra. The war continued until 1988, when the Iraqi army defeated the Iranian forces inside Iraq and pushed the remaining Iranian troops back across the border. Subsequently, Khomeini accepted a truce mediated by the UN. The total Iranian casualties in the war were estimated to be 123,220–160,000 killed in action, 60,711 missing in action, and 11,000–16,000 civilians killed.
thumb|left|The Silent Demonstration during the 2009–10 Iranian election protests.
Following the Iran–Iraq War, in 1989, Akbar Hashemi Rafsanjani and his administration concentrated on a pragmatic pro-business policy of rebuilding and strengthening the economy without making any dramatic break with the ideology of the revolution. In 1997, Rafsanjani was succeeded by the moderate reformist Mohammad Khatami, whose government attempted, unsuccessfully, to make the country freer and more democratic.
The 2005 presidential election brought the conservative populist candidate Mahmoud Ahmadinejad to power. During the 2009 Iranian presidential election, the Interior Ministry announced that incumbent president Ahmadinejad had won 62.63% of the vote, while Mir-Hossein Mousavi had come in second place with 33.75%. Allegations of large irregularities and fraud provoked the 2009 Iranian presidential election protests, both within Iran and in major cities outside the country.
Hassan Rouhani was elected President of Iran on June 15, 2013, defeating Mohammad Bagher Ghalibaf and four other candidates. Rouhani's electoral victory improved Iran's relations with other countries.Strategic Asia 2013–14: Asia in the Second Nuclear Age – Page 229, Abraham M. Denmark, Travis Tanner – 2013
Geography
thumb|300px|Provinces of Iran by area (km2).
Iran has an area of 1,648,195 km2. It lies between latitudes 24° and 40° N, and longitudes 44° and 64° E. Its borders are with Azerbaijan (including its Naxcivan exclave) and Armenia to the north-west; the Caspian Sea to the north; Turkmenistan to the north-east; Pakistan and Afghanistan to the east; Turkey and Iraq to the west; and finally the waters of the Persian Gulf and the Gulf of Oman to the south.
thumb|left|Mount Damavand, Iran's highest point, is located in Amol County, Mazenderan.
Iran consists of the Iranian Plateau with the exception of the coasts of the Caspian Sea and Khuzestan Province. It is one of the world's most mountainous countries, its landscape dominated by rugged mountain ranges that separate various basins or plateaux from one another. The populous western part is the most mountainous, with ranges such as the Caucasus, Zagros and Alborz Mountains; the last contains Iran's highest point, Mount Damavand, which is also the highest mountain on the Eurasian landmass west of the Hindu Kush.
The northern part of Iran is covered by dense rain forests called Shomal or the Jungles of Iran. The eastern part consists mostly of desert basins such as the Dasht-e Kavir, Iran's largest desert, in the north-central portion of the country, and the Dasht-e Lut, in the east, as well as some salt lakes. This is because the mountain ranges are too high for rain clouds to reach these regions.
The only large plains are found along the coast of the Caspian Sea and at the northern end of the Persian Gulf, where Iran borders the mouth of the Arvand river. Smaller, discontinuous plains are found along the remaining coast of the Persian Gulf, the Strait of Hormuz and the Gulf of Oman.
Climate
thumb|Climate map of Iran (Köppen-Geiger)
Iran's climate ranges from arid or semiarid, to subtropical along the Caspian coast and the northern forests. On the northern edge of the country (the Caspian coastal plain), temperatures rarely fall below freezing and the area remains humid for most of the year, with relatively mild summers. Annual precipitation on the plain is considerably higher in the western part than in the eastern part. United Nations Resident Coordinator for Iran Gary Lewis has said that "Water scarcity poses the most severe human security challenge in Iran today".
To the west, settlements in the Zagros basin experience lower temperatures and severe winters, with below-zero average daily temperatures and heavy snowfall. The eastern and central basins are arid, receive little rain, and contain occasional deserts, with very hot summers. The coastal plains of the Persian Gulf and Gulf of Oman in southern Iran have mild winters, and very humid and hot summers, with modest annual precipitation.
Fauna
thumb|left|Asiatic cheetah, a critically endangered species living only in Iran.
The wildlife of Iran is composed of several animal species, including bears, gazelles, wild pigs, wolves, jackals, panthers, Eurasian lynx, and foxes. Domestic animals of Iran include sheep, goats, cattle, horses, water buffaloes, donkeys, and camels. Pheasants, partridges, storks, eagles, and falcons are also native to the wildlife of Iran.
One of the most famous members of the Iranian wildlife is the critically endangered Asiatic cheetah, also known as the Iranian cheetah, whose numbers were greatly reduced after the 1979 Revolution. The Persian leopard, the world's largest leopard subspecies, lives primarily in northern Iran and is also listed as an endangered species. Iran lost all its Asiatic lions and the now extinct Caspian tigers by the early part of the 20th century.
At least 74 species of Iranian wildlife are on the red list of the International Union for the Conservation of Nature, a sign of serious threats against the country’s biodiversity. The Iranian Parliament has been showing disregard for wildlife by passing laws and regulations such as the act that lets the Ministry of Industries and Mines exploit mines without the involvement of the Department of Environment, and by approving large national development projects without demanding comprehensive study of their impact on wildlife habitats.
Regions, provinces and cities
thumb|The most populated cities of Iran in 2010.
Iran is divided into five regions with thirty-one provinces (ostān), each governed by an appointed governor (ostāndār). The provinces are divided into counties (shahrestān), and subdivided into districts (bakhsh) and sub-districts (dehestān).
Iran has one of the highest urban growth rates in the world. From 1950 to 2002, the urban proportion of the population increased from 27% to 60%. The United Nations predicts that by 2030, 80% of the population will be urban. Most internal migrants have settled near the cities of Tehran, Isfahan, Ahvaz, and Qom. The listed populations are from the 2006/07 (1385 AP) census.
Tehran, with a population of around 8.1 million (2011 census), is the capital and largest city of Iran. It is the country's economic and cultural center, and the hub of its communication and transport network.
The country's second largest city, Mashhad, has a population of around 2.7 million (2011 census). It is the capital of Razavi Khorasan Province, and is a holy city in Shia Islam, as it is the site of the Imam Reza Shrine. About 15 to 20 million pilgrims visit the Shrine of Imam Reza every year.
Isfahan, with a population of around 1.7 million (2011 census), is Iran's third largest city and the capital of Isfahan Province. It was also a former capital of Iran, and contains a wide variety of historical sites, including the famous Image of the World Square, Siose Bridge, and the sites at the Armenian district of New Jolfa. It is also home to Isfahan City Center, the fifth largest shopping mall in the world.
The fourth major city of Iran, Karaj, has a population of around 1.6 million (2011 census). It is the capital of Alborz Province, and is situated 20 km west of Tehran, at the foot of the Alborz mountains. It is a major industrial city in Iran, with large factories producing sugar, textiles, wire, and alcohol.
Tabriz, the capital of East Azerbaijan Province, is considered the second industrial city of Iran (after Tehran). With a population of around 1.4 million (2011 census), it is the fifth major city of Iran, and had been the second-largest until the late 1960s. It is one of the former capitals of Iran and the first capital of the Safavid Empire, and has been extremely influential in the country's recent history.
Shiraz, with a population of around 1.4 million (2011 census), is the sixth major city of Iran. It is the capital of Fars Province, and was also a former capital of Iran. The area was greatly influenced by the Babylonian civilization, and after the emergence of the ancient Persians, soon came to be known as Persis. Persians have been present in the region since the 9th century BC, and became rulers of a large empire under the reign of the Achaemenid Dynasty in the 6th century BC. The ruins of Persepolis and Pasargadae, two of the four capitals of the Achaemenid Empire, are located around the modern-day city of Shiraz.
Government and politics
thumbnail|Ali Khamenei, the Supreme Leader of Iran, meeting with Chinese President Xi Jinping on January 23, 2016. – Iran and China are strategic allies."China, Iran lift ties to comprehensive strategic partnership". Xinhua News Agency. 23 January 2016."Iran, China discuss $600b economic deals as Xi Jinping visits". The Times of Israel. 23 January 2016.
thumb|Iran's syncretic political system combines elements of a modern Islamic theocracy with democracy.
The political system of the Islamic Republic is based on the 1979 Constitution, and comprises several intricately connected governing bodies. The Leader of the Revolution ("Supreme Leader") is responsible for delineation and supervision of the general policies of the Islamic Republic of Iran.
The Supreme Leader is Commander-in-Chief of the armed forces, controls the military intelligence and security operations, and has sole power to declare war or peace. The heads of the judiciary, state radio and television networks, the commanders of the police and military forces and six of the twelve members of the Guardian Council are appointed by the Supreme Leader. The Assembly of Experts elects and dismisses the Supreme Leader on the basis of qualifications and popular esteem.
According to the Constitution of the Islamic Republic of Iran, the powers of government in the Islamic Republic of Iran are vested in the legislature, the judiciary, and the executive powers, functioning under the supervision of the "Absolute Guardianship and the Leadership of the Ummah" () that refers to the Supreme Leader of Iran.Constitution of Iran Unofficial English translation hosted at University of Bern, Switzerland
After the Supreme Leader, the Constitution defines the President of Iran as the highest state authority. The President is elected by universal suffrage for a term of four years and can only be re-elected for one term. Presidential candidates must be approved by the Guardian Council before running, in order to ensure their allegiance to the ideals of the Islamic Revolution.
The President is responsible for the implementation of the Constitution and for the exercise of executive powers, except for matters directly related to the Supreme Leader, who has the final say in all matters. The President appoints and supervises the Council of Ministers, coordinates government decisions, and selects government policies to be placed before the legislature. The current Supreme Leader, Ali Khamenei, has both dismissed and reinstated members of the Council of Ministers. Eight Vice-Presidents serve under the President, as well as a cabinet of twenty-two ministers, who must all be approved by the legislature.
The legislature of Iran (known as the Islamic Consultative Assembly) is a unicameral body comprising 290 members elected for four-year terms. It drafts legislation, ratifies international treaties, and approves the national budget. All parliamentary candidates and all legislation from the assembly must be approved by the Guardian Council.
The Guardian Council comprises twelve jurists including six appointed by the Supreme Leader. The others are elected by the Iranian Parliament from among the jurists nominated by the Head of the Judiciary. The Council interprets the constitution and may veto Parliament. If a law is deemed incompatible with the constitution or Sharia (Islamic law), it is referred back to Parliament for revision. The Expediency Council has the authority to mediate disputes between Parliament and the Guardian Council, and serves as an advisory body to the Supreme Leader, making it one of the most powerful governing bodies in the country. Local city councils are elected by public vote to four-year terms in all cities and villages of Iran.
Iran has adopted the separation of powers, with three branches of government: the executive, the legislature, and the judiciary.
Law
thumb|right|The Iranian Parliament
The Supreme Leader appoints the head of Iran's judiciary, who in turn appoints the head of the Supreme Court and the chief public prosecutor. There are several types of courts including public courts that deal with civil and criminal cases, and revolutionary courts which deal with certain categories of offenses, including crimes against national security. The decisions of the revolutionary courts are final and cannot be appealed.
The Special Clerical Court handles crimes allegedly committed by clerics, although it has also taken on cases involving lay people. The Special Clerical Court functions independently of the regular judicial framework and is accountable only to the Supreme Leader. The Court's rulings are final and cannot be appealed. The Assembly of Experts, which meets for one week annually, comprises 86 "virtuous and learned" clerics elected by adult suffrage for eight-year terms. As with the presidential and parliamentary elections, the Guardian Council determines candidates' eligibility. The Assembly elects the Supreme Leader and has the constitutional authority to remove the Supreme Leader from power at any time. It has not challenged any of the Supreme Leader's decisions. The current head of the judicial system, Sadeq Larijani, who was appointed by long-time Supreme Leader Ali Khamenei, has said that it is illegal for the Assembly of Experts to supervise the Supreme Leader.
Foreign relations
thumb|Iranian President Hassan Rouhani meeting with Russian President Vladimir Putin. – Iran and Russia are strategic allies.
The Iranian government's officially stated goal is to establish a new world order based on world peace, global collective security and justice.Iran urges NAM to make collective bids to establish global peace. PressTV, 26 August 2012. Retrieved 20 November 2012. Ahmadinejad calls for new world order based on justice. PressTV 26 May 2012. Retrieved 20 November 2012.
Often, Iran's foreign relations since the time of the revolution have been portrayed as being based on two strategic principles: eliminating outside influences in the region and pursuing extensive diplomatic contacts with developing and non-aligned countries.Iran Country Study Guide Volume 1 Strategic Information and Developments, ISBN 1-4387-7462-1, page 141
thumb|left|Iranian FM M. Javad Zarif shaking hands with the US Secretary of State John Kerry during the Iranian nuclear talks. – There is no formal diplomatic relationship between Iran and the US.
Since 2005, Iran's nuclear program has become the subject of contention with the international community, following earlier statements by Iranian leaders favoring the use of an atomic bomb against Iran's enemies and in particular Israel.Ayatollah Ali Akbar Hashemi-Rafsanjani: Israel is a 'one bomb nation'. "...application of an atomic bomb would not leave any thing in Israel" (Dec 14 2001, Iran's Rafsanjani says Muslims should use nuclear weapon against Israel, (CNN report according to Iran Press)) Many countries have expressed concern that Iran's nuclear program could divert civilian nuclear technology into a weapons program. This led the UN Security Council to impose sanctions against Iran, which further isolated Iran politically and economically from the rest of the global community. In 2009, the US Director of National Intelligence said that Iran, even if it chose to, would not be able to develop a nuclear weapon until 2013.
Iran maintains diplomatic relations with 99 members of the United Nations, but not with the United States or Israel, a state which Iran has not recognized since the 1979 Revolution.
On July 14, 2015, Tehran and the P5+1 came to a historic agreement to end economic sanctions after demonstrating a peaceful nuclear research project that meets International Atomic Energy Agency standards.Kutsch, Tom. (July 14, 2015) "Iran, world powers strike historic nuclear deal". Aljazeera America. Retrieved 15 July 2015. Aljazeera America website
Iran is also a member of dozens of international organizations including the G-15, G-24, G-77, IAEA, IBRD, IDA, IDB, IFC, ILO, IMF, International Maritime Organization, Interpol, OIC, OPEC, the United Nations, WHO, and currently has observer status at the World Trade Organization.
Military
right|thumb|Fotros (UCAV) is considered the largest in Iran's arsenal of unmanned aerial vehicles. Iran has made several indigenous UAVs.
The Islamic Republic of Iran has two types of armed forces: the regular forces (the Islamic Republic of Iran Army, the Islamic Republic of Iran Air Force, and the Islamic Republic of Iran Navy) and the Revolutionary Guards, totaling about 545,000 active troops. Iran also has a reserve force of around 350,000, for a total of around 900,000 trained troops.IISS Military Balance 2006, Routledge for the IISS, London, 2006, p.187
Iran has a paramilitary, volunteer militia force within the IRGC, called the Basij, which includes about 90,000 full-time, active-duty uniformed members. Up to 11 million men and women are members of the Basij who could potentially be called up for service; GlobalSecurity.org estimates Iran could mobilize "up to one million men". This would be among the largest troop mobilizations in the world. In 2007, Iran's military spending represented 2.6% of the GDP or $102 per capita, the lowest figure of the Persian Gulf nations. Iran's military doctrine is based on deterrence. In 2014, the country spent $15 billion on arms, and was outspent by the states of the Gulf Cooperation Council by a factor of 13.Parsi, Trita and Cullis, Tyler. (July 10, 2015) "The Myth of the Iranian Military Giant" Foreign Policy. Retrieved 11 July 2015.Foreign Policy website
Iran supports the military activities of its allies in Syria, Iraq, and Lebanon (Hezbollah) with thousands of rockets and missiles.Karam, Joyce & Gutman, Roy, presenters. (5 August 2015) "Middle East Institute: "Iran Nuclear Agreement and Middle East Relations". Washington, DC: Johns Hopkins School of Advanced International Studies. Retrieved 5 August 2015. C-Span website
Since the 1979 Revolution, to overcome foreign embargoes, Iran has developed its own military industry and produced its own tanks, armored personnel carriers, guided missiles, submarines, military vessels, guided missile destroyers, radar systems, helicopters, and fighter planes. In recent years, official announcements have highlighted the development of weapons such as the Hoot, Kowsar, Zelzal, Fateh-110, Shahab-3 and Sejjil missiles, and a variety of unmanned aerial vehicles (UAVs). Iran has the largest and most diverse ballistic missile arsenal in the Middle East."Are the Iran nuclear talks heading for a deal?". BBC News Online. Retrieved: August 4, 2016. The Fajr-3 (MIRV), a liquid fuel missile with an undisclosed range which was developed and produced domestically, is currently the most advanced ballistic missile of the country.
Economy
thumb|270px|Iran's provinces by their contribution to national GDP, 2014.
Iran's economy is a mixture of central planning, state ownership of oil and other large enterprises, village agriculture, and small-scale private trading and service ventures. In 2014, GDP was $404.1 billion ($1.334 trillion at PPP), or $17,100 at PPP per capita. Iran is ranked as an upper-middle income economy by the World Bank. In the early 21st century the service sector contributed the largest percentage of the GDP, followed by industry (mining and manufacturing) and agriculture.Iran Investment Monthly. Turquoise Partners (April 2012). Retrieved 24 July 2012.
The Central Bank of the Islamic Republic of Iran is responsible for developing and maintaining the Iranian rial, which serves as the country's currency. The government does not recognize trade unions other than the Islamic Labour Councils, which are subject to the approval of employers and the security services. The minimum wage in June 2013 was 487 million rials a month ($134). Unemployment has remained above 10% since 1997, and the unemployment rate for women is almost double that of men.
In 2006, about 45% of the government's budget came from oil and natural gas revenues, and 31% came from taxes and fees. Iran had earned $70 billion in foreign exchange reserves, mostly (80%) from crude oil exports. Iranian budget deficits have been a chronic problem, mostly due to large-scale state subsidies that include foodstuffs and especially gasoline, totaling more than $84 billion in 2008 for the energy sector alone. In 2010, an economic reform plan was approved by parliament to cut subsidies gradually and replace them with targeted social assistance. The objective is to move towards free market prices over a 5-year period and to increase productivity and social justice.
thumb|Tehran is the economic center of Iran, hosting 45% of the country's industries.
The administration continues to follow the market reform plans of the previous one, and has indicated that it will diversify Iran's oil-reliant economy. Iran has also developed a biotechnology, nanotechnology, and pharmaceuticals industry. However, nationalized industries such as the bonyads have often been managed badly, making them ineffective and uncompetitive over the years. Currently, the government is trying to privatize these industries, and, despite some successes, there are still several problems to be overcome, such as persistent corruption in the public sector and lack of competitiveness. In 2010, Iran was ranked 69th out of 139 nations in the Global Competitiveness Report.
Iran has leading manufacturing industries in the Middle East in the fields of automobile manufacturing and transportation, construction materials, home appliances, food and agricultural goods, armaments, pharmaceuticals, information technology, power, and petrochemicals. According to the FAO, Iran was among the world's top five producers of the following agricultural products in 2012: apricots, cherries, sour cherries, cucumbers and gherkins, dates, eggplants, figs, pistachios, quinces, walnuts, and watermelons.
Economic sanctions against Iran, such as the embargo against Iranian crude oil, have affected the economy. Sanctions led to a steep fall in the value of the rial: as of April 2013, one US dollar was worth 36,000 rials, compared with 16,000 in early 2012. In 2015, Iran and the P5+1 reached a deal on the nuclear program that removed the main sanctions pertaining to Iran's nuclear program by 2016.Bijan Khajehpour: Preventing Iran's post-sanctions job crisis. Al-Monitor, July 17, 2015. Retrieved July 27, 2015.
Tourism
thumb|Over 1 million tourists visit Kish Island each year.
Although tourism declined significantly during the war with Iraq, it has subsequently recovered. About 1,659,000 foreign tourists visited Iran in 2004, and 2.3 million in 2009, mostly from Asian countries, including the republics of Central Asia, while about 10% came from the European Union and North America.Iran hosted 2.3 million tourists this year. PressTV, March 19, 2010. Retrieved March 22, 2011. Over five million tourists visited Iran in the fiscal year of 2014–2015, ending March 21, four percent more year-on-year.
Alongside the capital, the most popular tourist destinations are Isfahan, Mashhad and Shiraz.Sightseeing and excursions in Iran . Tehran Times, September 28, 2010. Retrieved March 22, 2011. In the early 2000s, the industry faced serious limitations in infrastructure, communications, industry standards and personnel training. The majority of the 300,000 tourist visas granted in 2003 were obtained by Asian Muslims, who presumably intended to visit important pilgrimage sites in Mashhad and Qom. Several organized tours from Germany, France and other European countries come to Iran annually to visit archaeological sites and monuments. In 2003, Iran ranked 68th in tourism revenues worldwide.Iran ranks 68th in tourism revenues worldwide. Payvand/IRNA, September 7, 2003. Retrieved February 12, 2008. According to UNESCO and the deputy head of research for Iran Travel and Tourism Organization (ITTO), Iran is rated 4th among the top 10 destinations in the Middle East. Domestic tourism in Iran is one of the largest in the world. Weak advertising, unstable regional conditions, a poor public image in some parts of the world, and absence of efficient planning schemes in the tourism sector have all hindered the growth of tourism.
Since the removal of some sanctions against Iran in 2015, tourism has re-surged in the country.
Energy
thumb|Iran holds 10% of the world's proven oil reserves and 15% of its gas. It is OPEC's second largest exporter and the world's fourth oil producer.
Iran has the second largest proved gas reserves in the world after Russia, with 33.6 trillion cubic metres, and the third largest natural gas production in the world after Indonesia and Russia. It also ranks fourth in oil reserves, with an estimated 153.6 billion barrels. It is OPEC's second largest oil exporter and is an energy superpower.
In 2005, Iran spent US$4 billion on fuel imports, because of contraband and inefficient domestic use. Oil industry output in 2005 remained well below the peak of six million barrels per day reached in 1974. In the early 2000s, industry infrastructure was increasingly inefficient because of technological lags. Few exploratory wells were drilled in 2005.
In 2004, a large share of Iran's natural gas reserves remained untapped. The addition of new hydroelectric stations and the streamlining of conventional coal- and oil-fired stations increased installed capacity to 33,000 megawatts. Of that amount, about 75% was based on natural gas, 18% on oil, and 7% on hydroelectric power. In 2004, Iran opened its first wind-powered and geothermal plants, and the first solar thermal plant was scheduled to come online in 2009. Iran is the third country in the world to have developed GTL technology.
Demographic trends and intensified industrialization have caused electric power demand to grow by 8% per year. The government's goal of 53,000 megawatts of installed capacity by 2010 was to be reached by bringing new gas-fired plants online and by adding hydroelectric and nuclear power generating capacity. Iran's first nuclear power plant, at Bushehr, went online in 2011. It is the second nuclear power plant ever built in the Middle East, after the Metsamor Nuclear Power Plant in Armenia.
Education, science and technology
upright|thumb|An 18th-century Persian astrolabe.
Education in Iran is highly centralized. K-12 education is supervised by the Ministry of Education, and higher education is under the supervision of the Ministry of Science and Technology. The adult literacy rate was 93.0% in September 2015, up from 85.0% in 2008 and 36.5% in 1976.
The requirements to enter higher education are a high school diploma and a passing result on the national university entrance examination, the Iranian University Entrance Exam (known as the concour), which is the equivalent of the US SAT exams. Many students take a 1–2 year pre-university course (piš-dānešgāh), the equivalent of GCE A-levels and the International Baccalaureate. Completion of the pre-university course earns students the Pre-University Certificate.
thumb|left|Central office of the FUM.
Higher education is sanctioned by different levels of diplomas. Kārdāni (associate degree; also known as fowq e diplom) is delivered after 2 years of higher education; kāršenāsi (bachelor's degree; also known as licāns) is delivered after 4 years of higher education; and kāršenāsi e aršad (master's degree) is delivered after 2 more years of study, after which another exam allows the candidate to pursue a doctoral program (PhD; known as doctorā).
According to the Webometrics Ranking of World Universities, the top-ranking universities in the country are the University of Tehran (468th worldwide), the Tehran University of Medical Sciences (612th) and Ferdowsi University of Mashhad (815th).
Iran has increased its publication output nearly tenfold from 1996 through 2004, and has been ranked first in terms of output growth rate, followed by China. According to SCImago, Iran could rank fourth in the world in terms of research output by 2018, if the current trend persists.
thumb|200px|Safir rocket – Iran is the ninth country to put a domestically built satellite into orbit and the sixth to send animals in space.
In 2009, a SUSE Linux-based HPC system made by the Aerospace Research Institute of Iran (ARI) was launched with 32 cores, and now runs 96 cores. Its performance was pegged at 192 GFLOPS. The Sorena 2 robot, designed by engineers at the University of Tehran, was unveiled in 2010. The Institute of Electrical and Electronics Engineers (IEEE) placed Surena among the five most prominent robots of the world after analyzing its performance.
thumb|left|Production line for AryoSeven, inside the biopharmaceutical company of AryoGen.
In the biomedical sciences, Iran's Institute of Biochemistry and Biophysics is a UNESCO chair in biology. In late 2006, Iranian scientists successfully cloned a sheep by somatic cell nuclear transfer, at the Royan Research Center in Tehran.
According to a study by David Morrison and Ali Khadem Hosseini (Harvard-MIT and Cambridge), stem cell research in Iran is amongst the top 10 in the world. Iran ranks 15th in the world in nanotechnologies.
Iran placed its domestically built satellite, Omid, into orbit on the 30th anniversary of the 1979 Revolution, on 2 February 2009, using the Safir rocket, becoming the ninth country in the world capable of both producing a satellite and sending it into space from a domestically made launcher.
The Iranian nuclear program was launched in the 1950s. Iran is the seventh country to produce uranium hexafluoride, and controls the entire nuclear fuel cycle.
Iranian scientists outside Iran have also made some major contributions to science. In 1960, Ali Javan co-invented the first gas laser, and fuzzy set theory was introduced by Lotfi Zadeh. Iranian cardiologist Tofigh Mussivand invented and developed the first artificial cardiac pump, the precursor of the artificial heart. Furthering research and treatment of diabetes, HbA1c was discovered by Samuel Rahbar. Iranian physics is especially strong in string theory, with many papers being published in Iran. Iranian-American string theorist Cumrun Vafa proposed the Vafa-Witten theorem together with Edward Witten.
In August 2014, Maryam Mirzakhani became the first-ever woman, as well as the first-ever Iranian, to receive the Fields Medal, the highest prize in mathematics.
Demographics
thumb|270px|Iran's provinces by population, 2014.
thumb|270px|Iran's provinces by their population density, 2013.
Iran is a diverse country, consisting of many religious and ethnic groups that are unified through a shared Iranian language and culture.
Iran's population grew rapidly during the latter half of the 20th century, increasing from about 19 million in 1956 to around 75 million by 2009. However, Iran's birth rate has dropped significantly in recent years, leading to a population growth rate—recorded from July 2012—of about 1.29%. Studies project that the growth will continue to slow until it stabilizes above 105 million by 2050.U.S. Bureau of the Census, 2005. Unpublished work tables for estimating Iran’s mortality. Washington, D.C.:
Population Division, International Programs Center
Iran hosts one of the largest refugee populations in the world, with more than one million refugees, mostly from Afghanistan and Iraq. Since 2006, Iranian officials have been working with the UNHCR and Afghan officials for their repatriation. According to estimates, about five million Iranian citizens have emigrated to other countries, mostly since the 1979 Revolution.
According to the Iranian Constitution, the government is required to provide every citizen of the country with access to social security that covers retirement, unemployment, old age, disability, accidents, calamities, health and medical treatment and care services. This is covered by tax revenues and income derived from public contributions.
Languages
The majority of the population speak Persian, which is also the official language of the country. The remainder speak other Iranian languages within the greater Indo-European family, as well as the languages of Iran's other ethnic groups.
In northern Iran, mostly confined to Gilan and Mazenderan provinces, the Gilaki and Mazenderani languages are widely spoken. They both have affinities to the neighboring Caucasian languages. In parts of Gilan, the Talysh language is also widely spoken, which stretches up to the neighboring country of Azerbaijan. Kurdish is widely spoken in Kurdistan Province and nearby areas. In Khuzestan, many distinct Persian dialects are spoken. The Lurish and Lari languages are spoken in southwestern and southern Iran.
The Turkic languages and dialects, most importantly Azerbaijani Turkish, which is by far the most widely spoken language in the country after Persian,Annika Rabo, Bo Utas. The Role of the State in West Asia Swedish Research Institute in Istanbul, 2005 ISBN 9186884131 are spoken in various areas of Iran, but are especially widely and dominantly spoken in the provinces of Azerbaijan.
Notable minority languages in Iran include Armenian, Georgian, Neo-Aramaic, and Arabic. Khuzi Arabic is spoken by the Arabs in Khuzestan, and by the wider group of Iranian Arabs. Circassian was also once widely used by the large Circassian minority, but, due to assimilation over the many years, no sizable number of Circassians speak the language anymore.Encyclopedia of the Peoples of Africa and the Middle East. Facts On File, Incorporated. ISBN 143812676X. p. 141
The percentages of spoken languages remain a point of debate, as many argue that they are politically motivated, most notably regarding the largest and second-largest ethnicities in Iran, the Persians and Azerbaijanis. The following percentages are according to the CIA's World Factbook: 53% Persian, 16% Azerbaijani Turkish, 10% Kurdish, 7% Mazenderani and Gilaki, 7% Luri, 2% Turkmen, 2% Balochi, 2% Arabic, and 2% the remainder Armenian, Georgian, Neo-Aramaic, and Circassian.
Ethnic groups
As with the spoken languages, the ethnic group composition also remains a point of debate, mainly regarding the largest and second largest ethnic groups, the Persians and Azerbaijanis, due to the lack of Iranian state censuses based on ethnicity. The CIA's World Factbook has estimated that around 79% of the population of Iran belong to a diverse Indo-European ethno-linguistic group comprising the speakers of the Iranian languages,J. Harmatta in "History of Civilizations of Central Asia", Chapter 14, The Emergence of Indo-Iranians: The Indo-Iranian Languages, ed. by A. H. Dani & V.N. Masson, 1999, p. 357 with Persians (incl. Mazenderanis and Gilaks) constituting 61% of the population, Kurds 10%, Lurs 6%, and Balochs 2%. Peoples of the other ethno-linguistic groups make up the remaining 21%, with Azerbaijanis constituting 16%, Arabs 2%, Turkmens and Turkic tribes 2%, and others 1% (such as Armenians, Talysh, Georgians, Circassians, Assyrians).
The Library of Congress issued slightly different estimates: Persians 65% (incl. Mazenderanis, Gilaks and Talysh people), Azerbaijanis 16%, Kurds 7%, Lurs 6%, Baluchi 2%; Turkic tribal groups such as Qashqai 1%, and Turkmens 1%; and non-Iranian, non-Turkic groups such as Armenians, Georgians, Assyrians, Circassians, and Arabs less than 3%. It determined that Persian is the first language of at least 65% of the country's population, and is the second language for most of the remaining 35%.
thumb|400px|Ethnicities and religions in Iran.
Other non-governmental estimates regarding groups other than the Persians and Azerbaijanis are roughly congruent with the World Factbook and the Library of Congress. However, many scholarly and organisational estimates of the size of these two groups differ significantly from the figures mentioned above. According to many of them, ethnic Azerbaijanis comprise between 21.6% and 30% of the total population of Iran, with most estimates placing the figure at about 25%. Shaffer, Brenda (2003). Borders and Brethren: Iran and the Challenge of Azerbaijani Identity. MIT Press. pp. 221–225. ISBN 0-262-19477-5 "There is considerable lack of consensus regarding the number of Azerbaijanis in Iran ... Most conventional estimates of the Azerbaijani population range between one-fifth to one-third of the general population of Iran, the majority claiming one-fourth." – "Azerbaijani student groups in Iran claim that there are 27 million Azerbaijanis residing in Iran."
Minahan, James (2002). Encyclopedia of the Stateless Nations: S-Z. Greenwood Publishing Group. p. 1765. ISBN 978-0-313-32384-3 "Approximately (2002e) 18,500,000 Southern Azeris in Iran, concentrated in the northwestern provinces of East and West Azerbaijan. It is difficult to determine the exact number of Southern Azeris in Iran, as official statistics are not published detailing Iran's ethnic structure. Estimates of the Southern Azeri population range from as low as 12 million up to 40% of the population of Iran – that is, nearly 27 million..."Rasmus Christian Elling, Minorities in Iran: Nationalism and Ethnicity after Khomeini, Palgrave Macmillan, 2013. Excerpt: "The number of Azeris in Iran is heavily disputed. In 2005, Amanolahi estimated all Turkic-speaking communities in Iran to number no more than 9 million. CIA and Library of congress estimates range from 16 percent to 24 percent – that is, 12–18 million people if we employ the latest total figure for Iran's population (77.8 million). Azeri ethnicsts, on the other hand, argue that overall number is much higher, even as much as 50 percent or more of the total population. Such inflated estimates may have influenced some Western scholars who suggest that up to 30 percent (that is, some 23 million today) Iranians are Azeris." Ali Gheissari. Contemporary Iran: Economy, Society – Politics: Economy, Society, Politics. Page 300. "Azeri ethnonationalist activist, however, claim that number to be 24 million, hence as high as 35 percent of the Iranian population." Oxford University Press. 2 April 2009. In any case, the largest population of Azerbaijanis in the world live in Iran.
Religion
Iranian people by religion, 2011 General Census results:
Muslim: 99.3989% (74,682,938)
Christian: 0.1566% (117,704)
Jewish: 0.0117% (8,756)
Zoroastrian: 0.0336% (25,271)
Other: 0.0653% (49,101)
Not declared: 0.3538% (205,317)
Historically, Proto-Iranian religion and the subsequent Zoroastrianism and Manichaeism were the dominant religions in Iran, particularly during the Median, Achaemenid, Parthian and Sassanid empires. This changed after the fall of the Sassanid Empire by the Muslim Conquest of Iran. Iran was predominantly Sunni until the conversion of the country (as well as the people of what is today the neighboring Republic of Azerbaijan) to Shia Islam by the order of the Safavid dynasty in the 16th century.
Today, the Twelver Shia Islam is the official state religion, to which about 90% to 95% of the population officially belong. About 4% to 8% of the population are Sunni Muslims, mainly Kurds and Balochs. The remaining 2% are non-Muslim religious minorities, including Christians, Jews, Bahais, Mandeans, Yezidis, Yarsanis, and Zoroastrians.
Judaism has a long history in Iran, dating back to the Achaemenid Conquest of Babylonia. Although many left in the wake of the establishment of the State of Israel and the 1979 Revolution, around 8,756 Jews remain in Iran, according to the latest census. Iran has the largest Jewish population in the Middle East outside of Israel.
Around 250,000–370,000 Christians reside in Iran,Country Information and Guidance "Christians and Christian converts, Iran" December 2014. p.9 and it is the largest recognized minority religion in the nation. Most are of Armenian background with a sizable minority of Assyrians as well.
Christianity, Judaism, Zoroastrianism, and the Sunni branch of Islam are officially recognized by the government, and have reserved seats in the Iranian Parliament. But the Bahá'í Faith, which is said to be the largest non-Muslim religious minority in Iran, is not officially recognized, and has been persecuted throughout its existence in Iran since the 19th century. Since the 1979 Revolution, the persecution of Bahais has increased, with executions, the denial of civil rights and liberties, and the denial of access to higher education and employment.
The government has not released statistics regarding irreligiosity. However, the irreligious figures are growing and are higher in the diaspora, notably among Iranian Americans.Public Opinion Survey of Iranian Americans. Public Affairs Alliance of Iranian Americans (PAAIA)/Zogby, December 2008. Retrieved April 11, 2014.
Culture
The earliest recorded cultures within the region of Iran date back to the Lower Paleolithic era.
Owing to its dominant geopolitical position and culture in the world, Iran has directly influenced cultures as far away as Greece, Macedonia, and Italy to the West, Russia to the North, the Arabian Peninsula to the South, and indirectly South and East Asia to the East.
Art
thumb|Ceiling of the Lotfollah Mosque.
thumb|A Safavid painting kept at the Abbasi Caravanserai in Isfahan.
Iranian works of art show a great variety in style, in different regions and periods.[F. Hole and K. V. Flannery, Proceedings of the Prehistoric Society, 1968] Iranian art encompasses many disciplines, including architecture, painting, weaving, pottery, calligraphy, metalworking, and stonemasonry. The Median and Achaemenid empires left a significant classical art scene which remained a basic influence for the art of later eras. The art of the Parthians was a mixture of Iranian and Hellenistic artworks, with their main motifs being scenes of royal hunting expeditions and investitures. Sassanid art played a prominent role in the formation of both European and Asian medieval art and carried forward into the Islamic world, and much of what later became known as Islamic learning, such as philology, literature, jurisprudence, philosophy, medicine, architecture, and science, had a Sassanid basis.
There is also a vibrant Iranian modern and contemporary art scene, with its genesis in the late 1940s. The 1949 Apadana Gallery of Tehran, which was operated by Mahmoud Javadi Pour and other colleagues, and the emergence of artists such as Marcos Grigorian in the 1950s, signaled a commitment to the creation of a form of modern art grounded in Iran.
Iranian carpet-weaving dates back to the Bronze Age, and is one of the most distinguished manifestations of the art of Iran. Iran is the world's largest producer and exporter of handmade carpets, producing three quarters of the world's total output and having a share of 30% of world's export markets.
Iran is also home to one of the largest jewel collections in the world.
Architecture
The history of Iranian architecture goes back to the 7th millennium BC.Arthur Pope, Introducing Persian Architecture. Oxford University Press. London. 1971. Iranians were among the first to use mathematics, geometry and astronomy in architecture.
Iranian architecture displays great variety, both structural and aesthetic, developing gradually and coherently out of earlier traditions and experience.Arthur Upham Pope. Persian Architecture. George Braziller, New York, 1965. p.266 The guiding motifs of Iranian architecture are unity, continuity and cosmic symbolism.Nader Ardalan and Laleh Bakhtiar. Sense of Unity; The Sufi Tradition in Persian Architecture. 2000. ISBN 1-871031-78-8
Iran ranks seventh among countries with the most archaeological architectural ruins and attractions from antiquity, as recognized by UNESCO.
Literature
thumb|right|Mausoleum of Ferdowsi in Tus.
Iranian literature is one of the world's oldest literatures, dating back to the poetry of Avesta and Zoroastrian literature.
Poetry is used in many Iranian classical works, whether in literature, science, or metaphysics. The Persian language has been described as a worthy conduit for poetry, and is considered one of the four main bodies of world literature. Dialects of Persian are sporadically spoken throughout regions from China to Syria and Russia, though mainly in the Iranian Plateau.Arthur John Arberry, The Legacy of Persia, Oxford: Clarendon Press, 1953, ISBN 0-19-821905-9, p. 200.David Levinson; Karen Christensen, Encyclopedia of Modern Asia, Charles Scribner's Sons. 2002 p. 48
Iran has a number of famous poets; most notably Rumi, Ferdowsi, Hafez, Saadi Shirazi, Khayyám Ney-Shapuri, and Nezami Ganjavi. Historically, Iranian literature has inspired writers including Johann Wolfgang von Goethe, Henry David Thoreau, and Ralph Waldo Emerson.
Philosophy
thumb|left|alt=The Farohar Symbol from Persepolis|Depiction of Farvahar in Persepolis.
Iranian philosophy originates in Indo-Iranian roots, with Zarathustra's teachings having had major influences.
According to the Oxford Dictionary of Philosophy, the chronology of the subject and science of philosophy starts with the Indo-Iranians, dating this event to 1500 BC. The Oxford dictionary also states, "Zarathushtra's philosophy entered to influence Western tradition through Judaism, and therefore on Middle Platonism."
While there are ancient relations between the Indian Vedas and the Iranian Avesta, the two main families of the Indo-Iranian philosophical traditions were characterized by fundamental differences, especially in their implications for the human being's position in society and their view of man's role in the universe.
The Cyrus cylinder, which is known as "the first charter of human rights", is often seen as a reflection of the questions and thoughts expressed by Zarathustra, and developed in Zoroastrian schools of the Achaemenid Era.Philip G. Kreyenbroek: "Morals and Society in Zoroastrian Philosophy" in "Persian Philosophy". Companion Encyclopedia of Asian Philosophy: Brian Carr and Indira Mahalingam. Routledge, 2009.Mary Boyce: "The Origins of Zoroastrian Philosophy" in "Persian Philosophy". Companion Encyclopedia of Asian Philosophy: Brian Carr and Indira Mahalingam. Routledge, 2009.
The earliest tenets of Zoroastrian schools are part of the extant scriptures of the Zoroastrian religion in the Avestan language. Among them are treatises such as the Shikand-gumanic Vichar, Denkard, Zātspram, as well as older passages of Avesta, and the Gathas.An Anthology of Philosophy in Persia. From Zoroaster to 'Umar Khayyam. S. H. Nasr & M. Aminrazavi. I. B. Tauris Publishers, London & New York, 2008. ISBN 978-1845115418.
Mythology
thumb|Statue of Arash the Archer at the Sadabad Complex.
Iranian mythology consists of ancient Iranian folklore and stories, all involving extraordinary beings. They reflect attitudes towards the confrontation of good and evil, actions of the gods, and the exploits of heroes and fabulous creatures.
Myths play a crucial part in the culture of Iran, and understanding of them is increased when they are considered within the context of actual events in the history of Iran. The geography of Greater Iran, a vast area covering the present-day Iran, the Caucasus, Anatolia, Mesopotamia and Central Asia, with its high mountain ranges, plays the main role in much of the Iranian mythology.
Shahnameh of Ferdowsi is the main collection of the mythology of Iran, which draws heavily on the stories and characters of Zoroastrianism, from the texts of Avesta, Denkard and Bundahishn.
Observances
thumb|right|Haft-Seen (or Haft-Čin), a customary of the Iranian New Year.
Iran has three official calendar systems: the Solar Hijri calendar as the main calendar, the Gregorian calendar for international and Christian events, and the Lunar Islamic calendar for Islamic events.
Iran's main national annual celebration is Nowruz, an ancient tradition celebrated on 21 March to mark the beginning of spring and the Iranian New Year. It is enjoyed by people of different religions, but is a holiday for Zoroastrians. It was registered on the list of Masterpieces of the Oral and Intangible Heritage of Humanity, and was described as the Persian New Year by UNESCO in 2009.
Other national annual celebrations of Iran include:
Čā'r Šanbe Suri: A prelude to Nowruz, in honor of Ātar (the Holy Fire), celebrated by fireworks and fire-jumping, on the last Wednesday eve of the year.
Sizde be Dar: Leaving the house to spend the day in nature, on the thirteenth day of the New Year (April 2).
Čelle ye Zemestān: Also known as Yaldā; the longest night of the year, celebrated on the eve of Winter Solstice, by reciting poetry and having the customary fruits which include watermelon, pomegranate and mixed nuts.
Tirgān: A mid summer festival, in honor of Tishtrya, celebrated on Tir 13 (July 4), by splashing water, reciting poetry and having traditional dishes such as šole-zard and spinach soup.
Mehrgān: An autumn festival, in honor of Mithra, celebrated on Mehr 16 (October 8), by family gathering and setting a table of sweets, flowers and a mirror.
Sepand Ārmazgān: Dedicated to Ameša Spenta (the Holy Devotion); celebrated by giving presents to partners, on Esfand 15 (February 24).
Along with the national celebrations, annual observances such as Ramezān, Eid e Fetr, and Ruz e Āšurā are marked by Muslims; Noel, Čelle ye Ruze, and Eid e Pāk are celebrated by Christians; and the festivals Purim, Eid e Fatir, and Tu Bišvāt are celebrated by Jewish people in Iran.
Music
thumb|Karna, an ancient Iranian musical instrument from the 6th century BC.
Iran is the apparent birthplace of the earliest complex instruments, as evidenced by the archaeological records found in Western Iran, dating back to the 3rd millennium BC.Third Millennium BC: Arched Harps In Western Iran, Encyclopædia Iranica The Iranian use of both vertical and horizontal angular harps have been documented at the sites Madaktu and Kul-e Farah, with the largest collection of Elamite instruments documented at Kul-e Farah. Multiple depictions of horizontal harps were also sculpted in Assyrian palaces, dating back between 865 and 650 BC.
Xenophon's Cyropaedia refers to a great number of singing women at the court of the Achaemenid Iran. Athenaeus of Naucratis states that, by the time of the last Achaemenid king, Artashata (336–330 BC), Achaemenid singing girls were captured by the Macedonian general, Parmenion.[The Deipnosophistae, Athenaeus] Under the Parthian Empire, a type of epic music was taught to youth, depicting the national epics and myths which were later represented in the Shahnameh of Ferdowsi.["Parthians taught their young men songs about the deeds both of gods and of the noblest men." – Strabo'sGeographica, 15.3.18]
The history of Sassanid music is better documented than that of earlier periods, and is especially evident in Zoroastrian contexts. iv. First millennium C.E. (1) Sasanian music, 224–651. By the time of Khosrow II, the Sassanid royal court hosted prominent musicians, namely Ramtin, Bamshad, Nakisa, Azad, Sarkash, and Barbad.
thumb|left|A 17th-century Safavid painting depicting a banquet at Hasht Behesht.
Some Iranian traditional musical instruments include saz, Persian tar, Azerbaijani tar, dotar, setar, kamanche, harp, barbat, santur, tanbur, qanun, dap, tompak, and ney.
thumb|The National Orchestra of Iran, conducted by Khaleghi in the 1940s.
The first national music society of modern-day Iran was founded by Rouhollah Khaleghi in the 1940s, with the School of National Music established in 1949. Today, the main orchestras of Iran include the National Orchestra, the Nations Orchestra, and the Symphony Orchestra of Tehran.
Iranian pop music emerged during the Qajar era. It developed rapidly in the 1950s with the emergence of stars such as Viguen, who was referred to as the king of Persian pop and jazz. The 1970s are known as a "Golden Age" of Iranian pop music, when the country's music industry was transformed by combining indigenous instruments and forms with the electric guitar. Hayedeh, Faramarz Aslani, Farhad Mehrad, Googoosh, and Ebi are among the leading artists of this period.
The emergence of genres such as modern rock in the 1970s and hip hop in the 1980s also brought major movements and influences into Iranian music, displacing older styles among the youth.
Theater
Iran's theatrical tradition dates back to antiquity. The earliest recorded representations of dancing figures within Iran were found at prehistoric sites such as Tepe Sialk and Tepe Mūsīān.
The earliest forms of theater and acting among the people of Iran can be traced to epic ceremonial performances, such as Soug e Sivash and Mogh Koshi (Megakhouni), and to the dances and theatrical narrations of Iranian mythological tales reported by Herodotus and Xenophon.
There are several theatrical genres which emerged before the advent of cinema in Iran, including Xeyme Shab Bazi (Puppetry), Saye Bazi (Shadow play), Ru-howzi (Comical plays), and Tazieh (Sorrow plays).
thumb|The Roudaki Hall, Tehran.
Before the 1979 Revolution, the Iranian national stage had become a renowned performing scene for well-known international artists and troupes,Kiann, Nima (2015). The History of Ballet in Iran. Wiesbaden: Reichert Publishing with the Roudaki Hall of Tehran constructed to function as the national stage for opera and ballet. Opened on October 26, 1967, the hall is home to the Symphony Orchestra of Tehran, the Opera Orchestra of Tehran, and the Iranian National Ballet Company, and now operates under the official name Vahdat Hall.
The opera Rostam o Sohrab, based on the epic of Rostam and Sohrab from Shahnameh, is an example of opera performances in the modern-day Iran.
Cinema and animation
The earliest examples of visual representations in Iranian history are traced back to the bas-reliefs of Persepolis, c. 500 BC. Persepolis was the ritual center of the ancient kingdom of Achaemenids, and the figures at Persepolis remain bound by the rules of grammar and syntax of visual language.Honour, Hugh and John Fleming, The Visual Arts: A History. New Jersey, Prentice Hall Inc., 1992. Page: 96. The Iranian visual arts reached a pinnacle by the Sassanid Era. A bas-relief from this period in Taq Bostan depicts a complex hunting scene. Similar works from the period have been found to articulate movements and actions in a highly sophisticated manner. It is even possible to see a progenitor of the cinema close-up in one of these works of art, which shows a wounded wild pig escaping from the hunting ground.
By the early 20th century, the then five-year-old medium of cinema had reached Iran. The first Iranian filmmaker was Mirza Ebrahim Khan (Akkas Bashi), the official photographer of Mozaffar od Din Shah of the Qajar dynasty. He obtained a camera and filmed the Shah's visit to Europe.
In 1904, Mirza Ebrahim Khan (Sahhaf Bashi) opened the first movie theater in Tehran. After him, several others like Russi Khan, Ardeshir Khan, and Ali Vakili tried to establish new movie theaters in Tehran. Until the early 1930s, there were around 15 cinema theaters in Tehran and 11 in other provinces.
The first silent Iranian film was made by Professor Ovanes Ohanian in 1930, and the first sound film, Lor Girl, was made by Abd ol Hossein Sepanta in 1932.
thumb|left|upright|Behrouz Vossoughi, a well-known Iranian actor who has appeared in over 90 films.
The 1960s was a significant decade for Iranian cinema, with 25 commercial films produced annually on average throughout the early 60s, increasing to 65 by the end of the decade. The majority of production focused on melodrama and thrillers. With the screening of the films Kaiser and The Cow, directed by Masoud Kimiai and Dariush Mehrjui respectively in 1969, alternative films set out to establish their status in the film industry, and Bahram Beyzai's Downpour and Nasser Taghvai's Tranquility in the Presence of Others followed soon after. Attempts to organize a film festival, which had begun in 1954 within the framework of the Golrizan Festival, bore fruit in the form of the Sepas Festival in 1969. These endeavors also resulted in the formation of the Tehran World Festival in 1973.
thumb|upright|Abbas Kiarostami was an admired Iranian film director.
After the Revolution of 1979, as the new government imposed new laws and standards, a new age in Iranian cinema emerged, starting with Viva... by Khosrow Sinai and followed by many other directors, such as Abbas Kiarostami and Jafar Panahi. Kiarostami, an admired Iranian director, planted Iran firmly on the map of world cinema when he won the Palme d'Or for Taste of Cherry in 1997. The continuous presence of Iranian films in prestigious international festivals, such as the Cannes Film Festival, the Venice Film Festival, and the Berlin International Film Festival, attracted world attention to Iranian masterpieces. In 2006, six Iranian films, of six different styles, represented Iranian cinema at the Berlin International Film Festival. Critics considered this a remarkable event in the history of Iranian cinema.
Asghar Farhadi, a well-known Iranian director, has received a Golden Globe Award and an Academy Award for Best Foreign Language Film, and was named as one of the 100 Most Influential People in the world by Time Magazine in 2012.
thumb|Reproduction of the world's oldest example of animation, dating back to the latter half of the 3rd millennium BC, found in the Burnt City, Iran
The oldest records of animation in Iran date back to the late 3rd millennium BC. An earthen goblet discovered at the site of the 5,200-year-old Burnt City in southeastern Iran, depicts what could possibly be the world’s oldest example of animation. The artifact bears five sequential images depicting a Persian ibex jumping up to eat the leaves of a tree.
The art of animation, as practiced in modern Iran, started in the 1950s. After four decades of Iranian animation production and the three-decade experience of the Kanoon Institute, the Tehran International Animation Festival (TIAF) was established in February 1999. Every two years, participants from more than 70 countries attend this event in Tehran, which hosts Iran's biggest national animation market.
Media
Iran's telecommunications are handled by the state-owned Telecommunication Company of Iran. Almost all media outlets in Iran are state-owned or subject to official monitoring. Works such as books, movies and music albums must be approved by the Ministry of Ershad before being released to the public.
Most of the newspapers published in Iran are in Persian. The most widely circulated periodicals of the country are based in Tehran. Iran's widespread daily and weekly newspapers include Ettela'at, Kayhan, Hamshahri and Resalat. Tehran Times, Iran Daily, and Financial Tribune are among the English language newspapers based in Iran.
Television was introduced to Iran in 1958. Although the 1974 Asian Games were broadcast in color, full color programming began in 1978. Since the 1979 Revolution, Iran's largest media corporation has been the Islamic Republic of Iran Broadcasting (IRIB). Over 30 percent of Iranians are reported to watch satellite channels, though observers state that the actual figures are likely to be higher.<ref>France 24: Iran’s war on satellite dishes: “We just buy new ones the next day”</ref>
Iran received access to the Internet in 1993. According to a 2014 census, around 40% of the population of Iran are Internet users.World Bank: Internet users as percentage of population Iran ranks 24th among countries by number of Internet users. According to the statistics provided by the web information company Alexa, Google Search and Yahoo! are the most used search engines in Iran.Alexa Internet: Top Sites in Iran. Reviewed on April 19, 2016. Over 80% of the users of Telegram, a cloud-based instant messaging service, are from Iran.Alexa Internet: How popular is telegram.me?. March 29, 2016. Instagram is the most popular online social networking service in Iran. Direct access to Facebook has been blocked in Iran since the 2009 Iranian presidential election protests, due to the organization of opposition movements on the website; however, Facebook has around 12 to 17 million users in Iran who use virtual private networks and proxy servers to access the website.France 24: How Iranian authorities break their own censorship laws. March 23, 2016. Around 90% of Iran's e-commerce takes place on the Iranian online store Digikala, which has around 750,000 visitors per day and more than 2.3 million subscribers.The Guardian: From Digikala to Hamijoo: the Iranian startup revolution, phase two. Saeed Kamali Dehghan. May 13, 2015. Digikala is the most visited online store in the Middle East, and ranks 4th among the most visited websites in Iran.
Sports
thumb|left|The Azadi Stadium, Tehran.
With two-thirds of its population under the age of 25, Iran is home to many sports, both traditional and modern.
Iran is most likely the birthplace of polo, which is locally known as čowgān, with its earliest records attributed to the ancient Medes.
Freestyle wrestling has been traditionally regarded as the national sport of Iran, and the national wrestlers have been Olympic and world champions on many occasions. Iran's traditional wrestling called košti e pahlevāni ("the heroic wrestling") is registered on the UNESCO's intangible cultural heritage list.
thumb|Weightlifter Kianoush Rostami wins gold at the 2016 Summer Olympics.
Iran's National Olympic Committee was founded in 1947. Wrestlers and weightlifters have achieved the country's highest records at the Olympics.
Soccer has been regarded as the most popular sport in Iran, with the men's national team having won the Asian Cup on three occasions. The national team has maintained its position as the best Asian squad, ranking 1st in Asia and 39th in the world according to the FIFA World Rankings.
Volleyball is the second most popular sport in Iran. The men's national team is currently the strongest team in Asia, having won the 2011 and 2013 Asian Men's Volleyball Championships, and ranks 8th in the FIVB World Rankings (as of July 2016).
Basketball is also popular, with the men's national team having won three Asian Championships since 2007.
thumb|left|Skiers at the Dizin Ski Resort.
Being a mountainous country, Iran is a venue for skiing, snowboarding, hiking, rock climbing, and mountain climbing.
Iran is home to several ski resorts, the most famous being Tochal, Dizin and Shemshak, all within one to three hours' travel of the capital city, Tehran. The resort of Tochal, located in the Alborz mountain range, is the world's fifth-highest ski resort ( at its highest station). Potentially suitable terrain can also be found in Lorestan, Mazandaran and other provinces.
In September 1974, Iran became the first country in West Asia to host the Asian Games. The Azadi Sport Complex, which is the largest sport complex in Iran, was originally built for this occasion.
In 2016, Iran made global headlines when international female champions boycotted tournaments held in Iran, in chess (U.S. Woman Grandmaster Nazi Paikidze) and in shooting (Indian world champion Heena Sidhu), refusing to compete in a country where they would be forced to wear a hijab.
Cuisine
thumb|Kuku Sabzi with herbs, topped with barberries and walnuts
Iranian cuisine is diverse due to its variety of ethnic groups and the influence of other cultures. Herbs are frequently used along with fruits such as plums, pomegranates, quince, prunes, apricots, and raisins. Iranians usually eat plain yogurt with lunch and dinner; it is a staple of the diet in Iran. To achieve a balanced taste, characteristic flavourings such as saffron, dried limes, cinnamon, and parsley are mixed delicately and used in some special dishes. Onions and garlic are normally used in the preparation of the accompanying course, but are also served separately during meals, either in raw or pickled form. Iran is also famous for its caviar.
See also
List of Iran-related topics
Outline of Iran
Notes
Bibliography
Iran: A Country Study. 2008, Washington, D.C.: Library of Congress, 354 pp.
References
External links
The e-office of the Supreme Leader of Iran
The President of Iran
Iran.ir
Videos
Iran—Weekly program that explores Iran's past, present and future with exclusive reports. (PressTV)
Category:Caspian littoral states
Category:Developing 8 Countries member states
Category:G15 nations
Category:Iranian Plateau
Category:Islamic republics
Category:Member states of OPEC
Category:Member states of the Organisation of Islamic Cooperation
Category:Member states of the United Nations
Category:Middle Eastern countries
Category:Muslim-majority countries
Category:Near Eastern countries
Category:Persian-speaking countries and territories
Category:States and territories established in the 6th century BC
Category:6th-century BC establishments in Asia | 14,653 | 2017-01 |
Labour Party (UK) | The Labour Party is a centre-left political party in the United Kingdom. Growing out of the trade union movement and socialist parties of the nineteenth century, the Labour Party has been described as a "broad church", encompassing a diversity of ideological trends from strongly socialist to moderately social democratic.
Founded in 1900, the Labour Party overtook the Liberal Party as the main opposition to the Conservative Party in the early 1920s, forming minority governments under Ramsay MacDonald in 1924 and from 1929 to 1931. Labour later served in the wartime coalition from 1940 to 1945, after which it formed a majority government under Clement Attlee. Labour was also in government from 1964 to 1970 under Harold Wilson and from 1974 to 1979, first under Wilson and then James Callaghan.
The Labour Party was last in government from 1997 to 2010 under Tony Blair and Gordon Brown, beginning with a landslide majority of 179, reduced to 167 in 2001 and 66 in 2005. Having won 232 seats in the 2015 general election, the party is the Official Opposition in the Parliament of the United Kingdom.
The Labour Party is the largest party in the Welsh Assembly, the third largest party in the Scottish Parliament and has twenty MEPs in the European Parliament, sitting in the Socialists and Democrats Group. The party also organises in Northern Ireland, but does not contest elections to the Northern Ireland Assembly. The Labour Party is a full member of the Party of European Socialists and Progressive Alliance, and holds observer status in the Socialist International. In September 2015, Jeremy Corbyn was elected Leader of the Labour Party.
History
Founding
The Labour Party originated in the late 19th century, when it became apparent that there was a need for a new political party to represent the interests and needs of the urban proletariat, a demographic which had increased in number and had recently been given the franchise.See, for instance, the 1899 Lyons vs. Wilkins judgement, which limited certain types of picketing. Some members of the trades union movement became interested in moving into the political field, and after further extensions of the voting franchise in 1867 and 1885, the Liberal Party endorsed some trade-union sponsored candidates. The first Lib–Lab candidate to stand was George Odger in the Southwark by-election of 1870. In addition, several small socialist groups had formed around this time, with the intention of linking the movement to political policies. Among these were the Independent Labour Party, the intellectual and largely middle-class Fabian Society, the Marxist Social Democratic Federation (see Martin Crick, The History of the Social-Democratic Federation) and the Scottish Labour Party.
In the 1895 general election, the Independent Labour Party put up 28 candidates but won only 44,325 votes. Keir Hardie, the leader of the party, believed that to obtain success in parliamentary elections, it would be necessary to join with other left-wing groups. Hardie's roots as a lay preacher contributed to an ethos in the party which led to the comment by 1950s General Secretary Morgan Phillips that "Socialism in Britain owed more to Methodism than Marx".p.131 The Foundations of the British Labour Party by Matthew Worley ISBN 9780754667315
Labour Representation Committee
thumb|upright|Keir Hardie, one of the Labour Party's founders and its first leader
In 1899, a Doncaster member of the Amalgamated Society of Railway Servants, Thomas R. Steels, proposed in his union branch that the Trades Union Congress call a special conference to bring together all left-wing organisations and form them into a single body that would sponsor Parliamentary candidates. The motion was passed at all stages by the TUC, and the proposed conference was held at the Memorial Hall on Farringdon Street on 26 and 27 February 1900. The meeting was attended by a broad spectrum of working-class and left-wing organisations—trades unions represented about one third of the membership of the TUC delegates.‘The formation of the Labour Party – Lessons for today’ Jim Mortimer, 2000; Jim Mortimer was a General Secretary of the Labour Party in the 1980s
After a debate, the 129 delegates passed Hardie's motion to establish "a distinct Labour group in Parliament, who shall have their own whips, and agree upon their policy, which must embrace a readiness to cooperate with any party which for the time being may be engaged in promoting legislation in the direct interests of labour." This created an association called the Labour Representation Committee (LRC), meant to coordinate attempts to support MPs sponsored by trade unions and represent the working-class population. It had no single leader, and in the absence of one, the Independent Labour Party nominee Ramsay MacDonald was elected as Secretary. He had the difficult task of keeping the various strands of opinions in the LRC united. The October 1900 "Khaki election" came too soon for the new party to campaign effectively; total expenses for the election only came to £33. Only 15 candidatures were sponsored, but two were successful; Keir Hardie in Merthyr Tydfil and Richard Bell in Derby.
Support for the LRC was boosted by the 1901 Taff Vale Case, a dispute between strikers and a railway company that ended with the union being ordered to pay £23,000 damages for a strike. The judgement effectively made strikes illegal since employers could recoup the cost of lost business from the unions. The apparent acquiescence of the Conservative Government of Arthur Balfour to industrial and business interests (traditionally the allies of the Liberal Party in opposition to the Conservative's landed interests) intensified support for the LRC against a government that appeared to have little concern for the industrial proletariat and its problems.
thumb|left|Labour Party Plaque from Caroone House, 14 Farringdon Street
In the 1906 election, the LRC won 29 seats—helped by a secret 1903 pact between Ramsay MacDonald and Liberal Chief Whip Herbert Gladstone that aimed to avoid splitting the opposition vote between Labour and Liberal candidates in the interest of removing the Conservatives from office.
In their first meeting after the election the group's Members of Parliament decided to adopt the name "The Labour Party" formally (15 February 1906). Keir Hardie, who had taken a leading role in getting the party established, was elected as Chairman of the Parliamentary Labour Party (in effect, the Leader), although only by one vote over David Shackleton after several ballots. In the party's early years the Independent Labour Party (ILP) provided much of its activist base as the party did not have individual membership until 1918 but operated as a conglomerate of affiliated bodies. The Fabian Society provided much of the intellectual stimulus for the party. One of the first acts of the new Liberal Government was to reverse the Taff Vale judgement.
The People's History Museum in Manchester holds the minutes of the first Labour Party meeting in 1906 and has them on display in the Main Galleries. Also within the museum is the Labour History Archive and Study Centre, which holds the collection of the Labour Party, with material ranging from 1900 to the present day.
Early years
The 1910 election saw 42 Labour MPs elected to the House of Commons, a significant victory since, a year before the election, the House of Lords had passed the Osborne judgment ruling that Trades Unions in the United Kingdom could no longer donate money to fund the election campaigns and wages of Labour MPs. The governing Liberals were unwilling to repeal this judicial decision with primary legislation. The height of Liberal compromise was to introduce a wage for Members of Parliament to remove the need to involve the Trade Unions. By 1913, faced with the opposition of the largest Trades Unions, the Liberal government passed the Trade Disputes Act to allow Trade Unions to fund Labour MPs once more.
During the First World War the Labour Party split between supporters and opponents of the conflict but opposition to the war grew within the party as time went on. Ramsay MacDonald, a notable anti-war campaigner, resigned as leader of the Parliamentary Labour Party and Arthur Henderson became the main figure of authority within the party. He was soon accepted into Prime Minister Asquith's war cabinet, becoming the first Labour Party member to serve in government.
Despite the mainstream Labour Party's support for the coalition, the Independent Labour Party was instrumental in opposing conscription through organisations such as the Non-Conscription Fellowship, while a Labour Party affiliate, the British Socialist Party, organised a number of unofficial strikes.
Arthur Henderson resigned from the Cabinet in 1917 amid calls for party unity, and was replaced by George Barnes. The growth in Labour's local activist base and organisation was reflected in the elections following the war, with the co-operative movement now providing its own resources to the Co-operative Party after the armistice. The Co-operative Party later reached an electoral agreement with the Labour Party.
With the Representation of the People Act 1918, almost all adult men (excepting only peers, criminals and lunatics) and most women over the age of thirty were given the right to vote, almost tripling the British electorate at a stroke, from 7.7 million in 1912 to 21.4 million in 1918. This set the scene for a surge in Labour representation in parliament.Rosemary Rees, Britain, 1890–1939 (2003), p. 200
The Communist Party of Great Britain was refused affiliation to the Labour Party between 1921 and 1923. Meanwhile, the Liberal Party declined rapidly, and the party also suffered a catastrophic split which allowed the Labour Party to gain much of the Liberals' support. With the Liberals thus in disarray, Labour won 142 seats in 1922, making it the second largest political group in the House of Commons and the official opposition to the Conservative government. After the election the now-rehabilitated Ramsay MacDonald was voted the first official leader of the Labour Party.
First Labour government, 1924
thumb|upright|Ramsay MacDonald: First Labour Prime Minister, 1924 and 1929–31
The 1923 general election was fought on the Conservatives' protectionist proposals but, although they got the most votes and remained the largest party, they lost their majority in parliament, necessitating the formation of a government supporting free trade. Thus, with the acquiescence of Asquith's Liberals, Ramsay MacDonald became the first ever Labour Prime Minister in January 1924, forming the first Labour government, despite Labour only having 191 MPs (less than a third of the House of Commons).
Because the government had to rely on the support of the Liberals it was unable to get any socialist legislation passed by the House of Commons. The only significant measure was the Wheatley Housing Act, which began a building programme of 500,000 homes for rental to working-class families. Legislation on education, unemployment and social insurance were also passed.
While there were no major labour strikes during his term, MacDonald acted swiftly to end those that did erupt. When the Labour Party executive criticised the government, he replied that, "public doles, Poplarism [local defiance of the national government], strikes for increased wages, limitation of output, not only are not Socialism, but may mislead the spirit and policy of the Socialist movement."
The government collapsed after only nine months when the Liberals voted for a Select Committee inquiry into the Campbell Case, a vote which MacDonald had declared to be a vote of confidence. The ensuing 1924 general election saw the publication, four days before polling day, of the Zinoviev letter, in which Moscow talked about a Communist revolution in Britain. The letter had little impact on the Labour vote—which held up. It was the collapse of the Liberal Party that led to the Conservative landslide. The Conservatives were returned to power although Labour increased its vote from 30.7% to a third of the popular vote, most Conservative gains being at the expense of the Liberals. However, many Labourites for years blamed their defeat on foul play (the Zinoviev letter), thereby, according to A. J. P. Taylor, misunderstanding the political forces at work and delaying needed reforms in the party.
In opposition MacDonald continued his policy of presenting the Labour Party as a moderate force. During the General Strike of 1926 the party opposed the general strike, arguing that the best way to achieve social reforms was through the ballot box. The leaders were also fearful of Communist influence orchestrated from Moscow.
The Party had a distinctive and suspicious foreign policy based on pacifism. Its leaders believed that peace was impossible because of capitalism, secret diplomacy, and the trade in armaments. That is, the party stressed material factors and ignored the psychological memories of the Great War, and the highly emotional tensions regarding nationalism and national boundaries.Henry R. Winkler, "The Emergence of a Labor Foreign Policy in Great Britain, 1918–1929." Journal of Modern History 28.3 (1956): 247–258, in JSTOR; Kenneth E. Miller, Socialism and Foreign Policy: Theory and Practice in Britain to 1931 (1967) ch 4–7.
Second Labour government, 1929–1931
thumb|upright|The original "Liberty" logo, in use until 1983
In the 1929 general election, the Labour Party became the largest in the House of Commons for the first time, with 287 seats and 37.1% of the popular vote. However MacDonald was still reliant on Liberal support to form a minority government. MacDonald went on to appoint Britain's first female cabinet minister, Margaret Bondfield, who was appointed Minister of Labour.
The government, however, soon found itself engulfed in crisis: the Wall Street Crash of 1929 and eventual Great Depression occurred soon after the government came to power, and the crisis hit Britain hard. By the end of 1930 unemployment had doubled to over two and a half million.Davies, A.J. (1996) To Build A New Jerusalem: The British Labour Party from Keir Hardie to Tony Blair, Abacus The government had no effective answers to the crisis. By the summer of 1931 a dispute over whether or not to reduce public spending had split the government.
As the economic situation worsened MacDonald agreed to form a "National Government" with the Conservatives and the Liberals. On 24 August 1931 MacDonald submitted the resignation of his ministers and led a small number of his senior colleagues in forming the National Government together with the other parties. This caused great anger among those within the Labour Party who felt betrayed by MacDonald's actions: he and his supporters were promptly expelled from the Labour Party and formed a separate National Labour Organisation. The remaining Labour Party MPs (led again by Arthur Henderson) and a few Liberals went into opposition. The ensuing 1931 general election resulted in overwhelming victory for the National Government and disaster for the Labour Party which won only 52 seats, 225 fewer than in 1929.
In 1931 Labour campaigned on opposition to public spending cuts, but found it difficult to defend the record of the party's former government and the fact that most of the cuts had been agreed before it fell. Historian Andrew Thorpe argues that Labour lost credibility by 1931 as unemployment soared, especially in coal, textiles, shipbuilding, and steel. The working class increasingly lost confidence in the ability of Labour to solve the most pressing problem.
The 2.5 million Irish Catholics in England and Scotland were a major factor in the Labour base in many industrial areas. The Catholic Church had previously tolerated the Labour Party, and denied that it represented true socialism. However, by 1930 the bishops had grown increasingly alarmed at Labour's policies toward Communist Russia, toward birth control and especially toward funding Catholic schools, and they warned Church members accordingly. The Catholic shift against Labour and in favour of the National government played a major role in Labour's losses.
1930s split
Arthur Henderson, elected in 1931 to succeed MacDonald, lost his seat in the 1931 general election. The only former Labour cabinet member who had retained his seat, the pacifist George Lansbury, accordingly became party leader.
The party experienced another split in 1932 when the Independent Labour Party, which for some years had been increasingly at odds with the Labour leadership, opted to disaffiliate from the Labour Party and embarked on a long, drawn-out decline.
Lansbury resigned as leader in 1935 after public disagreements over foreign policy. He was promptly replaced as leader by his deputy, Clement Attlee, who would lead the party for two decades. The party experienced a revival in the 1935 general election, winning 154 seats and 38% of the popular vote, the highest share that Labour had yet achieved.
As the threat from Nazi Germany increased, in the late 1930s the Labour Party gradually abandoned its pacifist stance and supported re-armament, largely due to the efforts of Ernest Bevin and Hugh Dalton who by 1937 had also persuaded the party to oppose Neville Chamberlain's policy of appeasement.
Wartime coalition, 1940–1945
The party returned to government in 1940 as part of the wartime coalition. When Neville Chamberlain resigned in the spring of 1940, incoming Prime Minister Winston Churchill decided to bring the other main parties into a coalition similar to that of the First World War. Clement Attlee was appointed Lord Privy Seal and a member of the war cabinet, eventually becoming the United Kingdom's first Deputy Prime Minister.
A number of other senior Labour figures also took up senior positions: the trade union leader Ernest Bevin, as Minister of Labour, directed Britain's wartime economy and allocation of manpower, the veteran Labour statesman Herbert Morrison became Home Secretary, Hugh Dalton was Minister of Economic Warfare and later President of the Board of Trade, while A. V. Alexander resumed the role he had held in the previous Labour Government as First Lord of the Admiralty.
Attlee government, 1945–1951
thumb|upright|Clement Attlee: Labour Prime Minister, 1945–51
thumb|upright|Aneurin Bevan speaking in October 1952
At the end of the war in Europe, in May 1945, Labour resolved not to repeat the Liberals' error of 1918, and promptly withdrew from government, on trade union insistence, to contest the 1945 general election in opposition to Churchill's Conservatives. Surprising many observers, Labour won a formidable victory, winning just under 50% of the vote with a majority of 159 seats.
Although Clement Attlee was no great radical himself, his government proved one of the most radical British governments of the 20th century, enacting Keynesian economic policies, presiding over a policy of nationalising major industries and utilities including the Bank of England, coal mining, the steel industry, electricity, gas, and inland transport (including railways, road haulage and canals). It developed and implemented the "cradle to grave" welfare state conceived by the economist William Beveridge. To this day, most people in the United Kingdom see the 1948 creation of Britain's publicly funded National Health Service (NHS) under health minister Aneurin Bevan as Labour's proudest achievement. Attlee's government also began the process of dismantling the British Empire when it granted independence to India and Pakistan in 1947, followed by Burma (Myanmar) and Ceylon (Sri Lanka) the following year. At a secret meeting in January 1947, Attlee and six cabinet ministers, including Foreign Secretary Ernest Bevin, decided to proceed with the development of Britain's nuclear weapons programme, in opposition to the pacifist and anti-nuclear stances of a large element inside the Labour Party.
Labour went on to win the 1950 general election, but with a much reduced majority of five seats. Soon afterwards, defence became a divisive issue within the party, especially defence spending (which reached a peak of 14% of GDP in 1951 during the Korean War),Clark, Sir George, Illustrated History Of Great Britain, (1987) Octopus Books straining public finances and forcing savings elsewhere. The Chancellor of the Exchequer, Hugh Gaitskell, introduced charges for NHS dentures and spectacles, causing Bevan, along with Harold Wilson (then President of the Board of Trade), to resign over the dilution of the principle of free treatment on which the NHS had been established.
In the 1951 general election, Labour narrowly lost to Churchill's Conservatives, despite receiving the larger share of the popular vote – its highest ever vote numerically. Most of the changes introduced by the 1945–51 Labour government were accepted by the Conservatives and became part of the "post-war consensus" that lasted until the late 1970s. Food and clothing rationing, however, still in place since the war, were swiftly relaxed, then abandoned from about 1953.
Post-war consensus, 1951–1964
Following the defeat of 1951 the party spent 13 years in opposition. The party suffered an ideological split, while the postwar economic recovery and the social effects of Attlee's reforms made the public broadly content with the Conservative governments of the time. Attlee remained as leader until his retirement in 1955.
His replacement, Hugh Gaitskell, associated with the right wing of the party, struggled to deal with internal party divisions in the late 1950s and early 1960s, particularly over Clause IV of the Labour Party constitution, which was viewed as Labour's commitment to nationalisation and which Gaitskell wanted scrapped, and Labour lost the 1959 general election. In 1963, Gaitskell's sudden death from a heart attack made way for Harold Wilson to lead the party.
Wilson government, 1964–1970
thumb|upright|Harold Wilson: Labour Prime Minister, 1964–70 and 1974–76
A downturn in the economy and a series of scandals in the early 1960s (the most notorious being the Profumo affair) had engulfed the Conservative government by 1963. The Labour Party returned to government with a 4-seat majority under Wilson in the 1964 general election but increased its majority to 96 in the 1966 general election.
Wilson's government was responsible for a number of sweeping social and educational reforms under the leadership of Home Secretary Roy Jenkins, such as the abolition of the death penalty in 1965, the legalisation of abortion and homosexuality (initially only for men aged 21 or over, and only in England and Wales) in 1967, and the abolition of theatre censorship in 1968. Comprehensive education was expanded and the Open University created. However, Wilson's government had inherited a large trade deficit that led to a currency crisis and ultimately a doomed attempt to stave off devaluation of the pound. Labour went on to lose the 1970 general election to the Conservatives under Edward Heath.
Spell in opposition, 1970–1974
After losing the 1970 general election, Labour returned to opposition, but retained Harold Wilson as Leader. Heath's government soon ran into trouble over Northern Ireland and a dispute with miners in 1973 which led to the "three-day week". The 1970s proved a difficult time to be in government for both the Conservatives and Labour due to the 1973 oil crisis which caused high inflation and a global recession.
The Labour Party was returned to power again under Wilson a few weeks after the February 1974 general election, forming a minority government with the support of the Ulster Unionists. The Conservatives were unable to form a government alone as they had fewer seats despite receiving more votes numerically. It was the first general election since 1924 in which both main parties had received less than 40% of the popular vote and the first of six successive general elections in which Labour failed to reach 40% of the popular vote. In a bid to gain a majority, a second election was soon called for October 1974 in which Labour, still with Harold Wilson as leader, won a slim majority of three, gaining just 18 seats taking its total to 319.
Majority to minority, 1974–1979
For much of its time in office the Labour government struggled with serious economic problems and a precarious majority in the Commons, while the party's internal dissent over Britain's membership of the European Economic Community, which Britain had entered under Edward Heath in 1972, led in 1975 to a national referendum on the issue in which two thirds of the public supported continued membership.
Harold Wilson's personal popularity remained reasonably high but he unexpectedly resigned as Prime Minister in 1976 citing health reasons, and was replaced by James Callaghan. The Wilson and Callaghan governments of the 1970s tried to control inflation (which reached 23.7% in 1975) by a policy of wage restraint. This was fairly successful, reducing inflation to 7.4% by 1978. However it led to increasingly strained relations between the government and the trade unions.
thumb|left|upright|James Callaghan: Labour Prime Minister, 1976–79
Fear of advances by the nationalist parties, particularly in Scotland, led to the suppression of a report from Scottish Office economist Gavin McCrone that suggested that an independent Scotland would be "chronically in surplus". By 1977 by-election losses and defections to the breakaway Scottish Labour Party left Callaghan heading a minority government, forced to trade with smaller parties in order to govern. An arrangement negotiated in 1977 with Liberal leader David Steel, known as the Lib–Lab pact, ended after one year. Deals were then forged with various small parties including the Scottish National Party and the Welsh nationalist Plaid Cymru, prolonging the life of the government.
The nationalist parties, in turn, demanded devolution to their respective constituent countries in return for their supporting the government. When referendums for Scottish and Welsh devolution were held in March 1979 Welsh devolution was rejected outright while the Scottish referendum returned a narrow majority in favour without reaching the required threshold of 40% support. When the Labour government duly refused to push ahead with setting up the proposed Scottish Assembly, the SNP withdrew its support for the government: this finally brought the government down as it triggered a vote of confidence in Callaghan's government that was lost by a single vote on 28 March 1979, necessitating a general election.
Callaghan had been widely expected to call a general election in the autumn of 1978 when most opinion polls showed Labour to have a narrow lead. However he decided to extend his wage restraint policy for another year hoping that the economy would be in a better shape for a 1979 election. But during the winter of 1978–79 there were widespread strikes among lorry drivers, railway workers, car workers and local government and hospital workers in favour of higher pay-rises that caused significant disruption to everyday life. These events came to be dubbed the "Winter of Discontent".
In the 1979 general election Labour was heavily defeated by the Conservatives now led by Margaret Thatcher. The number of people voting Labour hardly changed between February 1974 and 1979 but the Conservative Party achieved big increases in support in the Midlands and South of England, benefiting from both a surge in turnout and votes lost by the ailing Liberals.
Internal conflict and opposition, 1979–1997
After its defeat in the 1979 general election the Labour Party underwent a period of internal rivalry between the left represented by Tony Benn, and the right represented by Denis Healey. The election of Michael Foot as leader in 1980, and the leftist policies he espoused, such as unilateral nuclear disarmament, leaving the European Economic Community and NATO, closer governmental influence in the banking system, the creation of a national minimum wage and a ban on fox hunting led in 1981 to four former cabinet ministers from the right of the Labour Party (Shirley Williams, William Rodgers, Roy Jenkins and David Owen) forming the Social Democratic Party. Benn was only narrowly defeated by Healey in a bitterly fought deputy leadership election in 1981 after the introduction of an electoral college intended to widen the voting franchise to elect the leader and their deputy. By 1982, the National Executive Committee had concluded that the entryist Militant tendency group were in contravention of the party's constitution. The Militant newspaper's five member editorial board were expelled on 22 February 1983.
The Labour Party was defeated heavily in the 1983 general election, winning only 27.6% of the vote, its lowest share since 1918, and receiving only half a million votes more than the SDP-Liberal Alliance, which leader Michael Foot condemned for "siphoning" Labour support and enabling the Conservatives to greatly increase their majority of parliamentary seats.
thumb|upright|left|Neil Kinnock, leader of the party in opposition, 1983–92
Foot resigned and was replaced as leader by Neil Kinnock, with Roy Hattersley as his deputy. The new leadership progressively dropped unpopular policies. The miners strike of 1984–85 over coal mine closures, for which miners' leader Arthur Scargill was blamed, and the Wapping dispute led to clashes with the left of the party, and negative coverage in most of the press. Tabloid vilification of the so-called loony left continued to taint the parliamentary party by association from the activities of "extra-parliamentary" militants in local government.
The alliances which campaigns such as Lesbians and Gays Support the Miners forged between lesbian, gay, bisexual, and transgender (LGBT) and labour groups, as well as the Labour Party itself, also proved to be an important turning point in the progression of LGBT issues in the UK. At the 1985 Labour Party conference in Bournemouth, a resolution committing the party to support LGBT equality rights passed for the first time due to block voting support from the National Union of Mineworkers.
Labour improved its performance in 1987, gaining 20 seats and so reducing the Conservative majority from 143 to 102. Labour was now firmly re-established as the second political party in Britain, as the Alliance had once again failed to make a breakthrough in terms of seats. A merger of the SDP and Liberals formed the Liberal Democrats. Following the 1987 election, the National Executive Committee resumed disciplinary action against members of Militant, who remained in the party, leading to further expulsions of their activists and the two MPs who supported the group.
In November 1990 following a contested leadership election, Margaret Thatcher resigned as leader of the Conservative Party and was succeeded as leader and Prime Minister by John Major. Most opinion polls had shown Labour comfortably ahead of the Tories for more than a year before Thatcher's resignation, with the fall in Tory support blamed largely on her introduction of the unpopular poll tax, combined with the fact that the economy was sliding into recession at the time.
thumb|upright|right|Labour Party logo under Kinnock, Smith and Blair's leaderships
The change of leader in the Tory government saw a turnaround in support for the Tories, who regularly topped the opinion polls throughout 1991 although Labour regained the lead more than once.
The "yo-yo" in the opinion polls continued into 1992, though after November 1990 any Labour lead in the polls was rarely sufficient for a majority. Major resisted Kinnock's calls for a general election throughout 1991. Kinnock campaigned on the theme "It's Time for a Change", urging voters to elect a new government after more than a decade of unbroken Conservative rule. However, the Conservatives themselves had undergone a dramatic change in the change of leader from Thatcher to Major, at least in terms of style if not substance. From the outset, it was clearly a well-received change, as Labour's 14-point lead in the November 1990 "Poll of Polls" was replaced by an 8% Tory lead a month later.
The 1992 general election was widely tipped to result in a hung parliament or a narrow Labour majority, but in the event the Conservatives were returned to power, though with a much reduced majority of 21.1992: Tories win again against odds BBC News, 5 April 2005 Despite the increased number of seats and votes, it was still a deeply disappointing result for supporters of the Labour Party. For the first time in over 30 years there was serious doubt among the public and the media as to whether Labour could ever return to government.
Kinnock then resigned as leader and was replaced by John Smith. Once again the battle erupted between the old guard on the party's left and those identified as "modernisers". The old guard argued that trends showed they were regaining strength under Smith's strong leadership. Meanwhile, the breakaway SDP merged with the Liberal Party. The new Liberal Democrats seemed to pose a major threat to the Labour base. Tony Blair (the Shadow Home Secretary) had an entirely different vision. Blair, the leader of the "modernising" faction (Blairites), argued that the long-term trends had to be reversed, arguing that the party was too locked into a base that was shrinking, since it was based on the working-class, on trade unions, and on residents of subsidised council housing. Blairites argued that the rapidly growing middle class was largely ignored, as well as more ambitious working-class families. It was said that they aspired to become middle-class, but accepted the Conservative argument that Labour was holding ambitious people back, with its leveling down policies. They increasingly saw Labour in a negative light, regarding higher taxes and higher interest rates. In order to present a fresh face and new policies to the electorate, New Labour needed more than fresh leaders; it had to jettison outdated policies, argued the modernisers.David Butler and Dennis Kavanagh, The British general election of 1997 (1997) pp 46–67. The first step was procedural, but essential. Calling on the slogan, "One Member, One Vote" Blair (with some help from Smith) defeated the union element and ended block voting by leaders of labour unions. Blair and the modernisers called for radical adjustment of Party goals by repealing "Clause IV", the historic commitment to nationalisation of industry. This was achieved in 1995.
The Black Wednesday economic disaster in September 1992 left the Conservative government's reputation for monetary excellence in tatters, and by the end of that year Labour had a comfortable lead over the Tories in the opinion polls. Although the recession was declared over in April 1993 and a period of strong and sustained economic growth followed, coupled with a relatively swift fall in unemployment, the Labour lead in the opinion polls remained strong. However, Smith died from a heart attack in May 1994.
"New Labour" government, 1997–2010
Tony Blair continued to move the party further to the centre, abandoning the largely symbolic Clause Four at the 1995 mini-conference in a strategy to increase the party's appeal to "middle England". More than a simple re-branding, however, the project would draw upon the Third Way strategy, informed by the thoughts of the British sociologist Anthony Giddens.
"New Labour" was first termed as an alternative branding for the Labour Party, dating from a conference slogan first used by the Labour Party in 1994, which was later seen in a draft manifesto published by the party in 1996, called New Labour, New Life For Britain. It was a continuation of the trend that had begun under the leadership of Neil Kinnock. "New Labour" as a name has no official status, but remains in common use to distinguish modernisers from those holding to more traditional positions, normally referred to as "Old Labour".
The Labour Party won the 1997 general election with a landslide majority of 179; it was the largest Labour majority ever, and the largest swing to a political party achieved since 1945. Over the next decade, a wide range of progressive social reforms were enacted, with millions lifted out of poverty during Labour's time in office largely as a result of various tax and benefit reforms.
Among the early acts of Blair's government were the establishment of the national minimum wage, the devolution of power to Scotland, Wales and Northern Ireland, major changes to the regulation of the banking system, and the re-creation of a citywide government body for London, the Greater London Authority, with its own elected-Mayor.
Combined with a Conservative opposition that had yet to organise effectively under William Hague, and the continuing popularity of Blair, Labour went on to win the 2001 election with a similar majority, dubbed the "quiet landslide" by the media. In 2003 Labour introduced tax credits, government top-ups to the pay of low-wage workers.
A perceived turning point was when Blair controversially allied himself with US President George W. Bush in supporting the Iraq War, which caused him to lose much of his political support. The UN Secretary-General, among many, considered the war illegal and a violation of the UN Charter. The Iraq War was deeply unpopular in most western countries, with Western governments divided in their support and under pressure from worldwide popular protests. The decisions that led up to the Iraq war and its subsequent conduct are currently the subject of Sir John Chilcot's Iraq Inquiry.
In the 2005 general election, Labour was re-elected for a third term, but with a reduced majority of 66.
Blair announced in September 2006 that he would quit as leader within the year, though he had been under pressure to quit earlier than May 2007 in order to get a new leader in place before the May elections which were expected to be disastrous for Labour.I will quit within a year – Blair BBC News, 7 September 2007 In the event, the party did lose power in Scotland to a minority Scottish National Party government at the 2007 elections and, shortly after this, Blair resigned as Prime Minister and was replaced by his Chancellor, Gordon Brown. Although the party experienced a brief rise in the polls after this, its popularity soon slumped to its lowest level since the days of Michael Foot. During May 2008, Labour suffered heavy defeats in the London mayoral election, local elections and the loss in the Crewe and Nantwich by-election, culminating in the party registering its worst ever opinion poll result since records began in 1943, of 23%, with many citing Brown's leadership as a key factor. Membership of the party also reached a low ebb, falling to 156,205 by the end of 2009: over 40 per cent of the 405,000 peak reached in 1997 and thought to be the lowest total since the party was founded.John Marshall: Membership of UK political parties; House of Commons, SN/SG/5125; 2009, page 9
Finance proved a major problem for the Labour Party during this period; a "cash for peerages" scandal under Blair resulted in the drying up of many major sources of donations. Declining party membership, partially due to the reduction of activists' influence upon policy-making under the reforms of Neil Kinnock and Blair, also contributed to financial problems. Between January and March 2008, the Labour Party received just over £3 million in donations and were £17 million in debt; compared to the Conservatives' £6 million in donations and £12 million in debt.
In the 2010 general election on 6 May that year, Labour with 29.0% of the vote won the second largest number of seats (258). The Conservatives with 36.5% of the vote won the largest number of seats (307), but no party had an overall majority, meaning that Labour could still remain in power if they managed to form a coalition with at least one smaller party.UK election results: data for every candidate in every seat The Guardian (London), 7 May 2010 However, the Labour Party would have had to form a coalition with more than one other smaller party to gain an overall majority; anything less would result in a minority government. On 10 May 2010, after talks to form a coalition with the Liberal Democrats broke down, Brown announced his intention to stand down as Leader before the Labour Party Conference but a day later resigned as both Prime Minister and party leader.
Opposition, 2010–present
Harriet Harman became the Leader of the Opposition and acting Leader of the Labour Party following the resignation of Gordon Brown on 11 May 2010, pending a leadership election subsequently won by Ed Miliband. Miliband emphasised "responsible capitalism" and greater state intervention to change the balance of the UK economy away from financial services. Tackling vested interests and opening up closed circles in British society were also themes he returned to a number of times. Miliband also argued for greater regulation on banks and the energy companies.
The Parliamentary Labour Party voted to abolish Shadow Cabinet elections at a meeting on 5 July 2011, ratified by the National Executive Committee and Party Conference. Henceforth the leader of the party chose the Shadow Cabinet members.
The party's performance held up in local elections in 2012, with Labour consolidating its position in the North and Midlands while also regaining some ground in Southern England. In Wales the party enjoyed considerable success, regaining control of most of the Welsh councils lost in 2008, including the capital city, Cardiff. In Scotland, Labour held overall control of Glasgow City Council despite some predictions to the contrary, and also enjoyed a +3.26% swing across Scotland. In London, results were mixed for the party; Ken Livingstone lost the election for Mayor of London, but the party gained its highest ever representation in the Greater London Authority in the concurrent assembly election.
thumb|upright|Jeremy Corbyn, current leader of the party
On 1 March 2014, at a special conference the party reformed internal Labour election procedures, including replacing the electoral college system for selecting new leaders with a "one member, one vote" system following the recommendation of a review by former general-secretary Ray Collins. Mass membership would be encouraged by allowing "registered supporters" to join at a low cost, as well as full membership. Members from the trade unions would also have to explicitly "opt in" rather than "opt out" of paying a political levy to Labour.
The party edged out the Conservatives in the May 2014 European parliamentary elections, winning 20 seats to the Conservatives' 19. However, the UK Independence Party won 24 seats. Labour also won a majority of seats in the local council elections of 2014, gaining 324 more councillors than it had before the election.
In September 2014, Shadow Chancellor Ed Balls outlined his plans to cut the government's current account deficit, and the party carried these plans into the 2015 general election. Whereas Conservatives campaigned for a surplus on all government spending, including investment, by 2018/19, Labour stated it would balance the budget, excluding investment, by 2020.
The 2015 general election resulted in a net loss of seats throughout Great Britain, with Labour representation falling to 232 seats in the House of Commons. The party lost 40 of its 41 seats in Scotland in the face of record-breaking swings to the Scottish National Party; the scale of the decline in Labour's support was much greater than what had occurred at the 2011 elections for the Scottish Parliament. Though Labour gained more than 20 seats in England and Wales, mostly from the Liberal Democrats but also from the Conservative Party, it lost more seats to Conservative challengers, including that of Ed Balls, resulting in a net loss overall.
The day after the 7 May 2015 election, Miliband resigned as party leader. Harriet Harman again took charge as interim leader. Following a leadership election, Jeremy Corbyn was announced as the new party leader on 12 September 2015. Corbyn, then a member of the Socialist Campaign Group and a fixture of the party's hard left, was considered little more than a fringe hopeful when the contest began, but benefited from a large influx of new members as well as the registration of significant numbers of the new affiliated and registered classes of voting supporters introduced under Miliband. Corbyn received the backing of only 16 of the party's MPs. Membership numbers continued to climb after the start of Corbyn's leadership.
Tensions soon developed in the parliamentary party over Corbyn's leadership. Following the referendum on EU membership more than two dozen members of the Shadow Cabinet resigned in late June 2016, and a no-confidence vote was supported by 172 MPs against 40 supporting Corbyn. On 11 July 2016 an official leadership election was called as Angela Eagle launched a challenge against Corbyn. She was soon joined by rival challenger Owen Smith, prompting Eagle to withdraw on 19 July 2016 in order to ensure there was only one challenger on the ballot. On 24 September 2016 Corbyn retained leadership of the party with an increased share of the vote. By the end of the contest Labour's membership had grown to more than 500,000, making it the largest political party in terms of membership in Western Europe.
Ideology
The Labour Party is considered to be left of centre. It was initially formed as a means for the trade union movement to establish political representation for itself at Westminster. It only gained a "socialist" commitment with the original party constitution of 1918. That "socialist" element, the original Clause IV, was seen by its strongest advocates as a straightforward commitment to the "common ownership", or nationalisation, of the "means of production, distribution and exchange". Although about a third of British industry was taken into public ownership after the Second World War, and remained so until the 1980s, the right of the party was questioning the validity of expanding on this objective by the late 1950s. Influenced by Anthony Crosland's book, The Future of Socialism (1956), the circle around party leader Hugh Gaitskell felt that the commitment was no longer necessary. While an attempt to remove Clause IV from the party constitution in 1959 failed, Tony Blair and the "modernisers" saw the issue as putting off potential voters,Martin Daunton "The Labour Party and Clause Four 1918–1995", History Review 1995 (History Today website) and were successful thirty-five years later,Philip Gould The Unfinished Revolution: How New Labour Changed British Politics Forever, London: Hachette digital edition, 2011, p.30 (originally published by Little, Brown, 1998) with only limited opposition from senior figures in the party.John Rentoul "'Defining moment' as Blair wins backing for Clause IV", The Independent, 14 March 1995
Party electoral manifestos have not contained the term socialism since 1992. The new version of Clause IV, though affirming a commitment to democratic socialism, no longer definitely commits the party to public ownership of industry: in its place it advocates "the enterprise of the market and the rigour of competition" along with "high quality public services ... either owned by the public or accountable to them."
Historically, influenced by Keynesian economics, the party favoured government intervention in the economy, and the redistribution of wealth. Taxation was seen as a means to achieve a "major redistribution of wealth and income" in the October 1974 election manifesto. The party also desired increased rights for workers, and a welfare state including publicly funded healthcare.
From the late-1980s onwards, the party adopted free market policies, leading many observers to describe the Labour Party as social democratic or the Third Way, rather than democratic socialist. Other commentators go further and argue that traditional social democratic parties across Europe, including the British Labour Party, have been so deeply transformed in recent years that it is no longer possible to describe them ideologically as "social democratic", and claim that this ideological shift has put new strains on the party's traditional relationship with the trade unions.
Historically within the party, differentiation was made between the "soft left" and the "hard left", with the former embracing more moderately social democratic views while the hard left subscribed to a strongly socialist, even Marxist, ideology. Members on the hard left were often disparaged as the "loony left", particularly in the popular media. The term "hard left" was sometimes used in the 1980s to describe Trotskyist groups such as the Militant tendency, Socialist Organiser and Socialist Action. In more recent times, Members of Parliament in the Socialist Campaign Group and the Labour Representation Committee are seen as constituting a hard left in contrast to a soft left represented by organisations such as Compass and the magazine Tribune.
Symbols
thumb|upright=0.6|The red flag, originally the official flag and symbol of the Labour party
Labour has long been identified with red, a political colour traditionally affiliated with socialism and the labour movement. Prior to the red flag logo, the party had used a modified version of the classic 1924 shovel, torch and quill emblem. In 1924 a brand-conscious Labour leadership had devised a competition, inviting supporters to design a logo to replace the 'polo mint'-like motif that had previously appeared on party literature. The winning entry, emblazoned with the word ‘Liberty’ over a design incorporating a torch, shovel and quill symbol, was popularised through its sale, in badge form, for a shilling.
The party conference in 1931 passed a motion "That this conference adopts Party Colours, which should be uniform throughout the country, colours to be red and gold"."Labour Party Annual Conference Report", 1931, p. 233. Since the party's inception, the red flag has been Labour's official symbol; the flag has been associated with socialism and revolution ever since the 1789 French Revolution and the revolutions of 1848. The red rose, a symbol of social democracy, was adopted as the party symbol in 1986 as part of a rebranding exercise and is now incorporated into the party logo.
The red flag inspired the composition of "The Red Flag", the official party anthem since its inception, which is sung at the end of party conferences and on various occasions, such as in Parliament in February 2006 to mark the centenary of the Labour Party's founding. During the New Labour years attempts were made to play down the role of the song; however, it still remains in use.
Constitution and structure
The Labour Party is a membership organisation consisting of Constituency Labour Parties, affiliated trade unions, socialist societies and the Co-operative Party, with which it has an electoral agreement. Members who are elected to parliamentary positions take part in the Parliamentary Labour Party (PLP) and European Parliamentary Labour Party (EPLP).
The party's decision-making bodies on a national level formally include the National Executive Committee (NEC), Labour Party Conference and National Policy Forum (NPF)—although in practice the Parliamentary leadership has the final say on policy. The 2008 Labour Party Conference was the first at which affiliated trade unions and Constituency Labour Parties did not have the right to submit motions on contemporary issues that would previously have been debated. Labour Party conferences now include more "keynote" addresses, guest speakers and question-and-answer sessions, while specific discussion of policy now takes place in the National Policy Forum.
The Labour Party is an unincorporated association without a separate legal personality, and the Labour Party Rule Book legally regulates the organisation and the relationship with members. The General Secretary represents the party on behalf of the other members of the Labour Party in any legal matters or actions.
Membership and registered supporters
thumb|upright=1.5|A graph showing Labour Party individual membership, excluding affiliated members and supporters, 1928 to June 2016
In August 2015, prior to the 2015 leadership election, the Labour Party reported 292,505 full members, 147,134 affiliated supporters (mostly from affiliated trade unions and socialist societies) and 110,827 registered supporters; a total of about 550,000 members and supporters. As of January 2016, the party had approximately 380,000 members.'Revealed: how Jeremy Corbyn has reshaped the Labour party'.'Membership jumped from 201,293 on 6 May last year, the day before the general election, to 388,407 on 10 January'.The Guardian [online], published 13/01/16, sourced 13/01/16. Author – Ewen MacAskill.
For many years Labour held to a policy of not allowing residents of Northern Ireland to apply for membership, ca. 1999. Retrieved 31 March 2007. "Residents of Northern Ireland are not eligible for membership." instead supporting the Social Democratic and Labour Party (SDLP) which informally takes the Labour whip in the House of Commons.Understanding Ulster by Antony Alcock, Ulster Society Publications, 1997. Chapter II: The Unloved, Unwanted Garrison. Via Conflict Archive on the Internet. Retrieved 31 October 2008. The 2003 Labour Party Conference accepted legal advice that the party could not continue to prohibit residents of the province joining, and whilst the National Executive has established a regional constituency party it has not yet agreed to contest elections there. In December 2015 a meeting of the members of the Labour Party in Northern Ireland decided unanimously to contest the elections for the Northern Ireland Assembly held in May 2016.
Trade union link
thumb|left|upright=0.8|Unite the Union showing their support for the Labour party on their Leeds offices during the 2015 general election
The Trade Union and Labour Party Liaison Organisation is the coordinating structure that supports the policy and campaign activities of affiliated union members within the Labour Party at the national, regional and local level.
As it was founded by the unions to represent the interests of working-class people, Labour's link with the unions has always been a defining characteristic of the party. In recent years this link has come under increasing strain, with the RMT being expelled from the party in 2004 for allowing its branches in Scotland to affiliate to the left-wing Scottish Socialist Party.RMT 'breached' Labour party rules BBC News, 27 January 2004 Other unions have also faced calls from members to reduce financial support for the PartyLabour's link to unions in danger BBC News, 16 June 2004 and seek more effective political representation for their views on privatisation, public spending cuts and the anti-trade union laws. Unison and GMB have both threatened to withdraw funding from constituency MPs and Dave Prentis of UNISON has warned that the union will write "no more blank cheques" and is dissatisfied with "feeding the hand that bites us". Union funding was redesigned in 2013 after the Falkirk candidate-selection controversy. The Fire Brigades Union, which "severed links" with Labour in 2004, re-joined the party under Corbyn's leadership in 2015.
European and international affiliation
The Labour Party is a founder member of the Party of European Socialists (PES). The European Parliamentary Labour Party's 20 MEPs are part of the Socialists and Democrats (S&D), the second largest group in the European Parliament. The Labour Party is represented by Emma Reynolds in the PES Presidency.
The party was a member of the Labour and Socialist International between 1923 and 1940.Kowalski, Werner. Geschichte der sozialistischen arbeiter-internationale: 1923 – 1940 Berlin: Dt. Verl. d. Wissenschaften, 1985 Since 1951 the party has been a member of the Socialist International, which was founded thanks to the efforts of Clement Attlee's Labour leadership. However, in February 2013, the Labour Party NEC decided to downgrade participation to observer membership status, "in view of ethical concerns, and to develop international co-operation through new networks". Labour was a founding member of the Progressive Alliance, an international grouping established in co-operation with the Social Democratic Party of Germany and other social-democratic parties on 22 May 2013.Sozialdemokratische Parteien gründen neues Bündnis | Aktuell Welt | DW.DE | 22.05.2013
Electoral performance
Parliament of the United Kingdom
Election | Votes | Vote share | Seats ± | Outcome
1900 | 62,698 | 1.8% | – | Conservative majority
1906 | 321,663 | 5.7% | 27 | Liberal majority
Jan-1910 | 505,657 | 7.6% | 11 | Liberal minority
Dec-1910 | 371,802 | 7.1% | 2 | Liberal minority
1918 | 2,245,777 | 21.5% | 15 | Coalition majority
1922 | 4,076,665 | 29.7% | 85 | Conservative majority
1923 | 4,267,831 | 30.7% | 49 | Labour minority
1924 | 5,281,626 | 33.3% | 40 | Conservative majority
1929 | 8,048,968 | 37.1% | 136 | Labour minority
1931 | 6,339,306 | 30.8% | 235 | National Government majority
1935 | 7,984,988 | 38.0% | 102 | National Government majority
1945 | 11,967,746 | 49.7% | 239 | Labour majority
1950 | 13,266,176 | 46.1% | 78 | Labour majority
1951 | 13,948,883 | 48.8% | 20 | Conservative majority
1955 | 12,405,254 | 46.4% | 18 | Conservative majority
1959 | 12,216,172 | 43.8% | 19 | Conservative majority
1964 | 12,205,808 | 44.1% | 59 | Labour majority
1966 | 13,096,629 | 48.0% | 47 | Labour majority
1970 | 12,208,758 | 43.1% | 76 | Conservative majority
Feb-1974 | 11,645,616 | 37.2% | 13 | Labour minority
Oct-1974 | 11,457,079 | 39.2% | 18 | Labour majority
1979 | 11,532,218 | 36.9% | 50 | Conservative majority
1983 | 8,456,934 | 27.6% | 60 | Conservative majority
1987 | 10,029,807 | 30.8% | 20 | Conservative majority
1992 | 11,560,484 | 34.4% | 42 | Conservative majority
1997 | 13,518,167 | 43.2% | 148 | Labour majority
2001 | 10,724,953 | 40.7% | 6 | Labour majority
2005 | 9,562,122 | 35.3% | 57 | Labour majority
2010 | 8,601,441 | 29.1% | 98 | Conservative–Lib Dem majority
2015 | 9,339,818 | 30.5% | 26 | Conservative majority
The first election held under the Representation of the People Act 1918, in which all men over 21 and most women over the age of 30 could vote, creating a much larger electorate
The first election under universal suffrage in which all women aged over 21 could vote
Franchise extended to all 18- to 20-year-olds under the Representation of the People Act 1969
Leadership
Leaders of the Labour Party since 1906
Keir Hardie, 1906–08
Arthur Henderson, 1908–10
George Nicoll Barnes, 1910–11
Ramsay MacDonald, 1911–14
Arthur Henderson, 1914–17
William Adamson, 1917–21
John Robert Clynes, 1921–22
Ramsay MacDonald, 1922–31
Arthur Henderson, 1931–32
George Lansbury, 1932–35
Clement Attlee, 1935–55
Hugh Gaitskell, 1955–63
George Brown, 1963 (acting)
Harold Wilson, 1963–76
James Callaghan, 1976–80
Michael Foot, 1980–83
Neil Kinnock, 1983–92
John Smith, 1992–94
Margaret Beckett, 1994 (acting)
Tony Blair, 1994–2007
Gordon Brown, 2007–2010
Harriet Harman, 2010 (acting)
Ed Miliband, 2010–2015
Harriet Harman, 2015 (acting)
Jeremy Corbyn, 2015–present
Deputy Leaders of the Labour Party since 1922
John Robert Clynes, 1922–32
William Graham, 1931–32
Clement Attlee, 1932–35
Arthur Greenwood, 1935–45
Herbert Morrison, 1945–55
Jim Griffiths, 1955–59
Aneurin Bevan, 1959–60
George Brown, 1960–70
Roy Jenkins, 1970–72
Edward Short, 1972–76
Michael Foot, 1976–80
Denis Healey, 1980–83
Roy Hattersley, 1983–92
Margaret Beckett, 1992–94
John Prescott, 1994–2007
Harriet Harman, 2007–15
Tom Watson, 2015–present
Leaders in the House of Lords since 1924
Richard Haldane, 1st Viscount Haldane, 1924–28
Charles Cripps, 1st Baron Parmoor, 1928–31
Arthur Ponsonby, 1st Baron Ponsonby of Shulbrede, 1931–35
Harry Snell, 1st Baron Snell, 1935–40
Christopher Addison, 1st Viscount Addison, 1940–52
William Jowitt, 1st Earl Jowitt, 1952–55
Albert Victor Alexander, 1st Earl Alexander of Hillsborough, 1955–64
Frank Pakenham, 7th Earl of Longford, 1964–68
Edward Shackleton, Baron Shackleton, 1968–74
Malcolm Shepherd, 2nd Baron Shepherd, 1974–76
Fred Peart, Baron Peart, 1976–82
Cledwyn Hughes, Baron Cledwyn of Penrhos, 1982–92
Ivor Richard, Baron Richard, 1992–98
Margaret Jay, Baroness Jay of Paddington, 1998–2001
Gareth Williams, Baron Williams of Mostyn, 2001–2003
Valerie Amos, Baroness Amos, 2003–2007
Catherine Ashton, Baroness Ashton of Upholland, 2007–2008
Janet Royall, Baroness Royall of Blaisdon, 2008–2015
Angela Smith, Baroness Smith of Basildon, 2015–present
Labour Prime Ministers
Name | Country of birth | Periods in office
Ramsay MacDonald | Scotland | 1924; 1929–1931 (First and Second MacDonald Ministry)
Clement Attlee | England | 1945–1950; 1950–1951 (Attlee Ministry)
Harold Wilson | England | 1964–1966; 1966–1970; 1974; 1974–1976 (First and Second Wilson Ministry)
James Callaghan | England | 1976–1979 (Callaghan Ministry)
Tony Blair | Scotland | 1997–2001; 2001–2005; 2005–2007 (Blair Ministry)
Gordon Brown | Scotland | 2007–2010 (Brown Ministry)
Current elected MPs
232 Labour MPs were elected at the 2015 election. The MPs as of June 2015 are:
Member of Parliament Constituency First elected Notes Diane Abbott Hackney North and Stoke Newington 1987 Debbie Abrahams Oldham East and Saddleworth 2011 Heidi Alexander Lewisham East 2010 Rushanara Ali Bethnal Green and Bow 2010 First person of Bangladeshi origin to be elected to the House of Commons, and one of the first three Muslim women to be elected as a Member of Parliament. Graham Allen Nottingham North 1987 David Anderson Blaydon 2005 Jon Ashworth Leicester South 2011 Ian Austin Dudley North 2005 Adrian Bailey West Bromwich West 2000 Kevin Barron Rother Valley 1983 Margaret Beckett Derby South 1974 Member for Lincoln 1974–79, Derby South 1983– Hilary Benn Leeds Central 1999 Luciana Berger Liverpool Wavertree 2010 Clive Betts Sheffield South East 1992 Member for Sheffield Attercliffe 1992–2010, Sheffield South East 2010– Roberta Blackman-Woods City of Durham 2005 Tom Blenkinsop Middlesbrough South and East Cleveland 2010 Paul Blomfield Sheffield Central 2010 Ben Bradshaw Exeter 1997 Kevin Brennan Cardiff West 2001 Lyn Brown West Ham 2005 Nick Brown Newcastle upon Tyne East 1983 Member for Newcastle upon Tyne East 1983–97, Newcastle upon Tyne East and Wallsend 1997–2010, Newcastle upon Tyne East 2010– Chris Bryant Rhondda 2001 Karen Buck Westminster North 1997 Member for Regent's Park and Kensington North 1997–2010, Westminster North 2010– Richard Burden Birmingham Northfield 1992 Richard Burgon Leeds East 2015 Andy Burnham Leigh 2001 Dawn Butler Brent Central 2015 Liam Byrne Birmingham Hodge Hill 2004 Ruth Cadbury Brentford & Isleworth 2015 Alan Campbell Tynemouth 1997 Ronnie Campbell Blyth Valley 1987 Sarah Champion Rotherham 2012 Jenny Chapman Darlington 2010 Ann Clwyd Cynon Valley 1984 Vernon Coaker Gedling 1997 Ann Coffey Stockport 1987 Julie Cooper Burnley 2015 Rosie Cooper West Lancashire 2005 Yvette Cooper Normanton, Pontefract and Castleford 1997 Member for Pontefract and Castleford 1997–2010, Normanton, Pontefract and Castleford 2010– Jeremy Corbyn Islington North 1983 Jo Cox Batley and Spen 2015 Murdered in 2016 during the referendum campaign to leave the European Union. 
Succeeded by Tracy Brabin after a by-election Neil Coyle Bermondsey and Old Southwark 2015 David Crausby Bolton North East 1997 Mary Creagh Wakefield 2005 Stella Creasy Walthamstow 2010 Jon Cruddas Dagenham and Rainham 2001 Member for Dagenham 2001–2010, Dagenham and Rainham 2010– John Cryer Leyton and Wanstead 1997 Member for Hornchurch 1997–2005, Leyton and Wanstead 2010– Judith Cummins Bradford South 2015 Alex Cunningham Stockton North 2010 Jim Cunningham Coventry South 1992 Member for Coventry South East 1992–97, Coventry South 1997– Nic Dakin Scunthorpe 2010 Simon Danczuk Rochdale 2010 Suspended from the Labour Party in December 2015 Wayne David Caerphilly 2001 Geraint Davies Swansea West 1997 Member for Croydon Central 1997–2005, Swansea West 2010– Thangam Debbonaire Bristol West 2015 Gloria De Piero Ashfield 2010 Stephen Doughty Cardiff South and Penarth 2012 Jim Dowd Lewisham West and Penge 1992 Member for Lewisham West 1992–2010, Lewisham West and Penge 2010– Peter Dowd Bootle 2015 Jack Dromey Birmingham Erdington 2010 Michael Dugher Barnsley East 2010 Angela Eagle Wallasey 1992 Maria Eagle Garston and Halewood 1997 Member for Liverpool Garston 1997–2010, Garston and Halewood 2010– Clive Efford Eltham 1997 Julie Elliott Sunderland Central 2010 Louise Ellman Liverpool Riverside 1997 Natascha Engel North East Derbyshire 2005 Bill Esterson Sefton Central 2010 Chris Evans Islwyn 2010 Paul Farrelly Newcastle-under-Lyme 2001 Frank Field Birkenhead 1979 Jim Fitzpatrick Poplar and Limehouse 1997 Member for Poplar and Canning Town 1997–2010, Poplar and Limehouse 2010– Robert Flello Stoke-on-Trent South 2005 Colleen Fletcher Coventry North East 2015 Caroline Flint Don Valley 1997 Paul Flynn Newport West 1987 Yvonne Fovargue Makerfield 2010 Vicky Foxcroft Lewisham Deptford 2015 Mike Gapes Ilford South 1992 Barry Gardiner Brent North 1997 Pat Glass North West Durham 2010 Mary Glindon North Tyneside 2010 Roger Godsiff Birmingham Hall Green 1992 Member for Birmingham Small Heath 1992–97, Birmingham Sparkbrook and Small Heath 1997–2010, Birmingham Hall Green 2010– Helen Goodman Bishop Auckland 2005 Kate Green Stretford and Urmston 2010 Margaret Greenwood Wirral West 2015 Lilian Greenwood Nottingham South 2010 Nia Griffith Llanelli 2005 Andrew Gwynne Denton and Reddish 2005 Louise Haigh Sheffield Heeley 2015 Fabian Hamilton Leeds North East 1997 David Hanson Delyn 1992 Harriet Harman Camberwell and Peckham 1982 Member for Peckham 1982–97, Camberwell and Peckham 1997– Harry Harpham Sheffield Brightside and Hillsborough 2015 Died February 2016, triggering a by-election. 
Carolyn Harris Swansea East 2015 Helen Hayes Dulwich and West Norwood 2015 Sue Hayman Workington 2015 John Healey Wentworth and Dearne 1997 Member for Wentworth 1997–2010, Wentworth and Dearne 2010– Mark Hendrick Preston 2000 Stephen Hepburn Jarrow 1997 Meg Hillier Hackney South and Shoreditch 2005 Margaret Hodge Barking 1994 Sharon Hodgson Washington and Sunderland West 2005 Member for Gateshead East and Washington West 2005–2010, Washington and Sunderland West 2010– Kate Hoey Vauxhall 1989 Kate Hollern Blackburn 2015 Kelvin Hopkins Luton North 1997 George Howarth Knowsley 1986 Member for Knowsley North 1986–97, Knowsley North and Sefton East 1997–2010, Knowsley 2010– Lindsay Hoyle Chorley 1997 Tristram Hunt Stoke-on-Trent Central 2010 Rupa Huq Ealing Central & Acton 2015 Imran Hussain Bradford East 2015 Huw Irranca-Davies Ogmore 2002 Dan Jarvis Barnsley Central 2011 Alan Johnson Kingston upon Hull West and Hessle 1997 Diana Johnson Kingston upon Hull North 2005 Member for Hull North 2005–2010, Kingston upon Hull North 2010– Gerald Jones Merthyr Tydfil and Rhymney 2015 Graham Jones Hyndburn 2010 Helen Jones Warrington North 1997 Kevan Jones North Durham 2001 Susan Elan Jones Clwyd South 2010 Mike Kane Wythenshawe and Sale East 2014 Gerald Kaufman Manchester Gorton 1970 Member for Ardwick 1970–83, Manchester Gorton 1983– Barbara Keeley Worsley and Eccles South 2005 Member for Worsley 2005–2010, Worsley and Eccles South 2010– Liz Kendall Leicester West 2010 Sadiq Khan Tooting 2005 Stephen Kinnock Aberavon 2015 Peter Kyle Hove 2015 David Lammy Tottenham 2000 Ian Lavery Wansbeck 2010 Christopher Leslie Nottingham East 1997 Member for Shipley 1997–2005, Nottingham East 2010– Emma Lewell-Buck South Shields 2013 Clive Lewis Norwich South 2015 Ivan Lewis Bury South 1997 Rebecca Long-Bailey Salford and Eccles 2015 Ian Lucas Wrexham 2001 Holly Lynch Halifax 2015 Fiona Mactaggart Slough 1997 Justin Madders Ellesmere Port and Neston 2015 Khalid Mahmood Birmingham Perry Barr 2001 Shabana Mahmood Birmingham Ladywood 2010 Seema Malhotra Feltham and Heston 2011 John Mann Bassetlaw 2001 Rob Marris Wolverhampton South West 2001 Member 2001–2010, 2015– Gordon Marsden Blackpool South 1997 Rachael Maskell York Central 2015 Chris Matheson City of Chester 2015 Steve McCabe Birmingham Selly Oak 2010 Member for Birmingham Hall Green 1997–2010, Birmingham Selly Oak 2010– Kerry McCarthy Bristol East 2005 Siobhain McDonagh Mitcham and Morden 1997 Andy McDonald Middlesbrough 2012 John McDonnell Hayes and Harlington 1997 Pat McFadden Wolverhampton South East 2005 Conor McGinn St Helens North 2015 Alison McGovern Wirral South 2010 Liz McInnes Heywood and Middleton 2014 Catherine McKinnell Newcastle upon Tyne North 2010 Jim McMahon Oldham West and Royton 2015 Alan Meale Mansfield 1987 Ian Mearns Gateshead 2010 Ed Miliband Doncaster North 2005 Madeleine Moon Bridgend 2005 Jessica Morden Newport East 2005 Grahame Morris Easington 2010 Ian Murray Edinburgh South 2010 Lisa Nandy Wigan 2010 Melanie Onn Great Grimsby 2015 Chi Onwurah Newcastle upon Tyne Central 2010 Kate Osamor Edmonton 2015 Albert Owen Ynys Mon 2001 Teresa Pearce Erith and Thamesmead 2010 Matthew Pennycook Greewich and Woolwich 2015 Toby Perkins Chesterfield 2010 Jess Phillips Birmingham Yardley 2015 Bridget Phillipson Houghton and Sunderland South 2010 Stephen Pound Ealing North 1997 Lucy Powell Manchester Central 2012 Yasmin Qureshi Bolton South East 2010 Angela Rayner Ashton-under-Lyne 2015 Jamie Reed Copeland 2005 Steve Reed Croydon North 2012 Christina 
Rees Neath 2015 Rachel Reeves Leeds West 2010 Emma Reynolds Wolverhampton North East 2010 Jonathan Reynolds Stalybridge and Hyde 2010 Marie Rimmer St Helens South and Whiston 2015 Geoffrey Robinson Coventry North West 1976 Steve Rotherham Liverpool Walton 2010 Joan Ryan Enfield North 2015 Naz Shah Bradford West 2015 Suspended from the Labour Party in April 2016 Virendra Sharma Ealing Southall 2007 Barry Sheerman Huddersfield 1979 Member for Huddersfield East 1979–83, Huddersfield 1983– Paula Sherriff Dewsbury 2015 Gavin Shuker Luton South 2010 Tulip Siddiq Hampstead and Kilburn 2015 Dennis Skinner Bolsover 1970 Andy Slaughter Hammersmith 2005 Member for Ealing, Acton and Shepherd's Bush 2005–2010, Hammersmith 2010– Ruth Smeeth Stoke-on-Trent North 2015 Andrew Smith Oxford East 1987 Angela Smith Penistone and Stocksbridge 2005 Member for Sheffield Hillsborough 2005–2010, Penistone and Stocksbridge 2010– Cat Smith Lancaster & Fleetwood 2015 Jeff Smith Manchester Withington 2015 Nick Smith Blaenau Gwent 2010 Owen Smith Pontypridd 2010 Karin Smyth Bristol South 2015 John Spellar Warley 1982 Member for Birmingham Northfield 1982–83, Warley West 1992–97, Warley 1997– Sir Keir Starmer Holborn and St Pancras 2015 Jo Stevens Cardiff Central 2015 Wes Streeting Ilford North 2015 Graham Stringer Blackley and Broughton 1997 Member for Manchester Blackley, Blackley and Broughton 2010– Gisela Stuart Birmingham Edgbaston 1997 Mark Tami Alyn and Deeside 2001 Gareth Thomas Harrow West 1997 Nick Thomas-Symonds Torfaen 2015 Emily Thornberry Islington South and Finsbury 2005 Stephen Timms East Ham 1994 Member for Newham North East 1994–97, East Ham 1997– Jon Trickett Hemsworth 1996 Anna Turley Redcar 2015 Karl Turner Kingston upon Hull East 2010 Derek Twigg Halton 1997 Stephen Twigg Liverpool West Derby 1997 Member for Enfield Southgate 1997–2005, Liverpool West Derby 2010– Chuka Umunna Streatham 2010 Keith Vaz Leicester East 1987 Valerie Vaz Walsall South 2010 Tom Watson West Bromwich East 2001 Catherine West Hornsey & Wood Green 2015 Alan Whitehead Southampton Test 1997 Phil Wilson Sedgefield 2007 David Winnick Walsall North 1966 Member for Croydon South 1966–70, Walsall North 1979– Rosie Winterton Doncaster Central 1997 John Woodcock Barrow and Furness 2010 Iain Wright Hartlepool 2004 Daniel Zeichner Cambridge 2015
See also
Labour Co-operative
Labour In for Britain
Labour Leave
Labour Party in Northern Ireland
Labour Representation Committee election results
List of Labour Parties
List of Labour Party (UK) MPs
List of organisations associated with the British Labour Party
List of UK Labour Party general election manifestos
People's Assembly Against Austerity
Politics of the United Kingdom
Scottish Labour Party
Socialist Labour Party (UK)
Socialist Party (England and Wales)
Welsh Labour
References
Bibliography
Further reading
Davies, A. J. To Build a New Jerusalem: Labour Movement from the 1890s to the 1990s (1996).
Driver, Stephen and Luke Martell. New Labour: Politics after Thatcherism (Polity Press, 2nd ed. 2006).
Field, Geoffrey G. Blood, Sweat, and Toil: Remaking the British Working Class, 1939–1945 (2011) online
Foote, Geoffrey. The Labour Party's Political Thought: A History (Macmillan, 1997).
Francis, Martin. Ideas and Policies under Labour 1945–51 (Manchester UP, 1997).
Howell, David. British Social Democracy (Croom Helm, 1976).
Howell, David. MacDonald's Party, (Oxford University Press, 2002).
Kavanagh, Dennis. The Politics of the Labour Party (Routledge, 2013).
Matthew, H. C. G., R. I. McKibbin, J. A. Kay. "The Franchise Factor in the Rise of the Labour Party," English Historical Review 91#361 (Oct. 1976), pp. 723–752 in JSTOR
Miliband, Ralph. Parliamentary Socialism (1972).
Mioni, Michele. "The Attlee government and welfare state reforms in post-war Italian Socialism (1945–51): Between universalism and class policies." Labor History 57#2 (2016): 277-297. DOI:10.1080/0023656X.2015.1116811
Morgan, Kenneth O. Labour in Power, 1945–51, OUP, 1984
Morgan, Kenneth O. Labour People: Leaders and Lieutenants, Hardie to Kinnock OUP, 1992, scholarly biographies of 30 key leaders.
Pelling, Henry, and Alastair J. Reid, A Short History of the Labour Party, Palgrave Macmillan, 2005 ed.
Pimlott, Ben. Labour and the Left in the 1930s (Cambridge University Press, 1977).
Plant, Raymond, Matt Beech and Kevin Hickson (2004), The Struggle for Labour's Soul: understanding Labour's political thought since 1945, Routledge
Ponting, Clive. Breach of Promise, 1964–70 (Penguin, 1990).
Reeves, Rachel, and Martin McIvor. "Clement Attlee and the foundations of the British welfare state." Renewal: a Journal of Labour Politics 22.3/4 (2014): 42+ online.
Rogers, Chris. "‘Hang on a Minute, I've Got a Great Idea’: From the Third Way to Mutual Advantage in the Political Economy of the British Labour Party." British Journal of Politics and International Relations 15#1 (2013): 53-69.
Rosen, Greg, ed. Dictionary of Labour Biography. Politicos Publishing, 2001, 665pp; short biographies
Rosen, Greg. Old Labour to New, Politicos Publishing, 2005
Shaw, Eric. The Labour Party since 1979: Crisis and Transformation (Routledge, 1994).
Shaw, Eric. "Understanding Labour Party Management under Tony Blair." Political Studies Review 14.2 (2016): 153-162.
External links
Official party websites
Labour
Scottish Labour
Welsh Labour
London Assembly Labour
Young Labour – Party youth wing
Labour Party in Northern Ireland
Labour Party in Westminster
Social media pages
Facebook
Twitter
YouTube channel
Other
Labour History Group website
Guardian Unlimited Politics—Special Report: Labour Party
Tony Benn Speech Archive, former Labour Party Chairman, 1971–72
Labour History Archive and Study Centre holds archives of the National Labour Party
Labour Campaign for Electoral Reform website
Category:Labour parties
Category:Labour parties in the United Kingdom
Category:1900 establishments in the United Kingdom
Category:Democratic socialist parties in Europe
Category:Party of European Socialists member parties
Category:Parties represented in the European Parliament
Category:Political parties established in 1900
Category:Progressive Alliance
Category:Second International
Category:Socialist parties in the United Kingdom
Category:Social democratic parties in the United Kingdom
Category:Socialist International
Category:Anti-austerity political parties in the United Kingdom
Separation of church and state in the United States
"Separation of church and state" is a phrase used by Thomas Jefferson and others expressing an understanding of the intent and function of the Establishment Clause and Free Exercise Clause of the First Amendment to the Constitution of the United States, which reads: "Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof..."
The phrase "separation of church and state" is generally traced to a January 1, 1802 letter by Thomas Jefferson, addressed to the Danbury Baptist Association in Connecticut, and published in a Massachusetts newspaper. Jefferson wrote,
Jefferson was echoing the language of the founder of the first Baptist church in America, Roger Williams, who had written in 1644 of "[A] hedge or wall of separation between the garden of the church and the wilderness of the world."Article Six of the United States Constitution also specifies that "no religious Test shall ever be required as a Qualification to any Office or public Trust under the United States."
Jefferson's metaphor of a wall of separation has been cited repeatedly by the U.S. Supreme Court. In Reynolds v. United States (1879) the Court wrote that Jefferson's comments "may be accepted almost as an authoritative declaration of the scope and effect of the [First] Amendment." In Everson v. Board of Education (1947), Justice Hugo Black wrote: "In the words of Thomas Jefferson, the clause against establishment of religion by law was intended to erect a wall of separation between church and state."Jefferson's Danbury letter has been cited favorably by the Supreme Court several times.
However, the Court has not always interpreted the constitutional principle as absolute, and the proper extent of separation between government and religion in the U.S. remains an ongoing subject of impassioned debate.See Lynch v. Donnelly, 465 U.S. 668, 673 (1984) ("The concept of a 'wall' of separation is a useful figure of speech probably deriving from views of Thomas Jefferson. ... [b]ut the metaphor itself is not a wholly accurate description of the practical aspects of the relationship that in fact exists between church and state.")Committee for Public Education & Religious Liberty v. Nyquist, 413 U.S. 756, 760 (1973) ("Yet, despite Madison's admonition and the 'sweep of the absolute prohibitions' of the Clauses, this Nation's history has not been one of entirely sanitized separation between Church and State. It has never been thought either possible or desirable to enforce a regime of total separation.")Zorach v. Clauson, 343 U.S. 306, 312 (U.S. 1952) ("The First Amendment, however, does not say that in every and all respects there shall be a separation of Church and State.").Lemon v. Kurtzman, 403 U.S. 602 (1971) ("Our prior holdings do not call for total separation between church and state; total separation is not possible in an absolute sense.")
Early history
Many early immigrant groups traveled to America to worship freely, particularly after the English Civil War and religious conflict in France and Germany.The Cousins' Wars, Kevin Phillips, 1999 They included nonconformists like the Puritans, who were Protestant Christians fleeing religious persecution from the Anglican King of England. Despite a common background, the groups' views on religious toleration were mixed. While some such as Roger Williams of Rhode Island and William Penn of Pennsylvania ensured the protection of religious minorities within their colonies, others like the Plymouth Colony and Massachusetts Bay Colony had established churches. The Dutch colony of New Netherland established the Dutch Reformed Church and outlawed all other worship, though enforcement was sparse. Religious conformity was desired partly for financial reasons: the established Church was responsible for poverty relief, putting dissenting churches at a significant disadvantage.
Former state churches in British North America
Protestant colonies
The colony of Plymouth was founded by the Pilgrims, English Dissenters or Separatists who were Calvinists.
The colonies of Massachusetts Bay, New Haven, and New Hampshire were founded by Puritans, who were Calvinist Protestants.
New Netherland was founded by Dutch Reformed Calvinists.
The colonies of New York, Virginia, North Carolina, South Carolina, and Georgia were officially Church of England.
Catholic colonies
When New France was transferred to Great Britain in 1763, the Catholic Church remained under toleration, but Huguenots were allowed entrance where they had formerly been banned from settlement by Parisian authorities.
The Colony of Maryland was founded by a charter granted in 1632 to George Calvert, secretary of state to Charles I, and his son Cecil, both recent converts to Catholicism. Under their leadership many English Catholic gentry families settled in Maryland. However, the colonial government was officially neutral in religious affairs, granting toleration to all Christian groups and enjoining them to avoid actions which antagonized the others. On several occasions low-church dissenters led insurrections which temporarily overthrew the Calvert rule. In 1689, when William and Mary came to the English throne, they acceded to demands to revoke the original royal charter. In 1701 the Church of England was proclaimed the established church, and in the course of the eighteenth century Maryland Catholics were first barred from public office, then disenfranchised, although not all of the laws passed against them (notably laws restricting property rights and imposing penalties for sending children to be educated in foreign Catholic institutions) were enforced, and some Catholics even continued to hold public office.
When Spanish Florida was ceded to Great Britain in 1763, the British divided Florida into two colonies. Both East and West Florida continued a policy of toleration for their Catholic residents.
Colonies with no established church
The Province of Pennsylvania was founded by Quakers, but the colony never had an established church.
West Jersey, also founded by Quakers, prohibited any establishment.
Delaware Colony
The Colony of Rhode Island and Providence Plantations, founded by religious dissenters, is widely regarded as the first polity to grant religious freedom to all its citizens.
Tabular summary
Colony | Denomination | Disestablished
Connecticut | Congregational | 1818
Georgia | Church of England | 1789
Maryland | Catholic/Church of England | 1701/1776
Massachusetts | Congregational | 1780 (in 1833 state funding suspended)
New Brunswick | Church of England | –
New Hampshire | Congregational | 1790
Newfoundland | Church of England | –
North Carolina | Church of England | 1776
Nova Scotia | Church of England | 1850
Prince Edward Island | Church of England | –
South Carolina | Church of England | 1790
Canada West | Church of England | 1854
West Florida | Church of England | N/A
East Florida | Church of England | N/A
Virginia | Church of England | 1786
West Indies | Church of England | 1868
In several colonies, the establishment ceased to exist in practice at the Revolution, about 1776; the dates given above are those of permanent legal abolition.
In 1789 the Georgia Constitution was amended as follows:
"Article IV. Section 10. No person within this state shall, upon any pretense, be deprived of the inestimable privilege of worshipping God in any manner agreeable to his own conscience, nor be compelled to attend any place of worship contrary to his own faith and judgment; nor shall he ever be obliged to pay tithes, taxes, or any other rate, for the building or repairing any place of worship, or for the maintenance of any minister or ministry, contrary to what he believes to be right, or hath voluntarily engaged to do. No one religious society shall ever be established in this state, in preference to another; nor shall any person be denied the enjoyment of any civil right merely on account of his religious principles."
From 1780 Massachusetts had a system which required every man to belong to a church, and permitted each church to tax its members, but forbade any law requiring that it be of any particular denomination. This was objected to, as in practice establishing the Congregational Church, the majority denomination, and was abolished in 1833.
Until 1877 the New Hampshire Constitution required members of the State legislature to be of the Protestant religion.
The North Carolina Constitution of 1776 disestablished the Anglican church, but until 1835 the NC Constitution allowed only Protestants to hold public office. From 1835 to 1876 it allowed only Christians (including Catholics) to hold public office. Article VI, Section 8 of the current NC Constitution forbids only atheists from holding public office.Article VI of the North Carolina state constitution Such clauses were held by the United States Supreme Court to be unenforceable in the 1961 case of Torcaso v. Watkins, when the court ruled unanimously that such clauses constituted a religious test incompatible with First and Fourteenth Amendment protections.
Religious tolerance for Catholics with an established Church of England was policy in the former Spanish Colonies of East and West Florida while under British rule.
In the Treaty of Paris (1783), which ended the American Revolutionary War, the British ceded both East and West Florida back to Spain (see Spanish Florida).
Tithes for the support of the Anglican Church in Virginia were suspended in 1776, and never restored. 1786 is the date of the Virginia Statute of Religious Freedom, which prohibited any coercion to support any religious body.
Colonial support for separation
The Flushing Remonstrance shows support for separation of church and state as early as the mid-17th century, stating their opposition to religious persecution of any sort: "The law of love, peace and liberty in the states extending to Jews, Turks and Egyptians, as they are considered sons of Adam, which is the glory of the outward state of Holland, so love, peace and liberty, extending to all in Christ Jesus, condemns hatred, war and bondage." The document was signed December 27, 1657 by a group of English citizens in America who were affronted by persecution of Quakers and the religious policies of the Governor of New Netherland, Peter Stuyvesant. Stuyvesant had formally banned all religions other than the Dutch Reformed Church from being practiced in the colony, in accordance with the laws of the Dutch Republic. The signers indicated their "desire therefore in this case not to judge lest we be judged, neither to condemn least we be condemned, but rather let every man stand or fall to his own Master.""Remonstrance of the Inhabitants of the Town of Flushing to Governor Stuyvesant", Dec 27, 1657. Stuyvesant fined the petitioners and threw them in prison until they recanted. However, John Bowne allowed the Quakers to meet in his home. Bowne was arrested, jailed, and sent to the Netherlands for trial; the Dutch court exonerated Bowne.
New York Historical Society President and Columbia University Professor of History Kenneth T. Jackson describes the Flushing Remonstrance as "the first thing that we have in writing in the United States where a group of citizens attests on paper and over their signature the right of the people to follow their own conscience with regard to God - and the inability of government, or the illegality of government, to interfere with that.""Drawing the Line Between Church and State", CBS News, Dec 23, 2007.
Given the wide diversity of opinion on Christian theological matters in the newly independent American States, the Constitutional Convention believed a government-sanctioned (established) religion would disrupt rather than bind the newly formed union together. George Washington wrote a letter in 1790 to the country's first Jewish congregation, the Touro Synagogue in Newport, Rhode Island, to state:
Allowing rights and immunities of citizenship. It is now no more that toleration is spoken of, as if it were by the indulgence of one class of people, that another enjoyed the exercise of their inherent natural rights. For happily the Government of the United States, which gives to bigotry no sanction, to persecution no assistance requires only that they who live under its protection should demean themselves as good citizens, in giving it on all occasions their effectual support.
There were also opponents to the support of any established church even at the state level. In 1773, Isaac Backus, a prominent Baptist minister in New England, wrote against a state sanctioned religion, saying: "Now who can hear Christ declare, that his kingdom is, not of this world, and yet believe that this blending of church and state together can be pleasing to him?" He also observed that when "church and state are separate, the effects are happy, and they do not at all interfere with each other: but where they have been confounded together, no tongue nor pen can fully describe the mischiefs that have ensued." Thomas Jefferson's influential Virginia Statute for Religious Freedom was enacted in 1786, five years before the Bill of Rights.
Most Anglican ministers, and many Anglicans, were Loyalists. The Anglican establishment, where it had existed, largely ceased to function during the American Revolution, though the new States did not formally abolish and replace it until some years after the Revolution.
Jefferson, Madison, and the "wall of separation"
The phrase "[A] hedge or wall of separation between the garden of the church and the wilderness of the world" was first used by Baptist theologian Roger Williams, the founder of the colony of Rhode Island, in his 1644 book The Bloody Tenent of Persecution."Mr. Cotton's Letter Lately Printed, Examined and Answered," The Complete Writings of Roger Williams, Volume 1, page 108 (1644).Feldman, Noah (2005). Divided by God. Farrar, Straus and Giroux, p. 24 ("Williams's metaphor was rediscovered by Isaac Backus, a New England Baptist of Jefferson's generation, who believed, like Williams, that an established church—which he considered to exist in the Massachusetts of his day—would never protect religious dissenters like himself and must be opposed in order to keep religion pure.") The phrase was later used by Thomas Jefferson as a description of the First Amendment and its restriction on the legislative branch of the federal government, in an 1802 letterTo Messrs. Nehemiah Dodge and Others, a Committee of the Danbury Baptist Association, in the State of Connecticut. January 1, 1802. Full text available online. to the Danbury Baptists (a religious minority concerned about the dominant position of the Congregationalist church in Connecticut):
Believing with you that religion is a matter which lies solely between man and his god, that he owes account to none other for his faith or his worship, that the legitimate powers of government reach actions only, and not opinions, I contemplate with sovereign reverence that act of the whole American people which declared that their "legislature" should "make no law respecting an establishment of religion, or prohibiting the free exercise thereof," thus building a wall of separation between church and State. Adhering to this expression of the supreme will of the nation in behalf of the rights of conscience, I shall see with sincere satisfaction the progress of those sentiments which tend to restore to man all his natural rights, convinced he has no natural right in opposition to his social duties.
Jefferson's letter was in reply to a letterDanbury Baptist Association's letter to Thomas Jefferson, October 7, 1801. Full text available online. that he had received from the Danbury Baptist Association dated October 7, 1801. In an 1808 letter to Virginia Baptists, Jefferson used the same theme:
We have solved, by fair experiment, the great and interesting question whether freedom of religion is compatible with order in government and obedience to the laws. And we have experienced the quiet as well as the comfort which results from leaving every one to profess freely and openly those principles of religion which are the inductions of his own reason and the serious convictions of his own inquiries.
Jefferson and James Madison's conceptions of separation have long been debated. Jefferson refused to issue Proclamations of Thanksgiving sent to him by Congress during his presidency, though he did issue a Thanksgiving and Prayer proclamation as Governor of Virginia.Official Letters of the Governors of the State of Virginia (Virginia State Library, 1928), Vol. II, pp. 64–66, November 11, 1779.Lee v. Weisman, 505 U.S. 577 (1992) (Souter, J., concurring)("President Jefferson, for example, steadfastly refused to issue Thanksgiving proclamations of any kind, in part because he thought they violated the Religion Clauses.") Madison issued four religious proclamations while President,James D. Richardson, A Compilation of the Messages and Papers of the Presidents (Washington: Bureau of National Literature, 1897), Vol. II, pp. 498, 517–518, 543, 545–546. but vetoed two bills on the grounds they violated the first amendment.James Madison's veto messages On the other hand, both Jefferson and Madison attended religious services at the Capitol.Religion and the Founding of the American Republic; Library of Congress exhibit website. Retrieved 2007-02-07 Years before the ratification of the Constitution, Madison contended "Because if Religion be exempt from the authority of the Society at large, still less can it be subject to that of the Legislative Body."James Madison, Memorial and Remonstrance against Religious Assessments After retiring from the presidency, Madison wrote of "total separation of the church from the state."(March 2, 1819 letter to Robert Walsh), " "Strongly guarded as is the separation between Religion & Govt in the Constitution of the United States," Madison wrote, and he declared, "practical distinction between Religion and Civil Government is essential to the purity of both, and as guaranteed by the Constitution of the United States."(1811 letter to Baptist Churches) In a letter to Edward Livingston Madison further expanded, "We are teaching the world the great truth that Govts. do better without Kings & Nobles than with them. The merit will be doubled by the other lesson that Religion flourishes in greater purity, without than with the aid of Govt."Madison's letter to Edward Livingston, July 10, 1822 Madison's original draft of the Bill of Rights had included provisions binding the States, as well as the Federal Government, from an establishment of religion, but the House did not pass them.
Jefferson's opponents said his position was the destruction and the governmental rejection of Christianity, but this was a caricature.See Morison and Commager, vol I In setting up the University of Virginia, Jefferson encouraged all the separate sects to have preachers of their own, though there was a constitutional ban on the State supporting a Professorship of Divinity, arising from his own Virginia Statute for Religious Freedom.Jefferson's letter to Thomas Cooper, November 2, 1822 Some have argued that this arrangement was "fully compatible with Jefferson's views on the separation of church and state;"Dumas Malone, Jefferson and His Times, 6, 393 however, others point to Jefferson's support for a scheme in which students at the university would attend religious worship each morning as evidence that his views were not consistent with strict separation.Ashley M. Bell, "God Save this Honorable Court": How Current Establishment Clause Jurisprudence can be Reconciled with the Secularization of Historical Religious Expressions, 50 Am. U.L. Rev. 1273, 1282 n.49 (2001) Still other scholars, such as Mark David Hall, attempt to sidestep the whole issue by arguing that American jurisprudence focuses too narrowly on this one Jeffersonian letter while failing to account for other relevant history.Hall, Mark David. "Jeffersonian Walls and Madisonian Lines: The Supreme Court's Use of History in Religion Clause Cases." Oregon Law Review 85 (2006), 563–614
Jefferson's letter entered American jurisprudence in the 1878 Mormon polygamy case Reynolds v. U.S., in which the court cited Jefferson and Madison, seeking a legal definition for the word religion. Writing for the majority, Chief Justice Morrison Waite cited Jefferson's Letter to the Danbury Baptists to state that "Congress was deprived of all legislative power over mere opinion, but was left free to reach actions which were in violation of social duties or subversive of good order."Reynolds v. U.S., 98 U.S. 145 (1878)
Considering this, the court ruled that outlawing polygamy was constitutional.
Patrick Henry, Massachusetts, and Connecticut
Jefferson and Madison's approach was not the only one taken in the eighteenth century. Jefferson's Statute of Religious Freedom was drafted in opposition to a bill, chiefly supported by Patrick Henry, which would permit any Virginian to belong to any denomination, but which would require him to belong to some denomination and pay taxes to support it. Similarly, the Constitution of Massachusetts originally provided that "no subject shall be hurt, molested, or restrained, in his person, liberty, or estate, for worshipping God in the manner and season most agreeable to the dictates of his own conscience... provided he doth not disturb the public peace, or obstruct others in their religious worship," (Article II) but also that:
the people of this commonwealth have a right to invest their legislature with power to authorize and require, and the legislature shall, from time to time, authorize and require, the several towns, parishes, precincts, and other bodies politic, or religious societies, to make suitable provision, at their own expense, for the institution of the public worship of God, and for the support and maintenance of public Protestant teachers of piety, religion and morality, in all cases where such provision shall not be made voluntarily.
And the people of this commonwealth have also a right to, and do, invest their legislature with authority to enjoin upon all the subjects an attendance upon the instructions of the public teachers aforesaid, at stated times and seasons, if there be any on whose instructions they can conscientiously and conveniently attend. (Article III)
Since, in practice, this meant that the decision of who was taxable for a particular religion rested in the hands of the selectmen, usually Congregationalists, this system was open to abuse. It was abolished in 1833. The intervening period is sometimes referred to as an "establishment of religion" in Massachusetts.
The Duke of York had required that every community in his new lands of New York and New Jersey support some church, but this was more often Dutch Reformed, Quaker or Presbyterian, than Anglican. Some chose to support more than one church. He also ordained that the tax-payers were free, having paid his local tax, to choose their own church. The terms for the surrender of New Amsterdam had provided that the Dutch would have liberty of conscience, and the Duke, as an openly divine-right Catholic, was no friend of Anglicanism. The first Anglican minister in New Jersey arrived in 1698, though Anglicanism was more popular in New York.The Story of New Jersey; ed., William Starr Myers (1945) Vol. II, chapter 4
Connecticut had a real establishment of religion. Its citizens did not adopt a constitution at the Revolution, but rather amended their Charter to remove all references to the British Government. As a result, the Congregational Church continued to be established, and Yale College, at that time a Congregational institution, received grants from the State until Connecticut adopted a constitution in 1818 partly because of this issue.
Test acts
The absence of an establishment of religion did not necessarily imply that all men were free to hold office. Most colonies had a Test Act, and several states retained them for a short time. This stood in contrast to the Federal Constitution, which explicitly prohibits the employment of any religious test for Federal office, and which, through the Fourteenth Amendment, later extended this prohibition to the States.
For example, the New Jersey Constitution of 1776 provides liberty of conscience in much the same language as Massachusetts (similarly forbidding payment of "taxes, tithes or other payments" contrary to conscience). It then provides:
That there shall be no establishment of any one religious sect in this Province, in preference to another; and that no Protestant inhabitant of this Colony shall be denied the enjoyment of any civil right, merely on account of his religious principles; but that all persons, professing a belief in the faith of any Protestant sect, who shall demean themselves peaceably under the government, as hereby established, shall be capable of being elected into any office of profit or trust, or being a member of either branch of the Legislature, and shall fully and freely enjoy every privilege and immunity, enjoyed by others their fellow subjects.Article XIX, italics added.
This would permit a Test Act, but did not require one.
The original charter of the Province of East Jersey had restricted membership in the Assembly to Christians; the Duke of York was fervently Catholic, and the proprietors of Perth Amboy, New Jersey were Scottish Catholic peers. The Province of West Jersey had declared, in 1681, that there should be no religious test for office. An oath had also been imposed on the militia during the French and Indian War requiring them to abjure the pretensions of the Pope, which may or may not have been applied during the Revolution. That law was replaced by 1799.
The Pennsylvania Constitution of 1776 provided:
And each member, before he takes his seat, shall make and subscribe the following declaration, viz:
I do believe in one God, the creator and governor of the universe, the rewarder of the good and the punisher of the wicked. And I do acknowledge the Scriptures of the Old and New Testament to be given by Divine inspiration.
And no further or other religious test shall ever hereafter be required of any civil officer or magistrate in this State.
Again, it provided in general that all tax-paying freemen and their sons were able to vote, and that no "man, who acknowledges the being of a God, be justly deprived or abridged of any civil right as a citizen, on account of his religious sentiments or peculiar mode of religious worship."
The U.S. Constitution
Article 6
Article Six of the United States Constitution provides that "no religious Test shall ever be required as a Qualification to any Office or public Trust under the United States". Prior to the adoption of the Bill of Rights, this was the only mention of religion in the Constitution.
The First Amendment
The First Amendment to the US Constitution states that "Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof." The two parts, known as the "establishment clause" and the "free exercise clause" respectively, form the textual basis for the Supreme Court's interpretations of the "separation of church and state" doctrine. Three central concepts were derived from the First Amendment which became America's doctrine for church-state separation: no coercion in religious matters, no requirement to support a religion against one's will, and religious liberty encompassing all religions. In sum, citizens are free to embrace or reject a faith, any support for religion (financial or physical) must be voluntary, and all religions are equal in the eyes of the law with no special preference or favoritism.
The First Congress' deliberations show that its understanding of the separation of church and state differed sharply from that of its European contemporaries. As the 19th-century Union Theological Seminary historian Philip Schaff observed:
The American separation of church and state rests upon respect for the church; the [European anticlerical] separation, on indifference and hatred of the church, and of religion itself…. The constitution did not create a nation, nor its religion and institutions. It found them already existing, and was framed for the purpose of protecting them under a republican form of government, in a rule of the people, by the people, and for the people.
An August 15, 1789 entry in Madison's papers indicates he intended for the establishment clause to prevent the government imposition of religious beliefs on individuals. The entry says: "Mr. Madison said he apprehended the meaning of the words to be, that Congress should not establish a religion, and enforce the legal observation of it by law, nor compel men to worship God in any manner contrary to their conscience. ..."The Founders' Constitution Volume 5, Amendment I (Religion), Document 53. The University of Chicago Press. Retrieved 2007-08-09.
Some legal scholars, such as John Baker of LSU, theorize that Madison's initial proposed language—that Congress should make no law regarding the establishment of a "national religion"—was rejected by the House, in favor of the more general "religion" in an effort to appease the Anti-Federalists. To both the Anti-Federalists and the Federalists, the very word "national" was a cause for alarm because of the experience under the British crown.Forgotten Purposes of the First Amendment Religion Clauses Gary D. Glenn. The Review of Politics, Vol. 49, No. 3 (Summer, 1987), pp. 340–367. During the debate over the establishment clause, Rep. Elbridge Gerry of Massachusetts took issue with Madison's language regarding whether the government was a national or federal government (in which the states retained their individual sovereignty), which Baker suggests compelled Madison to withdraw his language from the debate.
Following the argument between Madison and Gerry, Rep. Samuel Livermore of New Hampshire proposed language stating that, "Congress shall make no laws touching religion or the rights of conscience." This raised an uproar from members, such as Rep. Benjamin Huntington of Connecticut and Rep. Peter Sylvester of New York, who worried the language could be used to harm religious practice.
Others, such as Rep. Roger Sherman of Connecticut, believed the clause was unnecessary because the original Constitution only gave Congress stated powers, which did not include establishing a national religion.
Anti-Federalists such as Rep. Thomas Tucker of South Carolina moved to strike the establishment clause completely because it could preempt the religious clauses in the state constitutions. However, the Anti-Federalists were unsuccessful in persuading the House of Representatives to drop the clause from the first amendment.
The Senate went through several more narrowly targeted versions before reaching the contemporary language. One version read, "Congress shall make no law establishing one religious sect or society in preference to others, nor shall freedom of conscience be infringed," while another read, "Congress shall make no law establishing one particular religious denomination in preference to others." Ultimately, the Senate rejected the more narrowly targeted language.
At the time of the passage of the Bill of Rights, many states acted in ways that would now be held unconstitutional. All of the early official state churches were eventually disestablished, the last being that of Massachusetts in 1833; the Congregationalist establishment in Connecticut had ended with the state's constitution of 1818. It is commonly accepted that, under the doctrine of Incorporation—which uses the Due Process clause of the Fourteenth Amendment to hold the Bill of Rights applicable to the states—these state churches could not be reestablished today.
Yet the provisions of state constitutions protected religious liberty, particularly the so-called freedom of conscience. During the nineteenth century (and before the incorporation of the First Amendment of the U.S. Constitution through the Fourteenth Amendment), litigants turned to these provisions to challenge Sunday laws (blue laws), bible-reading in schools, and other ostensibly religious regulations.Kyle G. Volk, Moral Minorities and the Making of American Democracy (Oxford University Press, 2014)
The 14th Amendment
The Fourteenth Amendment to the United States Constitution (Amendment XIV) is one of the post-Civil War amendments, intended to secure rights for former slaves. It includes the due process and equal protection clauses among others. The amendment introduces the concept of incorporation of all relevant federal rights against the states. While it has not been fully implemented, the doctrine of incorporation has been used, chiefly through the Due Process Clause, to ensure the application of most of the rights enumerated in the Bill of Rights to the states.
The incorporation of the First Amendment establishment clause in the landmark case of Everson v. Board of Education has impacted the subsequent interpretation of the separation of church and state in regard to the state governments.Everson v. Board of Education, 330 U.S. 1 (1947). Although upholding the state law in that case, which provided for public busing to private religious schools, the Supreme Court held that the First Amendment establishment clause was fully applicable to the state governments. A more recent case involving the application of this principle against the states was Board of Education of Kiryas Joel Village School District v. Grumet (1994).
The "Separation" principle and the Supreme Court
Jefferson's concept of "separation of church and state" first became a part of Establishment Clause jurisprudence in Reynolds v. U.S., 98 U.S. 145 (1878).REYNOLDS v. U.S., 98 U.S. 145 (1878) 98 U.S. 145 In that case, the court examined the history of religious liberty in the US, determining that while the constitution guarantees religious freedom, "The word 'religion' is not defined in the Constitution. We must go elsewhere, therefore, to ascertain its meaning, and nowhere more appropriately, we think, than to the history of the times in the midst of which the provision was adopted." The court found that the leaders in advocating and formulating the constitutional guarantee of religious liberty were James Madison and Thomas Jefferson. Quoting the "separation" paragraph from Jefferson's letter to the Danbury Baptists, the court concluded that, "coming as this does from an acknowledged leader of the advocates of the measure, it may be accepted almost as an authoritative declaration of the scope and effect of the amendment thus secured."
The centrality of the "separation" concept to the Religion Clauses of the Constitution was made explicit in Everson v. Board of Education, 330 U.S. 1 (1947), a case dealing with a New Jersey law that allowed government funds to pay for transportation of students to both public and Catholic schools. This was the first case in which the court applied the Establishment Clause to the laws of a state, having interpreted the due process clause of the Fourteenth Amendment as applying the Bill of Rights to the states as well as the federal legislature. Citing Jefferson, the court concluded that "The First Amendment has erected a wall between church and state. That wall must be kept high and impregnable. We could not approve the slightest breach."
While the decision (with four dissents) ultimately upheld the state law allowing the funding of transportation of students to religious schools, the majority opinion (by Justice Hugo Black) and the dissenting opinions (by Justice Wiley Blount Rutledge and Justice Robert H. Jackson) each explicitly stated that the Constitution has erected a "wall between church and state" or a "separation of Church from State": their disagreement was limited to whether this case of state funding of transportation to religious schools breached that wall. Rutledge, on behalf of the four dissenting justices, took the position that the majority had indeed permitted a violation of the wall of separation in this case: "Neither so high nor so impregnable today as yesterday is the wall raised between church and state by Virginia's great statute of religious freedom and the First Amendment, now made applicable to all the states by the Fourteenth." Writing separately, Justice Jackson argued that "[T]here are no good grounds upon which to support the present legislation. In fact, the undertones of the opinion, advocating complete and uncompromising separation of Church from State, seem utterly discordant with its conclusion yielding support to their commingling in educational matters."
In 1962, the Supreme Court addressed the issue of officially sponsored prayer or religious recitations in public schools. In Engel v. Vitale, 370 U.S. 421 (1962), the Court, by a vote of 6-1, determined it unconstitutional for state officials to compose an official school prayer and require its recitation in public schools, even when the prayer is non-denominational and students may excuse themselves from participation. (The prayer required by the New York State Board of Regents prior to the Court's decision consisted of: "Almighty God, we acknowledge our dependence upon Thee, and we beg Thy blessings upon us, our parents, our teachers, and our country. Amen.") As the Court stated:
The petitioners contend, among other things, that the state laws requiring or permitting use of the Regents' prayer must be struck down as a violation of the Establishment Clause because that prayer was composed by governmental officials as a part of a governmental program to further religious beliefs. For this reason, petitioners argue, the State's use of the Regents' prayer in its public school system breaches the constitutional wall of separation between Church and State. We agree with that contention, since we think that the constitutional prohibition against laws respecting an establishment of religion must at least mean that, in this country, it is no part of the business of government to compose official prayers for any group of the American people to recite as a part of a religious program carried on by government.
The court noted that it "is a matter of history that this very practice of establishing governmentally composed prayers for religious services was one of the reasons which caused many of our early colonists to leave England and seek religious freedom in America."Engel v. Vitale, 370 U.S. 421 (1962) The lone dissenter, Justice Potter Stewart, objected to the court's embrace of the "wall of separation" metaphor: "I think that the Court's task, in this as in all areas of constitutional adjudication, is not responsibly aided by the uncritical invocation of metaphors like the 'wall of separation,' a phrase nowhere to be found in the Constitution."
In Epperson v. Arkansas, 393 U.S. 97 (1968), the Supreme Court considered an Arkansas law that made it a crime "to teach the theory or doctrine that mankind ascended or descended from a lower order of animals," or "to adopt or use in any such institution a textbook that teaches" this theory in any school or university that received public funds. The court's opinion, written by Justice Abe Fortas, ruled that the Arkansas law violated "the constitutional prohibition of state laws respecting an establishment of religion or prohibiting the free exercise thereof. The overriding fact is that Arkansas' law selects from the body of knowledge a particular segment which it proscribes for the sole reason that it is deemed to conflict with a particular religious doctrine; that is, with a particular interpretation of the Book of Genesis by a particular religious group." The court held that the Establishment Clause prohibits the state from advancing any religion, and that "[T]he state has no legitimate interest in protecting any or all religions from views distasteful to them." EPPERSON v. ARKANSAS, 393 U.S. 97 (1968)
thumb|200px|right| Justice Sandra Day O'Connor
In Lemon v. Kurtzman, 403 U.S. 602 (1971), the court determined that a Pennsylvania state policy of reimbursing the salaries and related costs of teachers of secular subjects in private religious schools violated the Establishment Clause. The court's decision argued that the separation of church and state could never be absolute: "Our prior holdings do not call for total separation between church and state; total separation is not possible in an absolute sense. Some relationship between government and religious organizations is inevitable," the court wrote. "Judicial caveats against entanglement must recognize that the line of separation, far from being a "wall," is a blurred, indistinct, and variable barrier depending on all the circumstances of a particular relationship."
Subsequent to this decision, the Supreme Court has applied a three-pronged test to determine whether government action comports with the Establishment Clause, known as the "Lemon Test". First, the law or policy must have been adopted with a neutral or non-religious purpose. Second, the principal or primary effect must be one that neither advances nor inhibits religion. Third, the statute or policy must not result in an "excessive entanglement" of government with religion.Lemon v. Kurtzman, 403 U.S. 602, 612–613, 91 S.Ct. 2105, 2111, 29 L.Ed.2d 745 (1971). (The decision in Lemon v. Kurtzman hinged upon the conclusion that the government benefits were flowing disproportionately to Catholic schools, and that Catholic schools were an integral component of the Catholic Church's religious mission, thus the policy involved the state in an "excessive entanglement" with religion.) Failure to meet any of these criteria means that the statute or policy in question violates the Establishment Clause.
In 2002, a three-judge panel of the Ninth Circuit Court of Appeals held that classroom recitation of the Pledge of Allegiance in a California public school was unconstitutional, even when students were not compelled to recite it, due to the inclusion of the phrase "under God." In reaction to the case, Elk Grove Unified School District v. Newdow, both houses of Congress passed measures reaffirming their support for the pledge and condemning the panel's ruling.Senate Pledges Allegiance Under God. Fox News, Thursday, June 27, 2002 The case was appealed to the Supreme Court, which overturned the panel's ruling in June 2004, solely on procedural grounds not related to the substantive constitutional issue: a five-justice majority held that Newdow, a non-custodial parent suing on behalf of his daughter, lacked standing to sue.
When the Louisiana state legislature passed a law requiring public school biology teachers to give Creationism and Evolution equal time in the classroom, the Supreme Court, in Edwards v. Aguillard (1987), ruled that the law was unconstitutional because it was intended to advance a particular religion and did not serve the secular purpose of improved scientific education.
(See also: Creation and evolution in public education)
The display of the Ten Commandments as part of courthouse displays was considered in a group of cases decided in the summer of 2005, including McCreary County v. ACLU of Kentucky and Van Orden v. Perry. While parties on both sides hoped for a reformulation or clarification of the Lemon test, the two rulings ended in narrow and opposing 5–4 decisions, with Justice Stephen Breyer the swing vote.
On December 20, 2005, the United States Court of Appeals for the Sixth Circuit ruled in the case of ACLU v. Mercer County that the continued display of the Ten Commandments as part of a larger display on American legal traditions in a Kentucky courthouse was allowed, because the purpose of the display (educating the public on American legal traditions) was secular in nature. In ruling on the Mount Soledad cross controversy on May 3, 2006, however, a federal judge ruled that the cross on public property on Mount Soledad must be removed.Onell R. Soto, City has 90 days to remove Mt. Soledad cross, The San Diego Union-Tribune, May 4, 2006, p. A1.
In Town of Greece v. Galloway, 12-696, the Supreme Court agreed to hear a case regarding whether prayers at town meetings, which are allowed, must allow various faiths to lead prayer, or whether the prayers can be predominantly Christian.June, Daniel, "Supreme Court to Hear Case About Public Prayers" On May 5, 2014, the U.S. Supreme Court ruled 5-4 in favor of the Town of Greece, holding that the U.S. Constitution allows not only prayer at government meetings but also sectarian prayers, such as predominantly Christian ones.
The Treaty of Tripoli
In 1797, the United States Senate ratified a treaty with Tripoli that stated in Article 11:
As the Government of the United States of America is not, in any sense, founded on the Christian religion; as it has in itself no character of enmity against the laws, religion, or tranquillity, of Mussulmen; and, as the said States never entered into any war, or act of hostility against any Mahometan nation, it is declared by the parties, that no pretext arising from religious opinions, shall ever produce an interruption of the harmony existing between the two countries.See Wikipedia article: Treaty of Tripoli
Interpretive controversies
Some scholars and organizations disagree with the notion of "separation of church and state", or the way the Supreme Court has interpreted the constitutional limitation on religious establishment.Ed Whelan, This Week in Liberal Judicial Activism—Week of February 5, National Review Online. February 5, 2007 Such critics generally argue that the phrase misrepresents the textual requirements of the Constitution, while noting that many aspects of church and state were intermingled at the time the Constitution was ratified. These critics argue that the prevalent degree of separation of church and state could not have been intended by the constitutional framers. Examples of such intermingling include religious references in official contexts and in other founding documents such as the United States Declaration of Independence, which references the idea of a "Creator" and "Nature's God", though these references did not ultimately appear in the Constitution, nor do they endorse any particular religious view of a "Creator" or "Nature's God."
These critics of the modern separation of church and state also note the official establishment of religion in several of the states at the time of ratification, to suggest that the modern incorporation of the Establishment Clause as to state governments goes against the original constitutional intent. The issue is complex, however, as the incorporation ultimately rests on the passage of the 14th Amendment in 1868, after which the First Amendment's application to the state governments came to be recognized. Many of these constitutional debates relate to the competing interpretive theories of originalism versus modern, progressivist theories such as the doctrine of the Living Constitution. Other debates center on the principle of the law of the land in America being defined not just by the Constitution's Supremacy Clause, but also by legal precedent, making an accurate reading of the Constitution subject to the mores and values of a given era, and rendering the concept of historical revisionism irrelevant when discussing the Constitution.
thumb|left|Ten commandments monument at a Minnesota courthouse.
The "religious test" clause has been interpreted to cover both elected officials and appointed ones, career civil servants as well as political appointees. Religious beliefs or the lack of them have therefore not been permissible tests or qualifications with regard to federal employees since the ratification of the Constitution. Seven states, however, have language included in their Bill of Rights, Declaration of Rights, or in the body of their constitutions that require state office-holders to have particular religious beliefs, though some of these have been successfully challenged in court. These states are Texas, Massachusetts, Maryland, North Carolina, Pennsylvania, South Carolina, and Tennessee.
The required beliefs of these clauses include belief in a Supreme Being and belief in a future state of rewards and punishments. (Tennessee Constitution Article IX, Section 2 is one such example.) Some of these same states specify that the oath of office include the words "so help me God." In some cases these beliefs (or oaths) were historically required of jurors and witnesses in court. At one time, such restrictions were allowed under the doctrine of states' rights; today they are deemed to be in violation of the federal First Amendment, as applied to the states via the 14th amendment, and hence unconstitutional and unenforceable.
While sometimes questioned as possible violations of separation, the appointment of official chaplains for government functions, voluntary prayer meetings at the Department of Justice outside of duty hours, voluntary prayer at meals in U.S. armed forces, inclusion of the (optional) phrase "so help me God" in the oaths for many elected offices, FBI agents, etc., have been held not to violate the First Amendment, since they fall within the realm of free exercise of religion.
Relaxed zoning rules and special parking privileges for churches, the tax-free status of church property, the fact that Christmas is a federal holiday, etc., have also been questioned, but have been considered examples of the governmental prerogative in deciding practical and beneficial arrangements for the society. The national motto "In God We Trust" has been challenged as a violation, but the Supreme Court has ruled that ceremonial deism is not religious in nature. A circuit court ruling affirmed Ohio's right to use as its motto a passage from the Bible, "With God, all things are possible", because it displayed no preference for a particular religion.
Jeffries and Ryan (2001) argue that the modern concept of separation of church and state dates from the mid-twentieth century rulings of the Supreme Court. The central point, they argue, was a constitutional ban against aid to religious schools, followed by a later ban on religious observance in public education. Jeffries and Ryan argue that these two propositions—that public aid should not go to religious schools and that public schools should not be religious—make up the separationist position of the modern Establishment Clause.
Jeffries and Ryan argue that the no-aid position drew support from a coalition of separationist opinion. Most important was "the pervasive secularism that came to dominate American public life," which sought to confine religion to a private sphere. Further, the ban against government aid to religious schools was supported before 1970 by most Protestants (and most Jews), who opposed aid to religious schools, which were mostly Catholic at the time. After 1980, however, anti-Catholic sentiment diminished among mainline Protestants, and the crucial coalition of public secularists and Protestant churches collapsed. While mainline Protestant denominations are more inclined towards strict separation of church and state, much evangelical opinion has now largely deserted that position. As a consequence, strict separationism is opposed today by members of many Protestant faiths, perhaps even eclipsing the opposition of Roman Catholics.
Critics of the modern concept of the "separation of church and state" argue that it is untethered to anything in the text of the constitution and is contrary to the conception of the phrase as the Founding Fathers understood it. Philip Hamburger, Columbia Law School professor and prominent critic of the modern understanding of the concept, maintains that the modern concept, which deviates from the constitutional establishment clause jurisprudence, is rooted in American anti-Catholicism and Nativism. Briefs before the Supreme Court, including by the U.S. government, have argued that some state constitutional amendments relating to the modern conception of separation of church and state (Blaine Amendments) were motivated by and intended to enact anti-Catholicism.LOCKE V. DAVEY 540 U.S. 712 (2004)
J. Brent Walker, Executive Director of the Baptist Joint Committee, responded to Hamburger's claims, noting: "The fact that the separation of church and state has been supported by some who exhibited an anti-Catholic animus or a secularist bent does not impugn the validity of the principle. Champions of religious liberty have argued for the separation of church and state for reasons having nothing to do with anti-Catholicism or desire for a secular culture. Of course, separationists have opposed the Catholic Church when it has sought to tap into the public till to support its parochial schools or to argue for on-campus released time in the public schools. But that principled debate on the issues does not support a charge of religious bigotry."Book Review: Separation of Church and State
Steven Waldman notes: "The evangelicals provided the political muscle for the efforts of Madison and Jefferson, not merely because they wanted to block official churches but because they wanted to keep the spiritual and secular worlds apart." "Religious freedom resulted from an alliance of unlikely partners," writes the historian Frank Lambert in his book The Founding Fathers and the Place of Religion in America. "New Light evangelicals such as Isaac Backus and John Leland joined forces with Deists and skeptics such as James Madison and Thomas Jefferson to fight for a complete separation of church and state."The Framers and the Faithful: How modern evangelicals are ignoring their own history. By Steven Waldman
Politics and religion in the United States
Robert N. Bellah has argued in his writings that although the separation of church and state is grounded firmly in the constitution of the United States, this does not mean that there is no religious dimension in the political society of the United States. He used the term "Civil Religion" to describe the specific relation between politics and religion in the United States. His 1967 article analyzes the inaugural speech of John F. Kennedy: "Considering the separation of church and state, how is a president justified in using the word 'God' at all? The answer is that the separation of church and state has not denied the political realm a religious dimension." From the issue entitled Religion in America.
Robert S. Wood has argued that the United States is a model for the world in terms of how a separation of church and state—no state-run or state-established church—is good for both the church and the state, allowing a variety of religions to flourish. Speaking at the Toronto-based Center for New Religions, Wood said that the freedom of conscience and assembly allowed under such a system has led to a "remarkable religiosity" in the United States that isn't present in other industrialized nations. Wood believes that the U.S. operates on "a sort of civic religion," which includes a generally shared belief in a creator who "expects better of us." Beyond that, individuals are free to decide how they want to believe and fill in their own creeds and express their conscience. He calls this approach the "genius of religious sentiment in the United States."
See also
Americans United for Separation of Church and State
American Civil Liberties Union
American Humanist Association
Anti-clericalism
Ban on Sharia law
Ceremonial deism
Christian amendment
Christian Left
Christian Right
Freedom From Religion Foundation
Freedom of religion in the United States
Interfaith Alliance
Mount Soledad cross controversy
Pledge of Allegiance
Criticism of the Pledge of Allegiance
Public menorah
Separation of church and state
Sharia
State religion
United States religious history
References
Bibliography
Barry McGowan, How to Separate Church & State: A Manual from the Trenches Hufton Mueller, LLC, 2012 ISBN 978-0-615-63802-7
Philip Hamburger, Separation of Church and State Harvard University Press, 2002. ISBN 0-674-00734-4 OCLC: 48958015
Marci A. Hamilton, God vs. the Gavel: Religion and the Rule of Law, Cambridge University Press, 2005, ISBN 0-521-85304-4
Mark DeWolfe Howe. The Garden and the Wilderness: Religion and Government in American Constitutional History (U. of Chicago Press, 1965)
Daniel L. Dreisbach. Thomas Jefferson and the Wall of Separation Between Church and State (New York University Press, 2003)
Daniel L. Dreisbach and Mark David Hall. The Sacred Rights of Conscience: Selected Readings on Religious Liberty and Church-State Relations in the American Founding (Indianapolis: Liberty Fund Press, 2009)
Daniel L. Dreisbach, Mark David Hall, and Jeffry Morrison. The Forgotten Founders on Religion and Public Life (Notre Dame: University of Notre Dame Press, 2009)
John C. Jeffries, Jr. and James E. Ryan, "A Political History of the Establishment Clause," 100 Michigan Law Rev. (2001) online version
Mark David Hall, "Jeffersonian Walls and Madisonian Lines: The Supreme Court's Use of History in Religion Clause Cases," 85 Oregon Law Review (2006), 563-614. http://www.law.uoregon.edu/org/olr/archives/85/852hall.pdf
Isaac Kramnick and R. Laurence Moore, The Godless Constitution: The Case Against Religious Correctness (Norton, 1996)
Philip B. Kurland, ed., Church and State: The Supreme Court and the First Amendment (U. of Chicago Press, 1975)
Adam M. Samaha, "Separation of Church and State." Constitutional Commentary. 19#3 2002. pp 713+. online version
Anson P. Stokes and Leo Pfeffer, Church and State in the United States (reprint, 1964)
Kyle G. Volk, Moral Minorities and the Making of American Democracy (Oxford University Press, 2014)
External links
American court battles over separation
1947, first case concerning separation of church and state; supporting busing for children to private religious schools and declaring that states were required to provide the same guarantees of religious freedom as the federal government
1948, banning religious instruction in public schools
1952, allowing religious instruction off school property during regular school hours
1962, banning teacher-led prayer from public schools
1963, banning Bible-reading and the recital of the Lord's Prayer in public schools
1973, allowing state funding for textbooks and teachers' salaries in religious schools; creating the Lemon test
1987, declared the Creation Act invalid, which had mandated the teaching of Creation if Evolution was taught
1989, banning religious displays depicting only one religion
1992, banning prayers given by clergy as a part of an official public school graduation ceremony.
Other
Christian Science Monitor analysis of George Washington's letter and its implications
"The Intellectual Origins of the Establishment Clause" by Noah Feldman, Asst. Professor of Law, New York University, 2002.
Robert Struble, Jr., [http://www.tell-usa.org/totl/ Treatise on Twelve Lights: To Restore America the Beautiful under God and the Written Constitution], 2007–08 edition.
Baptist Joint Committee for Religious Liberty
Separation of Church and State
Misunderstanding Jefferson's "wall of separation" metaphor
'A Wall of Separation': FBI Helps Restore Jefferson's Obliterated Draft, Library of Congress information Bulletin, June 1998 – Vol. 57, No. 6, by James H. Hutson, Chief, Manuscript Division, Library of Congress.
Category:History of religion in the United States
Nonprofit organization
A nonprofit organization (NPO) (also known as a non-business entity) is an organization whose purpose is something other than making a profit.http://beta.merriam-webster.com/dictionary/nonprofit Merriam-Webster Dictionary A nonprofit organization is often dedicated to furthering a particular social cause or advocating for a particular point of view. In economic terms, a nonprofit organization uses its surplus revenues to further achieve its purpose or mission, rather than distributing its surplus income to the organization's shareholders (or equivalents) as profit or dividends. This is known as the non-distribution constraint.Hansmann, R. B. (1980). The role of nonprofit enterprise. Yale law journal, 835-901. The decision to adopt a nonprofit legal structure is one that will often have taxation implications, particularly where the nonprofit seeks income tax exemption, charitable status and so on.
The terms nonprofit and not-for-profit are not consistently differentiated across jurisdictions. In layman's terms they are usually equivalent in concept, although in various jurisdictions there are accounting and legal differences.
The nonprofit landscape is highly varied, although many people have come to associate NPOs with charitable organizations. Although charities do make up an often high-profile or visible aspect of the sector, there are many other types of nonprofit organization. Overall, they tend to be either member-serving or community-serving. Member-serving organizations include mutual societies, cooperatives, trade unions, credit unions, industry associations, sports clubs, retired servicemen's clubs and peak bodies – organizations that benefit a particular group of people, i.e. the members of the organization. Typically, community-serving organizations are focused on providing services to the community in general, either globally or locally: organizations delivering human services programs or projects, aid and development programs, medical research, education and health services, and so on. It could be argued that many nonprofits sit across both camps, at least in terms of the impact they make.Lyons, Mark. Third Sector: The contribution of nonprofit and cooperative enterprises in Australia. Allen & Unwin, 2001. For example, the grassroots support group that provides a lifeline to those with a particular condition or disease could be deemed to be serving its members (by directly supporting them) and the broader community (through the provision of a service for fellow citizens).
Many NPOs use the model of a double bottom line in that furthering their cause is more important than making a profit, though both are needed to ensure the organization's sustainability.The Nonprofit Handbook: Everything You Need to Know to Start and Run Your Nonprofit Organization (Paperback), Gary M. Grobman, White Hat Communications, 2008.
Although NPOs are permitted to generate surplus revenues, they must be retained by the organization for its self-preservation, expansion, or plans. NPOs have controlling members or a board of directors. Many have paid staff including management, whereas others employ unpaid volunteers and executives who work with or without compensation (occasionally nominal).Drucker, Peter (1989). "What Business Can Learn from Nonprofits". Harvard Business Review: 1–7. In some countries, where a token fee is paid, it is generally used to meet legal requirements for establishing a contract between the executive and the organization.
Designation as a nonprofit does not mean that the organization does not intend to make a profit, but rather that the organization has no 'owners' and that the funds realized in the operation of the organization will not be used to benefit any owners. The extent to which an NPO can generate surplus revenues may be constrained or use of surplus revenues may be restricted.
Objectives and goals
Some NPOs may also be a charity or service organization; they may be organized as a not-for-profit corporation or as a trust, a cooperative, or they may exist informally. A very similar type of organization termed a supporting organization operates like a foundation, but it is more complicated to administer, holds a more favorable tax status and is restricted in the public charities it supports. The goal is not to be successful in terms of wealth, but in terms of giving value to the groups of people the organization serves.
Functions
NPOs have a wide diversity of structures and purposes. For legal classification, there are, nevertheless, some elements of importance:
Management provisions
Accountability and auditing provisions
Provisions for the amendment of the statutes or articles of incorporation
Provisions for the dissolution of the entity
Tax statuses of corporate and private donors
Tax status of the founders.
Some of the above must be (in most jurisdictions in the USA at least) expressed in the organization's charter of establishment or constitution. Others may be provided by the supervising authority at each particular jurisdiction.
While affiliations will not affect a legal status, they may be taken into consideration by legal proceedings as an indication of purpose.
Most countries have laws that regulate the establishment and management of NPOs and that require compliance with corporate governance regimes. Most larger organizations are required to publish their financial reports detailing their income and expenditure publicly.
In many aspects they are similar to corporate business entities though there are often significant differences. Both not-for-profit and for-profit corporate entities must have board members, steering-committee members, or trustees who owe the organization a fiduciary duty of loyalty and trust. A notable exception to this involves churches, which are often not required to disclose finances to anyone, including church members.
Formation and structure
In the United States, nonprofit organizations are formed by filing bylaws or articles of incorporation or both in the state in which they expect to operate. The act of incorporation creates a legal entity enabling the organization to be treated as a distinct body (corporation) by law and to enter into business dealings, form contracts, and own property as individuals or for-profit corporations can.
Nonprofits can have members, but many do not. The nonprofit may also be a trust or association of members. The organization may be controlled by its members who elect the Board of Directors, Board of Governors or Board of Trustees. A nonprofit may have a delegate structure to allow for the representation of groups or corporations as members. Alternatively, it may be a non-membership organization and the board of directors may elect its own successors.
The two major types of nonprofit organization are membership and board-only. A membership organization elects the board and has regular meetings and the power to amend the bylaws. A board-only organization typically has a self-selected board and a membership whose powers are limited to those delegated to it by the board. A board-only organization's bylaws may even state that the organization does not have any membership, although the organization's literature may refer to its donors or service recipients as 'members'; examples of such organizations are FairvoteFairVote - Board of Directors.FairVote - FAQs. and the National Organization for the Reform of Marijuana Laws.NORML Board of Directors - NORML. The Model Nonprofit Corporation Act imposes many complexities and requirements on membership decision-making. Accordingly, many organizations, such as Wikimedia, have formed board-only structures. The National Association of Parliamentarians has generated concerns about the implications of this trend for the future of openness, accountability, and understanding of public concerns in nonprofit organizations. Specifically, they note that nonprofit organizations, unlike business corporations, are not subject to market discipline for products and shareholder discipline of their capital; therefore, without membership control of major decisions such as election of the board, there are few inherent safeguards against abuse.Charity on Trial: What You Need to Know Before You Give / Doug White (2007) ISBN 1-56980-301-3. A rebuttal to this might be that as nonprofit organizations grow and seek larger donations, the degree of scrutiny increases, including expectations of audited financial statements.SSRN-Voluntary Disclosure in Nonprofit Organizations: an Exploratory Study by Bruce Behn, Delwyn DeVries, Jing Lin. A further rebuttal might be that NPOs are constrained, by their choice of legal structure, from financial benefit as far as distribution of profit to members and directors is concerned.
Tax exemption
In many countries, nonprofits may apply for tax exempt status, so that the organization itself may be exempt from income tax and other taxes. In the United States, to be exempt from federal income taxes, the organization must meet the requirements set forth by the Internal Revenue Service.
Australia
In Australia, nonprofit organizations include trade unions, charitable entities, co-operatives, universities and hospitals, mutual societies, grass-root and support groups, political parties, religious groups, incorporated associations, not-for-profit companies, trusts and more. Furthermore, they operate across a multitude of domains and industries, from health, employment, disability and other human services to local sporting clubs, credit unions and research institutes.Lyons, M. 2001, 'Third Sector', Allen & Unwin, Crows Nest. A nonprofit organization in Australia can choose from a number of legal forms depending on the needs and activities of the organization: co-operative, company limited by guarantee, unincorporated association, incorporated association (by the Associations Incorporation Act 1985) or incorporated association or council (by the Commonwealth Aboriginal Councils and Associations Act 1976). From an academic perspective, social enterprise is for the most part considered a sub-set of the nonprofit sector as typically they too are concerned with a purpose relating to a public good. However, these are not bound to adhere to a nonprofit legal structure, and many incorporate and operate as for-profit entities.
In Australia, nonprofit organizations are primarily established in one of three ways: companies limited by guarantee, trusts and incorporated associations. However, the incorporated association form is typically used by organizations intending to operate only within one Australian state jurisdiction. Nonprofit organizations seeking to establish a presence across Australia typically consider incorporating as a company or as a trust.David Ford, Emil Ford Lawyers, Guide to Establishing Non-Profit Organisations in Australia (Advocates for International Development, August 2012). http://a4id.org/sites/default/files/user/%5BA4ID%5D%20AustraliaNon-ProfitLawLegalGuide.pdf
Belgium
By Belgian law, there are several kinds of nonprofit organization:
Vereniging zonder winstoogmerk (Dutch, abbreviated vzw), Vereinigung ohne Gewinnerzielungsabsicht (German) or Association sans but lucratif (French, abbreviated asbl).
Internationale vereniging zonder winstoogmerk (Dutch, often abbreviated ivzw) or Association internationale sans but lucratif (French, often abbreviated aisbl) for international nonprofit organizations.
Stichting van openbaar nut (Dutch, abbreviated son) or Fondation d'utilité publique (French, abbreviated fup).
These three kinds of nonprofit organization are in contrast to a fourth:
Feitelijke vereniging (Dutch) or Association de fait (French), an informal organization, often started for a short-term project or managed alongside another NPO, that does not have any status in law and so cannot purchase property etc. (association sans personnalité morale).
Canada
Canada allows nonprofit organizations to be incorporated or unincorporated. They may incorporate either federally, under Part II of the Canada Business Corporations Act, or under provincial legislation. Many of the governing Acts for Canadian nonprofits date to the early 1900s, meaning that nonprofit legislation has not kept pace with legislation that governs for-profit corporations, particularly with regards to corporate governance. Federal, and in some provinces (including Ontario), incorporation is by way of Letters Patent, and any change to the Letters Patent (even a simple name change) requires formal approval by the appropriate government, as do bylaw changes. Other provinces (including Alberta) permit incorporation as of right, by the filing of Articles of Incorporation or Articles of Association.
During 2009, the federal government enacted new legislation repealing the Canada Corporations Act, Part II - the Canada Not-for-Profit Corporations Act. This Act was last amended on 10 October 2011, and the act was current until 4 March 2013. It allows for incorporation as of right, by Articles of Incorporation; does away with the ultra vires doctrine for nonprofits; establishes them as legal persons; and substantially updates the governance provisions for nonprofits. Ontario also overhauled its legislation, adopting the Ontario Not-for-Profit Corporations Act during 2010; pending the outcome of an anticipated election during October 2011, the new Act is expected to be in effect as of 1 July 2013.
Canada also permits a variety of charities (including public and private foundations). Charitable status is granted by the Canada Revenue Agency (CRA) upon application by a nonprofit; charities are allowed to issue income tax receipts to donors, must spend a certain percentage of their assets (including cash, investments and fixed assets) and file annual reports in order to maintain their charitable status. In determining whether an organization can become a charity, CRA applies a common law test to its stated objects and activities. These must be:
The relief of poverty
The advancement of education
The advancement of religion, or
Certain other purposes that benefit the community in a way the courts have said is charitable
Charities are not permitted to engage in partisan political activity; doing so may result in the revocation of charitable status. However, a charity can carry out a small number of political activities that are non-partisan, help further the charities' purposes, and subordinate to the charity's charitable purposes.
France
In France, nonprofits are called associations. They are based on a law enacted 1 July 1901. As a consequence, the nonprofits are also called association loi 1901.
A nonprofit can be created by two people to accomplish a common goal. The association can have industrial or commercial activities or both, but the members cannot make any profit from the activities. Thus, workers' unions and political parties can also be organized under this law.
In 2008, the National Institute of Statistics and Economic Studies (INSEE) counted more than a million of these associations in the country, and about 16 million people older than 16 are members of a nonprofit in France (a third of the population over 16 years old). The nonprofits employ 1.6 million people, and 8 million are volunteers for them.Vie associative : 16 millions d’adhérents en 2008, INSEE, publication December 2010
This law is also relevant in many former French colonies, particularly in Africa.
Hong Kong
The Hong Kong Company Registry provides a memorandum of procedure for applying to the Registrar of Companies for a Licence under Section 21 of the Companies Ordinance (Cap.32) for a limited company for the purpose of promoting commerce, art, science, religion, charity, or any other useful object.http://www.hkicpa.org.hk/APLUS/0704/44.pdf
India
In India, non-governmental organizations (NGOs) are the most common type of societal institutions that do not have commercial interests. However, they are not the only category of non-commercial organizations that can gain official recognition. For example, memorial trusts, which honour renowned individuals through social work, may not be considered as NGOs.
They can be registered in several ways:
Trust
Society
Section-25 company (Section 8 as per the new Companies Act, 2013)
Special licensing
Schools
Sports.
Registration can be with either the Registrar of Companies (RoC) or the Registrar of Societies (RoS).
The following laws or Constitutional Articles of the Republic of India are relevant to the NGOs:
Articles 19(1)(c) and 30 of the Constitution of India
Income Tax Act, 1961
Public Trusts Acts of various states
Societies Registration Act, 1860
Section 25 of the Indian Companies Act, 1956 (Section 8 as per the new Companies Act, 2013)
Foreign Contribution (Regulation) Act, 1976.
Ireland
The Irish Nonprofits Database was created by Irish Nonprofits Knowledge Exchange (INKEx) to act as a repository for regulatory and voluntarily disclosed information about Irish public-benefit nonprofits. The database lists more than 10,000 nonprofit organizations in Ireland. INKEx is currently looking for government funding to continue to provide the service and maintain the accuracy of the database.
Japan
In Japan, an NPO is any citizen's group that serves the public interest and does not produce a profit for its members. NPOs are given corporate status to assist them in conducting business transactions. As at February 2011, there were 41,600 NPOs in Japan. Two hundred NPOs were given tax-deductible status by the government, which meant that only contributions to those organizations were tax deductible for the contributors.Kamiya, Setsuko, "NPO tax status threatened by Diet split", Japan Times, 22 February 2011, p. 3.
Russia
Russian law contains many legal forms of non-commercial organization (NCO), resulting in a complex, often contradictory, and limiting regulatory framework. The primary requirements are that NCOs, whatever their type, do not have the generation of profit as their main objective and do not distribute any such profit among their participants (Article 50(1), Civil Code). Most commonly there are five forms of NCO:
Public associations - A public association is the form most comparable to an 'association' as used in international parlance. A public association is a membership-based organization of individuals who associate on the basis of common interests and goals stipulated in the organization's charter.
Foundations - Foundations are property-based, non-membership organizations created by individuals or legal persons (or both) to pursue social, charitable, cultural, educational, or other public benefit goals.
Institutions - The institution (uchrezhdeniye) is a form that exists in Russia and several other countries of the former Soviet Union. Like foundations, institutions do not have members. Unlike foundations, however, institutions do not acquire property rights in the property conveyed to them (Article 120, Civil Code, and Article 20, NCO Law). Moreover, the founders are liable for any obligations of the institution that it cannot meet on its own.
Non-commercial partnerships - A non-commercial partnership (NP) (Article 8, NCO Law) is a membership organization pursuing activities for the mutual benefit of members. Therefore, assets that have been transferred to an NP as donations can be used for purposes other than those having public benefit.
Autonomous non-commercial organizations - An autonomous non-commercial organization (ANO) (Article 10, NCO Law) is a non-membership organization undertaking services in the field of education, social policy, culture, etc., which in practice often generates income by providing its services for a fee.
South Africa
In South Africa, certain types of charity may issue a tax certificate when requested, which donors can use to apply for a tax deduction. Charities/NGOs may be established as voluntary associations, trusts or nonprofit companies (NPCs). Voluntary associations are established by agreement under the common law, and trusts are registered by the Master of the High Court.
Non-profit companies (NPCs) are registered by the Companies and Intellectual Property Commission.http://www.gov.za/node/727559 All of these may voluntarily register with The Directorate for Nonprofit Organisations and may apply for tax-exempt status to the South African Revenue Service (SARS).
United Kingdom
In the UK, many nonprofit companies are incorporated as a company limited by guarantee. This means that the company does not have shares or shareholders, but it has the benefits of corporate status. This includes limited liability for its members and being able to enter into contracts and purchase property in its own name. The profits of the company (also referred to as the trading surplus) must be invested in achieving these goals and not distributed to the company's members.
Since 2005, under the Companies (Audit, Investigations and Community Enterprise) Act 2004, nonprofit companies may be formed as a Community Interest Company (CIC). These are forms of company limited by guarantee or company limited by shares but with special conditions, and are intended specifically to ensure that the profits and assets of the company are used for the public good, even when managed for (limited) profit.
A charity is a nonprofit organization that meets stricter criteria regarding its purpose and the method in which it makes decisions and reports its finances."How to Start a Charity in the UK", Capterra. For example, a charity is generally not allowed to pay its trustees. In England and Wales, charities may be registered with the Charity Commission.Charity Commissioners information page. In Scotland, the Office of the Scottish Charity Regulator serves the same function. Other organizations that are classified as nonprofit organizations elsewhere, such as trade unions, are subject to separate regulations and are not regarded as 'charities' in the technical sense.
United States
For a United States analysis of this issue, see 501(c) and Charitable organization (United States).
After a nonprofit organization has been formed at the state level, the organization may seek recognition of tax-exempt status with respect to U.S. federal income tax. That is done typically by applying to the Internal Revenue Service (IRS), although statutory exemptions exist for limited types of nonprofit organization. The IRS, after reviewing the application to ensure the organization meets the conditions to be recognized as a tax-exempt organization (such as the purpose, limitations on spending, and internal safeguards for a charity), may issue an authorization letter to the nonprofit granting it tax-exempt status for income-tax payment, filing, and deductibility purposes. The exemption does not apply to other federal taxes such as employment taxes. Additionally, a tax-exempt organization must pay federal tax on income that is unrelated to its exempt purpose. Failure to maintain operations in conformity with the laws may result in the loss of tax-exempt status.
Individual states and localities offer nonprofits exemptions from other taxes such as sales tax or property tax. Federal tax-exempt status does not guarantee exemption from state and local taxes, and vice versa. These exemptions generally have separate applications, and their requirements may differ from the IRS requirements. Furthermore, even a tax-exempt organization may be required to file annual financial reports (IRS Form 990) at the state and federal levels. A tax-exempt organization's 990 forms are required to be available for public scrutiny. An example of a nonprofit organization in the US is Project Vote Smart.
Governance
The board of directors has ultimate control over the organization, but typically an executive director is hired. In some cases, the board is elected by a membership, but commonly, the board of directors is self-perpetuating. In these 'board-only' organizations, board members nominate new members and vote on their fellow directors' nominations.Dent, George W., Corporate Governance Without Shareholders: A Cautionary Lesson from Non-Profit Organizations (2014). Delaware Journal of Corporate Law (DJCL), Vol. 39, No. 1, 2014; Case Legal Studies Research Paper No. 2014-34. Available at SSRN Part VI, section A, question 7a of Form 990 asks whether the organization had 'members, stockholders, or other persons who had the power to elect or appoint one or more members of the governing body'.
Accreditation
A nonprofit organization in the United States can receive an accreditation by undergoing a third-party review from the Standards for Excellence Institute to ensure efficient use of resources.
Founder's syndrome
Founder's syndrome is an issue organizations face as they grow. Dynamic founders, who have a strong vision of how to operate the project, try to retain control of the organization, even as new employees or volunteers want to expand the project's scope or change policy.
Resource mismanagement
Resource mismanagement is a particular problem with NPOs because the employees are not accountable to anybody who has a direct stake in the organization. For example, an employee may start a new program without disclosing its complete liabilities. The employee may be rewarded for improving the NPO's reputation, making other employees happy, and attracting new donors. Liabilities promised on the full faith and credit of the organization but not recorded anywhere constitute accounting fraud. But even indirect liabilities negatively affect the financial sustainability of the NPO, and the NPO will have financial problems unless strict controls are instituted. Some commentators have argued that the receipt of significant funding from large for-profit corporations can ultimately alter the NPO's functions.
Competition for talent
Competition with the public and private sectors for employees is another problem that nonprofit organizations inevitably face, particularly for management positions. There are reports of major talent shortages in the nonprofit sector today regarding newly graduated workers, and NPOs have for too long treated hiring as a secondary priority,Maw, L. Winning the Talent Game http://www.ssireview.org/blog/entry/winning_the_talent_game which could be why many find themselves in this position. While many established NPOs are well-funded and comparable to their public-sector competitors, many more are independent and must be creative with the incentives they use to attract and retain capable staff. The initial draw for many is the remuneration package, though many who have been questioned after leaving an NPO have reported that stressful work environments and relentless workloads drove them away.Becchetti, Castriota, & Depedri. Working in the For-Profit versus Not-For-Profit Sector: What Difference Does it Make? http://icc.oxfordjournals.org/content/early/2013/11/28/icc.dtt044
The public and private sectors have, for the most part, been able to offer more to their employees than most nonprofit agencies throughout history. Whether in the form of higher wages, more comprehensive benefit packages, or less tedious work, the public and private sectors have enjoyed an advantage over NPOs in attracting employees. Traditionally, the NPO has attracted mission-driven individuals who want to assist their chosen cause. Compounding the issue is that some NPOs do not operate in a manner similar to most businesses, or operate only seasonally. This leads many young and driven employees to forgo NPOs in favor of more stable employment. Today, however, nonprofit organizations are adopting methods used by their competitors and finding new means to retain their employees and attract the best of the newly minted workforce.Cohen, R. Nonprofit Salaries: Achieving Parity with the Private Sector https://nonprofitquarterly.org/management/5506-nonprofit-salaries-achieving-parity-with-the-private-sector.html
It has been suggested that most nonprofits will never be able to match the pay of the private sectorCoffman, S. Nonprofits Can Compete with Employee Benefits http://www.bizjournals.com/columbus/stories/2002/12/23/focus4.html?page=all and should therefore focus their attention on benefits packages, incentives and pleasant work environments. A good work environment is ranked higher than salary and pressure of work. NPOs are encouraged to pay as much as they are able and to offer a low-stress work environment with which employees can positively identify. Other suggested incentives include generous vacation allowances and flexible work hours.Fox, T. How to Compete with the Private Sector for Young Workers. http://www.washingtonpost.com/blogs/on-leadership/wp/2014/03/18/how-to-compete-with-the-private-sector-for-young-workers/
Examples
thumb|Front building of the Bill & Melinda Gates Foundation in Seattle
In the United States, two of the wealthiest nonprofit organizations are the Bill and Melinda Gates Foundation, which has an endowment of US$38 billion, and the Howard Hughes Medical Institute, originally funded by Hughes AircraftHughes after Howard / D.Kenneth Richardson (2011) ISBN 978-0-9708050-8-9. prior to divestiture, which has an endowment of approximately $14.8 billion. Outside the United States, another large NPO is the British Wellcome Trust, which is a 'charity' by British usage. See: List of wealthiest foundations. Note that this assessment excludes universities, at least a few of which have assets in the tens of billions of dollars. For example, see the List of U.S. colleges and universities by endowment.
Measuring an NPO by its monetary size has obvious limitations, as the power and significance of NPOs are defined by more qualitative measurements such as effectiveness at performing charitable missions.
Some NPOs that are particularly well known, often for the charitable or social nature of their activities performed over a long period, include Amnesty International, Oxfam, Rotary International, Kiwanis International, Carnegie Corporation of New York, Nourishing USA, DEMIRA Deutsche Minenräumer (German Mine Clearers), FIDH International Federation for Human Rights, Goodwill Industries, United Way, ACORN (now defunct), Habitat for Humanity, Teach For America, the Red Cross and Red Crescent organizations, UNESCO, IEEE, INCOSE, World Wide Fund for Nature, Heifer International, Translators Without Borders and SOS Children's Villages.
However, there are also millions of smaller NPOs that provide social services and relief efforts to people throughout the world. There are more than 1.6 million NPOs in the United States alone, and millions more informal, community-based entities, which Frumkin and others consider part of the nonprofit sector.
There are also examples, for instance in Ireland, of NGO umbrella organizations bringing about a degree of self-regulation in the NGO sector.
Other information
Many NPOs use the .org or .us (or the ccTLD of their respective country) or .edu top-level domain (TLD) when selecting a domain name to differentiate themselves from more commercial entities, which typically use the .com space.
In the traditional domain noted in RFC 1591, .org is for 'organizations that didn't fit anywhere else' in the naming system, which implies that it is the proper category for non-commercial organizations if they are not governmental, educational, or one of the other types with a specific TLD. It is not designated specifically for charitable organizations or any specific organizational or tax-law status, however; it encompasses anything that is not classifiable as another category. Currently, no restrictions are enforced on registration of .com or .org, so one can find organizations of all sorts in either of these domains, as well as other top-level domains including newer, more specific ones which may apply to particular sorts of organization including .museum for museums and .coop for cooperatives. Organizations might also register by the appropriate country code top-level domain for their country.
Alternative terminology
Instead of being defined by 'non' words, some organizations are suggesting new, positive-sounding terminology to describe the sector. The term 'civil society organization' (CSO) has been used by a growing number of organizations, including the Center for the Study of Global Governance.:Glasius, Marlies, Mary Kaldor and Helmut Anheier (eds.) "Global Civil Society 2006/7". London: Sage, 2005. The term 'citizen sector organization' (CSO) has also been advocated to describe the sector – as one of citizens, for citizens – by organizations including Ashoka: Innovators for the Public.Drayton, W: "Words Matter". Alliance Magazine, Vol. 12/No.2, June 2007. Advocates argue that these terms describe the sector in its own terms, without relying on terminology used for the government or business sectors. However, use of terminology by a nonprofit of self-descriptive language that is not legally compliant risks confusing the public about nonprofit abilities, capabilities and limitations.Alvarado, Elliott I.: "Nonprofit or Not-for-profit -- Which Are You?", page 6-7. Nonprofit World, Volume 18, Number 6, November/December 2000.
In some Spanish-language jurisdictions, nonprofit organizations are called 'civil associations'.
See also
Association without lucrative purpose
Community Organizations
Fundraising
Master of Nonprofit Organizations
Mutual organization
Non-commercial
Non-governmental organization (NGO)
Non-profit organizations and access to public information
Non-profit sector
Nonprofit technology
Occupational safety and health
Social economy
Supporting organization (charity)
United States non-profit laws
:Category:Nonprofit organizations
References
Further reading
Snyder, Gary R., Nonprofits: On the Brink : How Nonprofits have lost their way and some essentials to bring them back, 2006.
P. Hartigan, 2006, 'It's about people, not profits', Business Strategy Review, Winter 2006
External links
Aid for Change
Our Community resources for the Australian non-profit sector
Nonprofit Resources resources for the nonprofit sector
Nonprofits & Philanthropy Research at IssueLab
The Benefits of Nonprofit Organizations
Category:Types of organization
Category:Trade unions
Category:Television terminology
Category:Organizations by legal status
Category:Social economy
Philosophy of space and time | Philosophy of space and time is the branch of philosophy concerned with the issues surrounding the ontology, epistemology, and character of space and time. While such ideas have been central to philosophy from its inception, the philosophy of space and time was both an inspiration for and a central aspect of early analytic philosophy. The subject focuses on a number of basic issues, including whether or not time and space exist independently of the mind, whether they exist independently of one another, what accounts for time's apparently unidirectional flow, whether times other than the present moment exist, and questions about the nature of identity (particularly the nature of identity over time).
Ancient and medieval views
The earliest recorded Western philosophy of time was expounded by the ancient Egyptian thinker Ptahhotep (c. 2650–2600 BC), who said, "Do not lessen the time of following desire, for the wasting of time is an abomination to the spirit." The Vedas, the earliest texts on Indian philosophy and Hindu philosophy, dating back to the late 2nd millennium BC, describe ancient Hindu cosmology, in which the universe goes through repeated cycles of creation, destruction, and rebirth, with each cycle lasting 4,320,000 years. Ancient Greek philosophers, including Parmenides and Heraclitus, wrote essays on the nature of time.Dagobert Runes, Dictionary of Philosophy, p. 318
Incas regarded space and time as a single concept, named pacha (, ).Atuq Eusebio Manga Qespi, Instituto de lingüística y Cultura Amerindia de la Universidad de Valencia. Pacha: un concepto andino de espacio y tiempo. Revísta española de Antropología Americana, 24, pp. 155–189. Edit. Complutense, Madrid. 1994Stephen Hart, Peruvian Cultural Studies:Work in ProgressPaul Richard Steele, Catherine J. Allen, Handbook of Inca mythology, p. 86, (ISBN 1-57607-354-8)
Plato, in the Timaeus, identified time with the period of motion of the heavenly bodies, and space as that in which things come to be. Aristotle, in Book IV of his Physics, defined time as the number of changes with respect to before and after, and the place of an object as the innermost motionless boundary of that which surrounds it.
In Book 11 of St. Augustine's Confessions, he ruminates on the nature of time, asking, "What then is time? If no one asks me, I know: if I wish to explain it to one that asketh, I know not." He goes on to comment on the difficulty of thinking about time, pointing out the inaccuracy of common speech: "For but few things are there of which we speak properly; of most things we speak improperly, still the things intended are understood." St. Augustine, Confessions, Book 11. http://www.sacred-texts.com/chr/augconf/aug11.htm (Accessed 19/5/14). But Augustine presented the first philosophical argument for the reality of Creation (against Aristotle) in the context of his discussion of time, saying that knowledge of time depends on the knowledge of the movement of things, and therefore time cannot be where there are no creatures to measure its passing (Confessions Book XI ¶30; City of God Book XI ch.6).
In contrast to ancient Greek philosophers who believed that the universe had an infinite past with no beginning, medieval philosophers and theologians developed the concept of the universe having a finite past with a beginning, now known as Temporal finitism. The Christian philosopher John Philoponus presented early arguments, adopted by later Christian philosophers and theologians of the form "argument from the impossibility of the existence of an actual infinite", which states:
"An actual infinite cannot exist."
"An infinite temporal regress of events is an actual infinite."
"∴ An infinite temporal regress of events cannot exist."
In the early 11th century, the Muslim physicist Ibn al-Haytham (Alhacen or Alhazen) discussed space perception and its epistemological implications in his Book of Optics (1021); he also rejected Aristotle's definition of topos (Physics IV) by way of geometric demonstrations and defined place as a mathematical spatial extension.Nader El-Bizri, 'In Defence of the Sovereignty of Philosophy: al-Baghdadi's Critique of Ibn al-Haytham's Geometrisation of Place', Arabic Sciences and Philosophy 17 (2007), 57–80 His experimental proof of the intro-mission model of vision led to changes in the understanding of the visual perception of space, contrary to the previous emission theory of vision supported by Euclid and Ptolemy. In "tying the visual perception of space to prior bodily experience, Alhacen unequivocally rejected the intuitiveness of spatial perception and, therefore, the autonomy of vision. Without tangible notions of distance and size for correlation, sight can tell us next to nothing about such things."
Realism and anti-realism
A traditional realist position in ontology is that time and space have existence apart from the human mind. Idealists, by contrast, deny or doubt the existence of objects independent of the mind. Some anti-realists, whose ontological position is that objects outside the mind do exist, nevertheless doubt the independent existence of time and space.
In 1781, Immanuel Kant published the Critique of Pure Reason, one of the most influential works in the history of the philosophy of space and time. He describes time as an a priori notion that, together with other a priori notions such as space, allows us to comprehend sense experience. Kant denies that either space or time is a substance, an entity in itself, or learned by experience; he holds, rather, that both are elements of a systematic framework we use to structure our experience. Spatial measurements are used to quantify how far apart objects are, and temporal measurements are used to quantitatively compare the interval between (or duration of) events. Although space and time are held to be transcendentally ideal in this sense, they are also empirically real—that is, not mere illusions.
Idealist writers, such as J. M. E. McTaggart in The Unreality of Time, have argued that time is an illusion (see also The flow of time, below).
The writers discussed here are for the most part realists in this regard; for instance, Gottfried Leibniz held that his monads existed, at least independently of the mind of the observer.
Absolutism and relationalism
Leibniz and Newton
The great debate between defining notions of space and time as real objects themselves (absolute), or mere orderings upon actual objects (relational), began between physicists Isaac Newton (via his spokesman, Samuel Clarke) and Gottfried Leibniz in the papers of the Leibniz–Clarke correspondence.
Arguing against the absolutist position, Leibniz offers a number of thought experiments with the purpose of showing that there is contradiction in assuming the existence of facts such as absolute location and velocity. These arguments trade heavily on two principles central to his philosophy: the principle of sufficient reason and the identity of indiscernibles. The principle of sufficient reason holds that for every fact, there is a reason that is sufficient to explain what and why it is the way it is and not otherwise. The identity of indiscernibles states that if there is no way of telling two entities apart, then they are one and the same thing.
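The identity of indiscernibles is often given a schematic formal rendering; the following second-order statement is a standard textbook formulation rather than anything drawn from Leibniz's own text, with F ranging over properties and a and b over objects:
\forall F\,\bigl(F(a) \leftrightarrow F(b)\bigr) \;\rightarrow\; a = b.
Read this way, the principle licenses the inference from complete qualitative agreement to numerical identity.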
The example Leibniz uses involves two proposed universes situated in absolute space. The only discernible difference between them is that the latter is positioned five feet to the left of the first. The example is only possible if such a thing as absolute space exists. Such a situation, however, is not possible, according to Leibniz, for if it were, a universe's position in absolute space would have no sufficient reason, as it might very well have been anywhere else. Therefore, it contradicts the principle of sufficient reason, and there could exist two distinct universes that were in all ways indiscernible, thus contradicting the identity of indiscernibles.
Standing out in Clarke's (and Newton's) response to Leibniz's arguments is the bucket argument: Water in a bucket, hung from a rope and set to spin, will start with a flat surface. As the water begins to spin in the bucket, the surface of the water will become concave. If the bucket is stopped, the water will continue to spin, and while the spin continues, the surface will remain concave. The concave surface is apparently not the result of the interaction of the bucket and the water, since the surface is flat when the bucket first starts to spin, it becomes concave as the water starts to spin, and it remains concave as the bucket stops.
In this response, Clarke argues for the necessity of the existence of absolute space to account for phenomena like rotation and acceleration that cannot be accounted for on a purely relationalist account. Clarke argues that since the curvature of the water occurs in the rotating bucket as well as in the stationary bucket containing spinning water, it can only be explained by stating that the water is rotating in relation to the presence of some third thing—absolute space.
Leibniz describes a space that exists only as a relation between objects, and which has no existence apart from the existence of those objects. Motion exists only as a relation between those objects. Newtonian space provided the absolute frame of reference within which objects can have motion. In Newton's system, the frame of reference exists independently of the objects contained within it. These objects can be described as moving in relation to space itself. For many centuries, the evidence of a concave water surface held authority.
Mach
Another important figure in this debate is 19th-century physicist Ernst Mach. While he did not deny the existence of phenomena like that seen in the bucket argument, he still denied the absolutist conclusion by offering a different answer as to what the bucket was rotating in relation to: the fixed stars.
Mach suggested that thought experiments like the bucket argument are problematic. If we were to imagine a universe that only contains a bucket, on Newton's account, this bucket could be set to spin relative to absolute space, and the water it contained would form the characteristic concave surface. But in the absence of anything else in the universe, it would be difficult to confirm that the bucket was indeed spinning. It seems equally possible that the surface of the water in the bucket would remain flat.
Mach argued that, in effect, the water in the experiment would remain flat in an otherwise empty universe. But if another object were introduced into this universe, perhaps a distant star, there would now be something relative to which the bucket could be seen as rotating, and the water inside it might then exhibit a slight curve. To account for the curvature that we actually observe, each increase in the number of objects in the universe brings a corresponding increase in the curvature of the water. Mach argued that the momentum of an object, whether angular or linear, exists as a result of the sum of the effects of other objects in the universe (Mach's Principle).
Einstein
Albert Einstein proposed that the laws of physics should be based on the principle of relativity. This principle holds that the rules of physics must be the same for all observers, regardless of the frame of reference that is used, and that light propagates at the same speed in all reference frames. This theory was motivated by Maxwell's equations, which show that electromagnetic waves propagate in a vacuum at the speed of light. However, Maxwell's equations give no indication of what this speed is relative to. Prior to Einstein, it was thought that this speed was relative to a fixed medium, called the luminiferous ether. In contrast, the theory of special relativity postulates that light propagates at the speed of light in all inertial frames, and examines the implications of this postulate.
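The speed at issue can be read off Maxwell's theory itself; in SI units (a standard statement of the result, not an addition to Einstein's argument) the equations yield a wave speed fixed entirely by two constants of the vacuum,
c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3.0 \times 10^{8}\ \mathrm{m/s},
with no mention of any frame of reference — which is why a preferred medium such as the ether seemed to be needed to say what the speed was relative to.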
All attempts to measure any speed relative to this ether failed, which can be seen as a confirmation of Einstein's postulate that light propagates at the same speed in all reference frames. Special relativity is a formalization of the principle of relativity that does not contain a privileged inertial frame of reference, such as the luminiferous ether or absolute space, from which Einstein inferred that no such frame exists.
Einstein generalized relativity to frames of reference that were non-inertial. He achieved this by positing the Equivalence Principle, which states that the force felt by an observer in a given gravitational field and that felt by an observer in an accelerating frame of reference are indistinguishable. This led to the conclusion that the mass of an object warps the geometry of the space-time surrounding it, as described in Einstein's field equations.
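In the usual tensor notation, the field equations relate the curvature of space-time to its matter and energy content,
R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} = \frac{8\pi G}{c^{4}}\,T_{\mu\nu},
where g_{\mu\nu} is the metric describing the geometry, R_{\mu\nu} and R are the Ricci tensor and scalar formed from it, and T_{\mu\nu} is the stress–energy tensor; a cosmological-constant term \Lambda g_{\mu\nu} is sometimes added to the left-hand side.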
In classical physics, an inertial reference frame is one in which an object that experiences no forces does not accelerate. In general relativity, an inertial frame of reference is one that is following a geodesic of space-time. An object that moves against a geodesic experiences a force. An object in free fall does not experience a force, because it is following a geodesic. An object standing on the earth, however, will experience a force, as it is being held against the geodesic by the surface of the planet. In light of this, the bucket of water rotating in empty space will experience a force because it rotates with respect to the geodesic. The water will become concave, not because it is rotating with respect to the distant stars, but because it is rotating with respect to the geodesic.
Einstein partially advocates Mach's principle in that distant stars explain inertia because they provide the gravitational field against which acceleration and inertia occur. But contrary to Leibniz's account, this warped space-time is as integral a part of an object as are its other defining characteristics, such as volume and mass. If one holds, contrary to idealist beliefs, that objects exist independently of the mind, it seems that relativity commits them to also hold that space and time have exactly the same type of independent existence.
Conventionalism
The position of conventionalism states that there is no fact of the matter as to the geometry of space and time, but that it is decided by convention. The first proponent of such a view, Henri Poincaré, reacting to the creation of the new non-Euclidean geometry, argued that which geometry applied to a space was decided by convention, since different geometries will describe a set of objects equally well, based on considerations from his sphere-world.
This view was developed and updated to include considerations from relativistic physics by Hans Reichenbach. Reichenbach's conventionalism, applying to space and time, focuses around the idea of coordinative definition.
Coordinative definition has two major features. The first has to do with coordinating units of length with certain physical objects. This is motivated by the fact that we can never directly apprehend length. Instead we must choose some physical object, say the Standard Metre at the Bureau International des Poids et Mesures (International Bureau of Weights and Measures), or the wavelength of cadmium, to stand in as our unit of length. The second feature deals with separated objects. Although we can, presumably, directly test the equality of length of two measuring rods when they are next to one another, we cannot find out as much for two rods distant from one another. Even supposing that two rods, whenever brought near to one another, are seen to be equal in length, we are not justified in stating that they are always equal in length. This impossibility undermines our ability to decide the equality of length of two distant objects. Sameness of length, to the contrary, must be set by definition.
Such a use of coordinative definition is in effect, on Reichenbach's conventionalism, in the General Theory of Relativity where light is assumed, i.e. not discovered, to mark out equal distances in equal times. After this setting of coordinative definition, however, the geometry of spacetime is set.
As in the absolutism/relationalism debate, contemporary philosophy is still in disagreement as to the correctness of the conventionalist doctrine. While conventionalism still holds many proponents, cutting criticisms concerning the coherence of Reichenbach's doctrine of coordinative definition have led many to see the conventionalist view as untenable.
Structure of space-time
Building from a mix of insights from the historical debates of absolutism and conventionalism as well as reflecting on the import of the technical apparatus of the General Theory of Relativity, details as to the structure of space-time have made up a large proportion of discussion within the philosophy of space and time, as well as the philosophy of physics. The following is a short list of topics.
Relativity of simultaneity
According to special relativity each point in the universe can have a different set of events that compose its present instant. This has been used in the Rietdijk–Putnam argument to demonstrate that relativity predicts a block universe in which events are fixed in four dimensions.
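The frame-dependence of simultaneity can be made explicit with the Lorentz transformation for the time coordinate; for a frame moving with velocity v along the x-axis,
t' = \gamma\left(t - \frac{v x}{c^{2}}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}},
so two events that are simultaneous in one frame (\Delta t = 0) but spatially separated (\Delta x \neq 0) are assigned a non-zero separation \Delta t' = -\gamma v \Delta x / c^{2} in the other. Each state of motion therefore carries its own set of 'present' events.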
Invariance vs. covariance
Bringing to bear the lessons of the absolutism/relationalism debate with the powerful mathematical tools invented in the 19th and 20th century, Michael Friedman draws a distinction between invariance upon mathematical transformation and covariance upon transformation.
Invariance, or symmetry, applies to objects, i.e. the symmetry group of a space-time theory designates what features of objects are invariant, or absolute, and which are dynamical, or variable.
Covariance applies to formulations of theories, i.e. the covariance group designates in which range of coordinate systems the laws of physics hold.
This distinction can be illustrated by revisiting Leibniz's thought experiment, in which the universe is shifted over five feet. In this example the position of an object is seen not to be a property of that object, i.e. location is not invariant. Similarly, the covariance group for classical mechanics will be any coordinate systems that are obtained from one another by shifts in position as well as other translations allowed by a Galilean transformation.
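For concreteness, the transformations at issue in the classical case are the Galilean ones; in one standard notation (the symbols here are illustrative rather than Friedman's own),
\mathbf{x}' = R\,\mathbf{x} + \mathbf{v}\,t + \mathbf{a}, \qquad t' = t + b,
where R is a fixed rotation, \mathbf{v} a constant relative velocity, and \mathbf{a} and b spatial and temporal shifts. Position and velocity are altered by such transformations, which is the formal counterpart of their not being invariant properties of classical objects.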
In the classical case, the invariance, or symmetry, group and the covariance group coincide, but, interestingly enough, they part ways in relativistic physics. The symmetry group of the general theory of relativity includes all differentiable transformations, i.e., all properties of an object are dynamical, in other words there are no absolute objects. The formulations of the general theory of relativity, unlike those of classical mechanics, do not share a standard, i.e., there is no single formulation paired with transformations. As such the covariance group of the general theory of relativity is just the covariance group of every theory.
Historical frameworks
A further application of the modern mathematical methods, in league with the idea of invariance and covariance groups, is to try to interpret historical views of space and time in modern, mathematical language.
In these translations, a theory of space and time is seen as a manifold paired with vector spaces; the more vector spaces there are, the more facts there are about objects in that theory. The historical development of spacetime theories is generally seen to start from a position where many facts about objects are incorporated in that theory, and as history progresses, more and more structure is removed.
For example, Aristotelian space and time has both absolute position and special places, such as the center of the cosmos, and the circumference. Newtonian space and time has absolute position and is Galilean invariant, but does not have special positions.
Holes
With the general theory of relativity, the traditional debate between absolutism and relationalism has been shifted to whether or not spacetime is a substance, since the general theory of relativity largely rules out the existence of, e.g., absolute positions. One powerful argument against spacetime substantivalism, offered by John Earman, is known as the "hole argument".
This is a technical mathematical argument but can be paraphrased as follows:
Define a function d as the identity function over all elements of the manifold M, except on a small neighbourhood H belonging to M. Over H, d comes to differ from the identity by a smooth function.
With use of this function d we can construct two mathematical models, where the second is generated by applying d to proper elements of the first, such that the two models are identical prior to the time t=0, where t is a time function created by a foliation of spacetime, but differ after t=0.
These considerations show that, since substantivalism allows the construction of holes, the universe must, on that view, be indeterministic. This, Earman argues, is a case against substantivalism, as the choice between determinism and indeterminism should be a question of physics, not of our commitment to substantivalism.
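Put schematically, in the notation usually employed for the argument (not necessarily Earman's own symbols), the two models are
\langle M,\, g,\, T\rangle \quad\text{and}\quad \langle M,\, d^{*}g,\, d^{*}T\rangle, \qquad d\big|_{M \setminus H} = \mathrm{id},
where d^{*}g and d^{*}T are the metric and matter fields carried along by d. Both models satisfy the same field equations and agree everywhere outside the hole H — in particular, on the construction above, at all times before t=0 — yet they assign different field values to points inside H; if those points are substantial entities in their own right, the theory fails to fix their properties from the state of the world before t=0.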
Direction of time
The problem of the direction of time arises directly from two contradictory facts. Firstly, the fundamental physical laws are time-reversal invariant; if a cinematographic film were taken of any process describable by means of the aforementioned laws and then played backwards, it would still portray a physically possible process. Secondly, our experience of time, at the macroscopic level, is not time-reversal invariant.Borchert, D.M. (2006) Encyclopedia of Philosophy, 2nd Ed. Vol. 9. MI: Cengage Learning. P. 468. Glasses can fall and break, but shards of glass cannot reassemble and fly up onto tables. We have memories of the past, and none of the future. We feel we can't change the past but can influence the future.
Causation solution
One solution to this problem takes a metaphysical view, in which the direction of time follows from an asymmetry of causation. We know more about the past because the elements of the past are causes for the effect that is our perception. We feel we can't affect the past and can affect the future because we can't affect the past and can affect the future.
There are two main objections to this view. First is the problem of distinguishing the cause from the effect in a non-arbitrary way. The use of causation in constructing a temporal ordering could easily become circular. The second problem with this view is its explanatory power. While the causation account, if successful, may account for some time-asymmetric phenomena like perception and action, it does not account for many others.
However, asymmetry of causation can be observed in a non-arbitrary way which is not metaphysical in the case of a human hand dropping a cup of water which smashes into fragments on a hard floor, spilling the liquid. In this order, the causes of the resultant pattern of cup fragments and water spill is easily attributable in terms of the trajectory of the cup, irregularities in its structure, angle of its impact on the floor, etc. However, applying the same event in reverse, it is difficult to explain why the various pieces of the cup should fly up into the human hand and reassemble precisely into the shape of a cup, or why the water should position itself entirely within the cup. The causes of the resultant structure and shape of the cup and the encapsulation of the water by the hand within the cup are not easily attributable, as neither hand nor floor can achieve such formations of the cup or water. This asymmetry is perceivable on account of two features: i) the relationship between the agent capacities of the human hand (i.e., what it is and is not capable of and what it is for) and non-animal agency (i.e., what floors are and are not capable of and what they are for) and ii) that the pieces of cup came to possess exactly the nature and number of those of a cup before assembling. In short, such asymmetry is attributable to the relationship between temporal direction on the one hand and the implications of form and functional capacity on the other.
The application of these ideas of form and functional capacity only dictates temporal direction in relation to complex scenarios involving specific, non-metaphysical agency which is not merely dependent on human perception of time. However, this last observation in itself is not sufficient to invalidate the implications of the example for the progressive nature of time in general.
Thermodynamics solution
The second major family of solutions to this problem, and by far the one that has generated the most literature, finds the existence of the direction of time as relating to the nature of thermodynamics.
The answer from classical thermodynamics states that while our basic physical theory is, in fact, time-reversal symmetric, thermodynamics is not. In particular, the second law of thermodynamics states that the net entropy of a closed system never decreases, and this explains why we often see glass breaking, but not coming back together.
But in statistical mechanics things become more complicated. On one hand, statistical mechanics is far superior to classical thermodynamics, in that thermodynamic behavior, such as glass breaking, can be explained by the fundamental laws of physics paired with a statistical postulate. But statistical mechanics, unlike classical thermodynamics, is time-reversal symmetric. The second law of thermodynamics, as it arises in statistical mechanics, merely states that it is overwhelmingly likely that net entropy will increase, but it is not an absolute law.
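The contrast can be put compactly. The classical second law says that the entropy S of an isolated system never decreases, \Delta S \geq 0, without exception; on the statistical reading associated with Boltzmann, S = k_{B} \ln W, where W counts the microstates compatible with a given macrostate, and entropy increase becomes the claim that evolution toward macrostates of larger W is overwhelmingly probable rather than strictly guaranteed.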
Current thermodynamic solutions to the problem of the direction of time aim to find some further fact, or feature of the laws of nature to account for this discrepancy.
Laws solution
A third type of solution to the problem of the direction of time, although much less represented, argues that the laws are not time-reversal symmetric. For example, certain processes in quantum mechanics, relating to the weak nuclear force, are not time-reversible, keeping in mind that in quantum mechanics time-reversibility has a more complex definition. But this type of solution is insufficient because 1) the time-asymmetric phenomena in quantum mechanics are too few to account for the uniformity of macroscopic time-asymmetry and 2) it relies on the assumption that quantum mechanics is the final or correct description of physical processes.
One recent proponent of the laws solution is Tim Maudlin who argues that the fundamental laws of physics are laws of temporal evolution (see Maudlin [2007]). However, elsewhere Maudlin argues: "[the] passage of time is an intrinsic asymmetry in the temporal structure of the world... It is the asymmetry that grounds the distinction between sequences that runs from past to future and sequences which run from future to past" [ibid, 2010 edition, p. 108]. Thus it is arguably difficult to assess whether Maudlin is suggesting that the direction of time is a consequence of the laws or is itself primitive.
Flow of time
The problem of the flow of time, as it has been treated in analytic philosophy, owes its beginning to a paper written by J. M. E. McTaggart. In this paper McTaggart proposes two "temporal series". The first series, which means to account for our intuitions about temporal becoming, or the moving Now, is called the A-series. The A-series orders events according to their being in the past, present or future, simpliciter and in comparison to each other. The B-series eliminates all reference to the present, and the associated temporal modalities of past and future, and orders all events by the temporal relations earlier than and later than.
McTaggart, in his paper "The Unreality of Time", argues that time is unreal since a) the A-series is inconsistent and b) the B-series alone cannot account for the nature of time as the A-series describes an essential feature of it.
Building from this framework, two camps of solution have been offered. The first, the A-theorist solution, takes becoming as the central feature of time, and tries to construct the B-series from the A-series by offering an account of how B-facts come to be out of A-facts. The second camp, the B-theorist solution, takes as decisive McTaggart's arguments against the A-series and tries to construct the A-series out of the B-series, for example, by temporal indexicals.
Dualities
Quantum field theory models have shown that it is possible for theories in two different space-time backgrounds, like AdS/CFT or T-duality, to be equivalent.
Presentism and eternalism
According to Presentism, time is an ordering of various realities. At a certain time some things exist and others do not. This is the only reality we can deal with and we cannot for example say that Homer exists because at the present time he does not. An Eternalist, on the other hand, holds that time is a dimension of reality on a par with the three spatial dimensions, and hence that all things—past, present, and future—can be said to be just as real as things in the present. According to this theory, then, Homer really does exist, though we must still use special language when talking about somebody who exists at a distant time—just as we would use special language when talking about something far away (the very words near, far, above, below, and such are directly comparable to phrases such as in the past, a minute ago, and so on).
Endurantism and perdurantism
The positions on the persistence of objects are somewhat similar. An endurantist holds that for an object to persist through time is for it to exist completely at different times (each instance of existence we can regard as somehow separate from previous and future instances, though still numerically identical with them). A perdurantist on the other hand holds that for a thing to exist through time is for it to exist as a continuous reality, and that when we consider the thing as a whole we must consider an aggregate of all its "temporal parts" or instances of existing. Endurantism is seen as the conventional view and flows out of our pre-philosophical ideas (when I talk to somebody I think I am talking to that person as a complete object, and not just a part of a cross-temporal being), but perdurantists have attacked this position. (An example of a perdurantist is David Lewis.) One argument perdurantists use to state the superiority of their view is that perdurantism is able to take account of change in objects.
The relations between these two questions mean that on the whole Presentists are also endurantists and Eternalists are also perdurantists (and vice versa), but this is not a necessary connection and it is possible to claim, for instance, that time's passage indicates a series of ordered realities, but that objects within these realities somehow exist outside of the reality as a whole, even though the realities as wholes are not related. However, such positions are rarely adopted.
See also
Arrow of time
Being and Time
Chronometry
Endurantism
Eternalism (philosophy of time)
Identity and change
Metaphysics
Milič Čapek
Perdurantism
Presentism (philosophy of time)
Process and Reality
Process philosophy
Temporal parts
Time geography
Time Reborn
Time travel in science and time travel in fiction
Quentin Smith
William Lane Craig
Zeno's paradoxes
Notes
References
Albert, David (2000) Time and Chance. Harvard Univ. Press.
Dainton, Barry (2010) Time and Space, Second Edition. McGill-Queens University Press. ISBN 978-0-7735-3747-7
Earman, John (1989) World Enough and Space-Time. MIT Press.
Friedman, Michael (1983) Foundations of Space-Time Theories. Princeton Univ. Press.
Adolf Grünbaum (1974) Philosophical Problems of Space and Time, 2nd ed. Boston Studies in the Philosophy of Science. Vol XII. D. Reidel Publishing
Horwich, Paul (1987) Asymmetries in Time. MIT Press.
Lucas, John Randolph, 1973. A Treatise on Time and Space. London: Methuen.
Mellor, D.H. (1998) Real Time II. Routledge.
Laura Mersini-Houghton; Rudy Vaas (eds.) (2012)
Hans Reichenbach (1958) The Philosophy of Space and Time. Dover
Hans Reichenbach (1991) The Direction of Time. University of California Press.
Rochelle, Gerald (1998) Behind Time. Ashgate.
Lawrence Sklar (1976) Space, Time, and Spacetime. University of California Press.
Turetzky, Philip (1998) Time. Routledge.
Bas van Fraassen, 1970. An Introduction to the Philosophy of Space and Time. Random House.
Gal-Or, Benjamin "Cosmology, Physics and Philosophy". Springer-Verlag, New York, 1981, 1983, 1987 ISBN 0-387-90581-2
External links
Stanford Encyclopedia of Philosophy:
"Time" by Ned Markosian;
"Being and Becoming in Modern Physics" by Steven Savitt;
"Absolute and Relational Theories of Space and Motion" by Nick Huggett and Carl Hoefer.
Internet Encyclopedia of Philosophy: "Time" by Bradley Dowden.
Brown, C.L., 2006, "What is Space?" A largely Wittgensteinian approach towards a dissolution of the question: "What is space?"
Rea, M. C., "Four Dimensionalism" in The Oxford Handbook for Metaphysics. Oxford Univ. Press. Describes presentism and four-dimensionalism.
CEITT - Time and Temporality Research Center. "Time and Temporality".
http://www.exactspent.com/philosophy_of_space_and_time.htm and related subjects
"Gods and the Universe in Buddhist Perspective, Essays on Buddhist Cosmology" by Francis Story.
Category:Natural philosophy
Category:Metaphysics
Category:Space
Pub | thumb|A thatched country pub, The Williams Arms, near Braunton, North Devon, England
thumb|A city pub, The World's End, Camden Town, London
thumb|right|A large selection of beers and ales in a traditional pub in London.
thumb|right|upright|The Ale-House Door (painting of c. 1790 by Henry Singleton)
A pub, or public house, is an establishment licensed to sell alcoholic drinks, which traditionally include beer, ale and other brewed alcoholic drinks. It is a relaxed, social drinking establishment and a prominent part of British culture,Public House Britannica.com; Subscription Required. Retrieved 3 July 2008. Irish culture, New Zealand culture and Australian culture.Australian Drinking Culture Convict Creations. Retrieved 24 April 2011. In many places, especially in villages, a pub is the focal point of the community. In his 17th century diary Samuel Pepys described the pub as "the heart of England."
Pubs can be traced back to Roman taverns, through the Anglo-Saxon alehouse to the development of the tied house system in the 19th century. In 1393, King Richard II of England introduced legislation that pubs had to display a sign outdoors to make them easily visible for passing ale tasters who would assess the quality of ale sold. Most pubs focus on offering beers, ales and similar drinks. As well, pubs often sell wines, spirits, and soft drinks, meals and snacks. The owner, tenant or manager (licensee) is known as the pub landlord or publican. Referred to as their "local" by regulars, pubs are typically chosen for their proximity to home or work, the availability of a particular beer or ale or a good selection, good food, a social atmosphere, the presence of friends and acquaintances, and the availability of recreational activities such as a darts team, a skittles team, and a pool or snooker table. The pub quiz was established in the UK in the 1970s.
Origins
The inhabitants of the British Isles have been drinking ale since the Bronze Age, but it was with the arrival of the Roman Empire on its shores in the 1st century, and the construction of the Roman road networks, that the first inns, called tabernae, in which travellers could obtain refreshment, began to appear. After the departure of Roman authority in the 5th century and the fall of the Romano-British kingdoms, the Anglo-Saxons established alehouses that grew out of domestic dwellings; the Anglo-Saxon alewife would put a green bush up on a pole to let people know her brew was ready. These alehouses quickly evolved into meeting houses where folk could socially congregate, gossip and arrange mutual help within their communities. Herein lies the origin of the modern public house, or "Pub" as it is colloquially called in England. They rapidly spread across the kingdom, becoming so commonplace that in 965 King Edgar decreed that there should be no more than one alehouse per village.
thumb|Ye Olde Fighting Cocks in St Albans, Hertfordshire, which holds the Guinness World Record for the oldest pub in England
A traveller in the early Middle Ages could obtain overnight accommodation in monasteries, but later a demand for hostelries grew with the popularity of pilgrimages and travel. The Hostellers of London were granted guild status in 1446 and in 1514 the guild became the Worshipful Company of Innholders. A survey in 1577 of drinking establishments in England and Wales for taxation purposesMonckton, Herbert Anthony (1966), A History of English Ale and Beer, Bodley Head (p. 101) recorded 14,202 alehouses, 1,631 inns, and 329 taverns, representing one pub for every 187 people.
Inns
thumb|Peasants before an Inn by Dutch artist Jan Steen c. 1653
Inns are buildings where travellers can seek lodging and, usually, food and drink. They are typically located in the country or along a highway. In Europe, they possibly first sprang up when the Romans built a system of roads two millennia ago. Some inns in Europe are several centuries old. In addition to providing for the needs of travellers, inns traditionally acted as community gathering places.
In Europe, it is the provision of accommodation,Pub Rooms, pub accommodation. if anything, that now distinguishes inns from taverns, alehouses and pubs. The latter tend to provide alcohol (and, in the UK, soft drinks and often food), but less commonly accommodation. Inns tend to be older and grander establishments: historically they provided not only food and lodging, but also stabling and fodder for the traveller's horse(s) and on some roads fresh horses for the mail coach. Famous London inns include The George, Southwark and The Tabard. There is however no longer a formal distinction between an inn and other kinds of establishment. Many pubs use "Inn" in their name, either because they are long established former coaching inns, or to summon up a particular kind of image, or in many cases simply as a pun on the word "in", as in "The Welcome Inn", the name of many pubs in Scotland.
The original services of an inn are now also available at other establishments, such as hotels, lodges, and motels, which focus more on lodging customers than on other services, although they usually provide meals; pubs, which are primarily alcohol-serving establishments; and restaurants and taverns, which serve food and drink. In North America, the lodging aspect of the word "inn" lives on in hotel brand names like Holiday Inn, and in some state laws that refer to lodging operators as innkeepers.
The Inns of Court and Inns of Chancery in London started as ordinary inns where barristers met to do business, but became institutions of the legal profession in England and Wales.
Beer houses and the 1830 Beer Act
Traditional English ale was made solely from fermented malt. The practice of adding hops to produce beer was introduced from the Netherlands in the early 15th century. Alehouses would each brew their own distinctive ale, but independent breweries began to appear in the late 17th century. By the end of the century almost all beer was brewed by commercial breweries.
The 18th century saw a huge growth in the number of drinking establishments, primarily due to the introduction of gin. Gin was brought to England by the Dutch after the Glorious Revolution of 1688 and became very popular after the government created a market for "cuckoo grain" or "cuckoo malt" that was unfit to be used in brewing and distilling by allowing unlicensed gin and beer production, while imposing a heavy duty on all imported spirits. As thousands of gin-shops sprang up all over England, brewers fought back by increasing the number of alehouses. By 1740 the production of gin had increased to six times that of beer and because of its cheapness it became popular with the poor, leading to the so-called Gin Craze. Over half of the 15,000 drinking establishments in London were gin shops.
The drunkenness and lawlessness created by gin was seen to lead to ruination and degradation of the working classes. The distinction was illustrated by William Hogarth in his engravings Beer Street and Gin Lane.Gin Lane British Museum The Gin Act 1736 imposed high taxes on retailers and led to riots in the streets. The prohibitive duty was gradually reduced and finally abolished in 1742. The Gin Act 1751 however was more successful. It forced distillers to sell only to licensed retailers and brought gin shops under the jurisdiction of local magistrates.
By the early 19th century, encouraged by lower duties on gin, the gin houses or "Gin Palaces" had spread from London to most cities and towns in Britain, with most of the new establishments illegal and unlicensed. These bawdy, loud and unruly drinking dens so often described by Charles Dickens in his Sketches by Boz (published 1835–1836) increasingly came to be held as unbridled cesspits of immorality or crime and the source of much ill-health and alcoholism among the working classes.
Under a banner of "reducing public drunkenness" the Beer Act of 1830 introduced a new lower tier of premises permitted to sell alcohol, the beer houses. At the time beer was viewed as harmless, nutritious and even healthy. Young children were often given what was described as small beer, which was brewed to have a low alcohol content, as the local water was often unsafe. Even the evangelical church and temperance movements of the day viewed the drinking of beer very much as a secondary evil and a normal accompaniment to a meal. The freely available beer was thus intended to wean the drinkers off the evils of gin, or so the thinking went.
thumb|right|A Victorian beer house, now a public house, in Rotherhithe, Greater London.
Under the 1830 Act any householder who paid rates could apply, with a one-off payment of two guineas (roughly equal in value to £ today), to sell beer or cider in his home (usually the front parlour) and even to brew his own on his premises. The permission did not extend to the sale of spirits and fortified wines, and any beer house discovered selling those items was closed down and the owner heavily fined. Beer houses were not permitted to open on Sundays. The beer was usually served in jugs or dispensed directly from tapped wooden barrels on a table in the corner of the room. Often profits were so high the owners were able to buy the house next door to live in, turning every room in their former home into bars and lounges for customers.
In the first year, 400 beer houses opened and within eight years there were 46,000 across the country, far outnumbering the combined total of long-established taverns, pubs, inns and hotels. Because permission was so easy and cheap to obtain and the potential profits were huge, the number of beer houses continued to rise, and in some towns nearly every other house in a street could be a beer house. Finally in 1869 the growth had to be checked by magisterial control and new licensing laws were introduced. Only then was it made harder to get a licence, and the licensing laws which operate today were formulated.
Although the new licensing laws prevented new beer houses from being created, those already in existence were allowed to continue and many did not close until nearly the end of the 19th century. A very small number remained into the 21st century. The vast majority of the beer houses applied for the new licences and became full pubs. These usually small establishments can still be identified in many towns, seemingly oddly located in the middle of otherwise terraced housing part way up a street, unlike purpose-built pubs that are usually found on corners or road junctions. Many of today's respected real ale micro-brewers in the UK started as home-based beer house brewers under the 1830 Act.
The beer houses tended to avoid the traditional pub names like The Crown, The Red Lion, The Royal Oak etc. and, if they did not simply name their place Smith's Beer House, they would apply topical pub names in an effort to reflect the mood of the times.
Licensing laws
thumb|The interior of a typical English pub
thumb|People drinking at Ye Olde Cock Tavern in London, England
There was already regulation on public drinking spaces in the 17th and 18th centuries, and the income earned from licences was beneficial to the crown. Tavern owners were required to possess a licence to sell ale, and a separate licence for distilled spirits.
From the mid-19th century on, the opening hours of licensed premises in the UK were restricted. However, licensing was gradually liberalised after the 1960s, until contested licensing applications became very rare, and the remaining administrative function was transferred to local authorities in 2005.
The Wine and Beerhouse Act 1869 reintroduced the stricter controls of the previous century. The sale of beers, wines or spirits required a licence for the premises from the local magistrates. Further provisions regulated gaming, drunkenness, prostitution and undesirable conduct on licensed premises, enforceable by prosecution or more effectively by the landlord under threat of forfeiting his licence. Licences were only granted, transferred or renewed at special Licensing Sessions courts, and were limited to respectable individuals. Often these were ex-servicemen or ex-policemen; retiring to run a pub was popular amongst military officers at the end of their service. Licence conditions varied widely, according to local practice. They would specify permitted hours, which might require Sunday closing, or conversely permit all-night opening near a market. Typically they might require opening throughout the permitted hours, and the provision of food or lavatories. Once obtained, licences were jealously protected by the licensees (who were expected to be generally present, not an absentee owner or company), and even "Occasional Licences" to serve drinks at temporary premises such as fêtes would usually be granted only to existing licensees. Objections might be made by the police, rival landlords or anyone else on the grounds of infractions such as serving drunks, disorderly or dirty premises, or ignoring permitted hours.
The Sunday Closing (Wales) Act 1881 required the closure of all public houses in Wales on Sundays, and was not repealed until 1961.
Detailed licensing records were kept, giving the Public House, its address, owner, licensee and misdemeanours of the licensees, often going back for hundreds of years. Many of these records survive and can be viewed, for example, at the London Metropolitan Archives centre.
The restrictions were tightened by the Defence of the Realm Act of August 1914, which, along with the introduction of rationing and the censorship of the press for wartime purposes, restricted pubs' opening hours to 12 noon–2:30 pm and 6:30 pm–9:30 pm. Opening for the full licensed hours was compulsory, and closing time was equally firmly enforced by the police; a landlord might lose his licence for infractions. Pubs were closed under the Act and compensation paid, for example in Pembrokeshire.
There was a special case established under the State Management Scheme where the brewery and licensed premises were bought and run by the state until 1973, most notably in Carlisle. During the 20th century elsewhere, both the licensing laws and enforcement were progressively relaxed, and there were differences between parishes; in the 1960s, at closing time in Kensington at 10:30 pm, drinkers would rush over the parish boundary to be in good time for "Last Orders" in Knightsbridge before 11 pm, a practice observed in many pubs adjoining licensing area boundaries. Some Scottish and Welsh parishes remained officially "dry" on Sundays (although often this merely required knocking at the back door of the pub). These restricted opening hours led to the tradition of lock-ins.
However, closing times were increasingly disregarded in country pubs. In England and Wales by 2000 pubs could legally open from 11 am (12 noon on Sundays) through to 11 pm (10:30 pm on Sundays). That year was also the first to allow continuous opening for 36 hours from 11 am on New Year's Eve to 11 pm on New Year's Day. In addition, many cities had by-laws to allow some pubs to extend opening hours to midnight or 1 am, whilst nightclubs had long been granted late licences to serve alcohol into the morning. Pubs near London's Smithfield market, Billingsgate fish market and Covent Garden fruit and flower market had been permitted to stay open 24 hours a day since Victorian times to provide a service to the markets' shift-working employees.
Scotland's and Northern Ireland's licensing laws have long been more flexible, allowing local authorities to set pub opening and closing times. In Scotland, this stemmed from the late repeal of the wartime licensing laws, which stayed in force there until 1976.
The Licensing Act 2003, which came into force on 24 November 2005, consolidated the many laws into a single Act. This allowed pubs in England and Wales to apply to the local council for the opening hours of their choice. It was argued that this would end the concentration of violence around 11.30 pm, when people had to leave the pub, making policing easier. In practice, alcohol-related hospital admissions rose following the change in the law, with alcohol involved in 207,800 admissions in 2006/7. Critics claimed that these laws would lead to "24-hour drinking". By the time the law came into effect, 60,326 establishments had applied for longer hours and 1,121 had applied for a licence to sell alcohol 24 hours a day. However, nine months later many pubs had not changed their hours, although some stayed open longer at the weekend, rarely beyond 1:00 am.
Lock-in
A "lock-in" is when a pub owner lets drinkers stay in the pub after the legal closing time, on the theory that once the doors are locked, it becomes a private party rather than a pub. Patrons may put money behind the bar before official closing time, and redeem their drinks during the lock-in so no drinks are technically sold after closing time. The origin of the British lock-in was a reaction to 1915 changes in the licensing laws in England and Wales, which curtailed opening hours to stop factory workers from turning up drunk and harming the war effort. Since 1915, the UK licensing laws had changed very little, with comparatively early closing times. The tradition of the lock-in therefore remained. Since the implementation of Licensing Act 2003, premises in England and Wales may apply to extend their opening hours beyond 11 pm, allowing round-the-clock drinking and removing much of the need for lock-ins. Since the smoking ban, some establishments operated a lock-in during which the remaining patrons could smoke without repercussions but, unlike drinking lock-ins, allowing smoking in a pub was still a prosecutable offence.
Indoor smoking ban
thumb|upright|Tobacco smoke in a pub
In March 2006, a law was introduced to forbid smoking in all enclosed public places in Scotland. Wales followed suit in April 2007, with England introducing the ban in July 2007. Pub landlords had raised concerns prior to the implementation of the law that a smoking ban would have a negative impact on sales. After two years, the impact of the ban was mixed; some pubs suffered declining sales, while others developed their food sales. The Wetherspoon pub chain reported in June 2009 that profits were at the top end of expectations; however, Scottish & Newcastle's takeover by Carlsberg and Heineken was reported in January 2008 as partly the result of its weakness following falling sales due to the ban. Similar bans are applied in Australian pubs with smoking only allowed in designated areas. The Republic of Ireland banned smoking in early 2004 in pubs and clubs.
Architecture
Saloon or lounge
right|thumb|upright|The Eagle, City Road, Islington, London, September 2005
thumb|right|An "estate" pub in Outer London
thumb|right|Breakfast Creek Hotel, one of Brisbane's most famous pubs
By the end of the 18th century a new room in the pub was established: the saloon. Beer establishments had always provided entertainment of some sort—singing, gaming or sport. Balls Pond Road in Islington was named after an establishment run by a Mr Ball that had a duck pond at the rear, where drinkers could, for a fee, go out and take a potshot at the ducks. More common, however, was a card room or a billiard room. The saloon was a room where, for an admission fee or a higher price of drinks, singing, dancing, drama or comedy was performed and drinks would be served at the table. From this came the popular music hall form of entertainment—a show consisting of a variety of acts. One of the most famous London saloons was the Grecian Saloon in The Eagle, City Road, which is still famous because of a nursery rhyme: "Up and down the City Road / In and out The Eagle / That's the way the money goes / Pop goes the weasel." This meant that the customer had spent all his money at The Eagle, and needed to pawn his "weasel" to get some more.David Kemp (1992) The pleasures and treasures of Britain: a discerning traveller's companion p.158. Dundurn Press Ltd., 1992 The meaning of the "weasel" is unclear but the two most likely definitions are: a flat iron used for finishing clothing; or rhyming slang for a coat (weasel and stoat).
A few pubs have stage performances such as serious drama, stand-up comedy, musical bands, cabaret or striptease; however juke boxes, karaoke and other forms of pre-recorded music have otherwise replaced the musical tradition of a piano or guitar and singing.
Public bar
By the 20th century, the saloon, or lounge bar, had become a middle-class room—carpets on the floor, cushions on the seats, and a penny or two on the prices, while the public bar, or tap room, remained working class with bare boards, sometimes with sawdust to absorb the spitting and spillages (known as "spit and sawdust"), hard bench seats, and cheap beer. This bar was known as the four-ale bar from the days when the cheapest beer served there cost 4 pence (4d) a quart.
Later, the public bars gradually improved until sometimes almost the only difference was in the prices, so that customers could choose between economy and exclusivity (or youth and age, or a jukebox or dartboard). With the blurring of class divisions in the 1960s and 1970s, the distinction between the saloon and the public bar was often seen as archaic, and was frequently abolished, usually by the removal of the dividing wall or partition. While the names of saloon and public bar may still be seen on the doors of pubs, the prices (and often the standard of furnishings and decoration) are the same throughout the premises,Fox, Kate (1996). Passport to the Pub: tourist's guide to pub etiquette and many pubs now comprise one large room. However the modern importance of dining in pubs encourages some establishments to maintain distinct rooms or areas.
Snug
The "snug", sometimes called the smoke room, was typically a small, very private room with access to the bar that had a frosted glass external window, set above head height. A higher price was paid for beer in the snug and nobody could look in and see the drinkers. It was not only the wealthy visitors who would use these rooms. The snug was for patrons who preferred not to be seen in the public bar. Ladies would often enjoy a private drink in the snug in a time when it was frowned upon for women to be in a pub. The local police officer might nip in for a quiet pint, the parish priest for his evening whisky, or lovers for a rendezvous.
CAMRA has surveyed the 50,000 pubs in Britain and believes that very few still have classic snugs. Those that survive are recorded on a list of historic pub interiors so that they can be preserved.Derbyshire - Spondon, Malt Shovel, Heritagepubs, CAMRA, retrieved 27 August 2014.
Counter
It was the pub that first introduced the concept of the bar counter being used to serve the beer. Until that time beer establishments used to bring the beer out to the table or benches, as remains the practice in (for example) beer gardens and other drinking establishments in Germany. A bar might be provided for the manager to do paperwork while keeping an eye on his or her customers, but the casks of ale were kept in a separate taproom. When the first pubs were built, the main room was the public room with a large serving bar copied from the gin houses, the idea being to serve the maximum number of people in the shortest possible time. It became known as the public bar. The other, more private, rooms had no serving bar—they had the beer brought to them from the public bar. There are a number of pubs in the Midlands or the North which still retain this set-up, but these days the beer is fetched by the customer from the taproom or public bar. One of these is The Vine, known locally as The Bull and Bladder, in Brierley Hill near Birmingham; another is the Cock at Broom, Bedfordshire, where a series of small rooms are served drinks and food by waiting staff.The Cock at Broom, One of England's Real Heritage Pubs In the Manchester district the public bar was known as the "vault", other rooms being the lounge and snug as usual elsewhere. By the early 1970s there was a tendency to change to one large drinking room and breweries were eager to invest in interior design and theming.Evans, David G., et al. (1975) The Manchester Pub Guide, Manchester and Salford City Centres. Manchester: Manchester Pub Surveys; pp. 1–4
Isambard Kingdom Brunel, the British engineer and railway builder, introduced the idea of a circular bar into the Swindon station pub in order that customers were served quickly and did not delay his trains. These island bars became popular as they also allowed staff to serve customers in several different rooms surrounding the bar.
Beer engine
A "beer engine" is a device for pumping beer, originally manually operated and typically used to dispense beer from a cask or container in a pub's basement or cellar.
The first beer pump known in England is believed to have been invented by John Lofting (b. Netherlands 1659, d. Great Marlow, Buckinghamshire, 1742), an inventor, manufacturer and merchant of London.
The London Gazette of 17 March 1691 published a patent in favour of John Lofting for a fire engine, but remarked upon and recommended another invention of his, for a beer pump:
"Whereas their Majesties have been Graciously Pleased to grant Letters patent to John Lofting of London Merchant for a New Invented Engine for Extinguishing Fires which said Engine have found every great encouragement. The said Patentee hath also projected a Very Useful Engine for starting of beer and other liquors which will deliver from 20 to 30 barrels an hour which are completely fixed with Brass Joints and Screws at Reasonable Rates. Any Person that hath occasion for the said Engines may apply themselves to the Patentee at his house near St Thomas Apostle London or to Mr. Nicholas Wall at the Workshoppe near Saddlers Wells at Islington or to Mr. William Tillcar, Turner, his agent at his house in Woodtree next door to the Sun Tavern London."
"Their Majesties" referred to were William and Mary, who had recently arrived from the Netherlands and had been appointed joint monarchs.
A further engine was invented in the late eighteenth century by the locksmith and hydraulic engineer Joseph Bramah (1748–1814).
Strictly the term refers to the pump itself, which is normally manually operated, though electrically powered and gas powered pumps are occasionally used. When manually powered, the term "handpump" is often used to refer to both the pump and the associated handle.
Companies
After the development of the large London Porter breweries in the 18th century, the trend grew for pubs to become tied houses which could only sell beer from one brewery (a pub not tied in this way was called a Free house). The usual arrangement for a tied house was that the pub was owned by the brewery but rented out to a private individual (landlord) who ran it as a separate business (even though contracted to buy the beer from the brewery). Another very common arrangement was (and is) for the landlord to own the premises (whether freehold or leasehold) independently of the brewer, but then to take a mortgage loan from a brewery, either to finance the purchase of the pub initially, or to refurbish it, and be required as a term of the loan to observe the solus tie.
A trend in the late 20th century was for breweries to run their pubs directly, using managers rather than tenants. Most such breweries, such as the regional brewery Shepherd Neame in Kent and Young's and Fuller's in London, control hundreds of pubs in a particular region of the UK, while a few, such as Greene King, are spread nationally. The landlord of a tied pub may be an employee of the brewery—in which case he/she would be a manager of a managed house, or a self-employed tenant who has entered into a lease agreement with a brewery, a condition of which is the legal obligation (trade tie) only to purchase that brewery's beer. The beer selection is mainly limited to beers brewed by that particular company. The Beer Orders, passed in 1989, were aimed at getting tied houses to offer at least one alternative beer, known as a guest beer, from another brewery. This law has now been repealed but while in force it dramatically altered the industry. Some pubs still offer a regularly changing selection of guest beers.
Organisations such as Wetherspoons, Punch Taverns and O'Neill's were formed in the UK in the wake of the Beer Orders. A PubCo is a company involved in the retailing but not the manufacture of beverages, while a Pub chain may be run either by a PubCo or by a brewery.
Pubs within a chain will usually have items in common, such as fittings, promotions, ambience and range of food and drink on offer. A pub chain will position itself in the marketplace for a target audience. One company may run several pub chains aimed at different segments of the market. Pubs for use in a chain are bought and sold in large units, often from regional breweries which are then closed down. Newly acquired pubs are often renamed by the new owners, and many people resent the loss of traditional names, especially if their favourite regional beer disappears at the same time.
In 2009 about half of Britain's pubs were owned by large pub companies.
Brewery tap
A brewery tap is the nearest outlet for a brewery's beers. This is usually a room or bar in the brewery itself, though the name may be applied to the nearest pub. The term is not applied to a brewpub which brews and sells its beer on the same premises.
Particular kinds
Country pubs
thumb|A family run pub in rural Ireland
thumb|The Crown Inn Chiddingfold
A "country pub" by tradition is a rural public house. However, the distinctive culture surrounding country pubs, that of functioning as a social centre for a village and rural community, has been changing over the last thirty or so years. In the past, many rural pubs provided opportunities for country folk to meet and exchange (often local) news, while others—especially those away from village centres—existed for the general purpose, before the advent of motor transport, of serving travellers as coaching inns.What the country pub is by tradition Southern Life (UK)
In more recent years, however, many country pubs have either closed down or been converted into establishments focused on serving food, rather than remaining venues where members of the local community can meet and drink convivially.The more recent developments of the country pub
Roadhouses
thumb|right|The Dutch House, a typical 1930s roadhouse on the busy A20 road in Eltham, Greater London.
The term "roadhouse" was originally applied to a coaching inn, but with the advent of popular travel by motor car in the 1920s and 1930s in the United Kingdom, a new type of roadhouse emerged, often located on the newly constructed arterial roads and bypasses. They were large establishments offering meals and refreshment and accommodation to motorists and parties travelling by charabanc. The largest roadhouses boasted facilities such as tennis courts and swimming pools. Their popularity ended with the outbreak of the Second World War when recreational road travel became impossible, and the advent of post-war drink driving legislation prevented their full recovery. Many of these establishments are now operated as pub restaurants or fast food outlets.
Theme pubs
Pubs that cater for a niche clientele, such as sports fans or people of certain nationalities are known as theme pubs. Examples of theme pubs include sports bars, rock pubs, biker pubs, Goth pubs, strip pubs, karaoke bars and Irish pubs.
Micropubs
The micropub movement in Britain was started by Martyn Hiller. Micropubs are small community pubs with limited opening hours and a strong focus on local cask ale.New statesman on Micropubs It became easier to start a small pub after the passing of the Licensing Act 2003, which came into effect in 2005.
Signs
thumb|upright|The pub sign of The George, Southwark, depicting St George slaying a Dragon
In 1393, King Richard II of England compelled landlords to erect signs outside their premises. The legislation stated "Whosoever shall brew ale in the town with intention of selling it must hang out a sign, otherwise he shall forfeit his ale." This was to make alehouses easily visible to passing inspectors, borough ale tasters, who would decide the quality of the ale they provided. William Shakespeare's father, John Shakespeare, was one such inspector.
Another important factor was that during the Middle Ages a large proportion of the population would have been illiterate, so pictures on a sign were more useful than words as a means of identifying a public house. There was therefore often no need to write the establishment's name on the sign, and inns opened without a formal written name, the name being derived later from the illustration on the pub's sign.
thumb|left|upright|The Robin Hood Inn, Rowland's Castle, Shropshire
The earliest signs were often not painted but consisted, for example, of paraphernalia connected with the brewing process such as bunches of hops or brewing implements, which were suspended above the door of the pub. In some cases local nicknames, farming terms and puns were used. Local events were often commemorated in pub signs. Simple natural or religious symbols such as 'The Sun', 'The Star' and 'The Cross' were incorporated into pub signs, sometimes being adapted to incorporate elements of the heraldry (e.g. the coat of arms) of the local lords who owned the lands upon which the pub stood. Some pubs have Latin inscriptions.
thumb|upright|The Penny Black pub in Oxfordshire, depicting the first postage stamp which featured a profile of Queen Victoria
Other subjects that lent themselves to visual depiction included the name of battles (e.g. Trafalgar), explorers, local notables, discoveries, sporting heroes and members of the royal family. Some pub signs are in the form of a pictorial pun or rebus. For example, a pub in Crowborough, East Sussex called The Crow and Gate has an image of a crow with gates as wings. A British Pathe News film of 1956 shows artist Michael Farrar-Bell at work producing inn signs.Video of artist Michael Farrar-Bell producing inn signs from British Pathe News
Most British pubs still have decorated signs hanging over their doors, and these retain their original function of enabling the identification of the pub. Today's pub signs almost always bear the name of the pub, both in words and in pictorial representation. The more remote country pubs often have stand-alone signs directing potential customers to their door.
Names
Pub names are used to identify and differentiate each pub. Modern names are sometimes a marketing ploy or attempt to create "brand awareness", frequently using a comic theme thought to be memorable, Slug and Lettuce for a pub chain being an example. Interesting origins are not confined to old or traditional names, however. Names and their origins can be broken up into a relatively small number of categories.
As many pubs are centuries old, many of their early customers were unable to read, and pictorial signs could be readily recognised when lettering and words could not be read.
Pubs often have traditional names. A common name is the "Marquis of Granby". These pubs were named after John Manners, Marquess of Granby, who was the son of John Manners, 3rd Duke of Rutland and a general in the 18th century British Army. He showed a great concern for the welfare of his men, and on their retirement, provided funds for many of them to establish taverns, which were subsequently named after him. All pubs granted their licence in 1780 were called the Royal George, after King George III and the twentieth anniversary of his coronation.
Many pub names that appear nonsensical may have come from corruptions of old slogans or phrases, such as "The Bag o'Nails" (Bacchanals); "The Goat and Compasses", said to derive from "God Encompasseth Us", though this is thought unlikely and two other suggestions are given in Brewer's Dictionary;Brewer, E. Cobham (1989) Brewer's Dictionary of Phrase and Fable; 14th ed., by Ivor H. Evans. London: Cassell; p. 482 "The Cat and the Fiddle" (Chaton Fidèle: Faithful Kitten); and "The Bull and Bush", which purportedly celebrates the victory of Henry VIII at "Boulogne Bouche", or Boulogne-sur-Mer Harbour.
Entertainment
thumb|right|Indoor Quoits being played at a pub in Parkend, Gloucestershire.
Traditional games are played in pubs, ranging from the well-known darts, skittles, dominoes, cards and bar billiards, to the more obscure Aunt Sally, Nine Men's Morris and ringing the bull. In the UK betting is legally limited to certain games such as cribbage or dominoes, played for small stakes. In recent decades the game of pool (both the British and American versions) has increased in popularity, and other table-based games such as snooker and table football have also become common.
Increasingly, more modern games such as video games and slot machines are provided. Pubs hold special events, from tournaments of the aforementioned games to karaoke nights to pub quizzes. Some play pop music and hip-hop (dance bar), or show football and rugby union on big screen televisions (sports bar). Shove ha'penny and Bat and trap were also popular in pubs south of London.
Some pubs in the UK also have football teams composed of regular customers. Many of these teams are in leagues that play matches on Sundays, hence the term "Sunday League Football". Bowling is found in association with pubs in some parts of the country and the local team will play matches against teams invited from elsewhere on the pub's bowling green.
Pubs may be venues for pub songs and live music. During the 1970s pubs provided an outlet for a number of bands, such as Kilburn and the High Roads, Dr. Feelgood and The Kursaal Flyers, who formed a musical genre called pub rock that was a precursor to punk music.
Food
thumb|right|Pub grub – a pie, along with a pint
thumb|Black olives along with a pint of beer in a Montreal pub
Some pubs have a long tradition of serving food, dating back to their historic usage as inns and hotels where travellers would stay.
Many pubs were drinking establishments, and little emphasis was placed on the serving of food, other than sandwiches and "bar snacks", such as pork scratchings, pickled eggs, salted crisps and peanuts which helped to increase beer sales. In South East England (especially London) it was common until recent times for vendors selling cockles, whelks, mussels, and other shellfish to sell to customers during the evening and at closing time. Many mobile shellfish stalls would set up near pubs, a practice that continues in London's East End. Otherwise, pickled cockles and mussels may be offered by the pub in jars or packets.
In the 1950s some British pubs would offer "a pie and a pint", with hot individual steak and ale pies made easily on the premises by the proprietor's wife during the lunchtime opening hours. The ploughman's lunch became popular in the late 1960s. In the late 1960s "chicken in a basket", a portion of roast chicken with chips, served on a napkin, in a wicker basket became popular due to its convenience.
Family chain pubs which served food in the evenings gained popularity in the 1970s, and included Berni Inn and Beefeater.
Quality dropped but variety increased with the introduction of microwave ovens and freezer food. "Pub grub" expanded to include British food items such as steak and ale pie, shepherd's pie, fish and chips, bangers and mash, Sunday roast, ploughman's lunch, and pasties. In addition, dishes such as burgers, chicken wings, lasagne and chilli con carne are often served. Some pubs offer elaborate hot and cold snacks free to customers at Sunday lunchtimes, to prevent them getting hungry and leaving for their lunch at home.
Since the 1990s food has become a more important part of a pub's trade, and today most pubs serve lunches and dinners at the table in addition to (or instead of) snacks consumed at the bar. They may have a separate dining room. Some pubs serve meals to a higher standard, to match good restaurant standards; these are sometimes termed gastropubs.
Gastropub
thumb|The Listers Arms, a gastropub in Malham, North Yorkshire
A gastropub concentrates on quality food. The name is a portmanteau of pub and gastronomy and was coined in 1991 when David Eyre and Mike Belben took over The Eagle pub in Clerkenwell, London. The concept of a restaurant in a pub reinvigorated both pub culture and British dining, though has occasionally attracted criticism for potentially removing the character of traditional pubs.
In 2011 The Good Food Guide suggested that the term has become irrelevant.
Listed
CAMRA maintains a "National Inventory" of historically, architecturally and decoratively notable pubs. The National Trust owns thirty-six public houses of historic interest including the George Inn, Southwark, London and The Crown Liquor Saloon, Belfast, Northern Ireland.Evans, Jeff (2004) The Book of Beer Knowledge: essential wisdom for the discerning drinker. St Albans: CAMRA Books ISBN 1-85249-198-1
Records
thumb|The Sun Inn, Herefordshire. One of the few remaining parlour pubs
thumb|'The Crooked House', Himley, is known for the extreme lean of the building, caused by subsidence produced by mining
thumb|Ye Olde Man & Scythe, Bolton
Highest and remotest
The highest pub in the United Kingdom is the Tan Hill Inn, Yorkshire, at 1,732 feet (528 m) above sea level. The remotest pub on the British mainland is The Old Forge in the village of Inverie, Lochaber, Scotland. There is no road access and it may only be reached by a long walk over mountains, or by a sea crossing. Likewise, The Berney Arms in Norfolk has no road access. It may be reached by foot or by boat, and by train as it is served by the nearby Berney Arms railway station, which likewise has no road access and serves no other settlement.
Smallest
Contenders for the smallest public house in the UK include:
The Nutshell – Bury St Edmunds, Suffolk
The Lakeside Inn – Southport, Lancashire
The Little Gem – Aylesford, Kent
The Smiths Arms – Godmanstone, Dorset
The Signal Box Inn – Cleethorpes, Lincolnshire
The list includes a small number of parlour pubs, one of which is the Sun Inn in Leintwardine, Herefordshire.
Largest
The largest pub in the UK is The Moon Under Water, Manchester; like many Wetherspoons pubs, it is in a converted cinema.
Oldest
A number of pubs claim to be the oldest surviving establishment in the United Kingdom, although in several cases original buildings have been demolished and replaced on the same site. Others are ancient buildings that saw uses other than as a pub during their history. Ye Olde Fighting Cocks in St Albans, Hertfordshire, holds the Guinness World Record for the oldest pub in England, as it is an 11th-century structure on an 8th-century site. Ye Olde Trip to Jerusalem in Nottingham is claimed to be the "oldest inn in England". It has a claimed date of 1189, based on the fact it is constructed on the site of the Nottingham Castle brewhouse; the present building dates from around 1650. Likewise, The Nags Head in Burntwood, Staffordshire only dates back to the 16th century, but there has been a pub on the site since at least 1086, as it is mentioned in the Domesday Book.
There is archaeological evidence that parts of the foundations of The Old Ferryboat Inn in Holywell may date to AD 460, and there is evidence of ale being served as early as AD 560.
The Bingley Arms, Bardsey, Yorkshire, is claimed to date to 905 AD. Ye Olde Salutation Inn in Nottingham dates from 1240, although the building served as a tannery and a private residence before becoming an inn sometime before the English Civil War. The Adam and Eve in Norwich was first recorded in 1249, when it was an alehouse for the workers constructing nearby Norwich Cathedral. Ye Olde Man & Scythe in Bolton, Greater Manchester, is mentioned by name in a charter of 1251, but the current building is dated 1631. Its cellars are the only surviving part of the older structure.
Longest and shortest name
The town of Stalybridge in Greater Manchester is thought to have the pubs with both the longest and shortest names in the United Kingdom — The Old 13th Cheshire Rifleman Corps Inn and the Q Inn.
Statistics
United Kingdom
The average retail price of a pint of beer is £3.23, of which 45p is duty and 54p is VAT (2014); a rough check of these figures is shown after this list.
26.9 million barrels of beer are sold annually (Jan–Dec 2013).
There were 48,000 pubs in 2013, compared with 67,800 in 1982 and 60,100 in 2002.British Beer and Pub Association - Statistics, http://www.beerandpub.com/statistics
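As a rough consistency check on the price figures above (a minimal sketch, assuming the UK standard VAT rate of 20% that applied in 2014, a rate not stated in the figures themselves), the quoted VAT component follows from the retail price, and what remains after tax and duty covers brewing, distribution and the publican's margin:

\[
\text{VAT} = £3.23 \times \tfrac{0.20}{1.20} \approx £0.54, \qquad
£3.23 - £0.54\;(\text{VAT}) - £0.45\;(\text{duty}) \approx £2.24.
\]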
Decline
thumb|The currently closed "The First and Last" Pub, next to the closed Omagh railway station, in County Tyrone, Northern Ireland.
The number of pubs in the UK has declined year on year, at least since 1982. Various reasons are put forward for this, such as the failure of some establishments to keep up with customer requirements. Others claim the smoking ban of 2007, intense competition from gastro-pubs, the availability of cheap alcohol in supermarkets or the general economic climate are either to blame, or are factors in the decline. Changes in demographics may be an additional factor.
In 2015 the rate of pub closures came under the scrutiny of Parliament in the UK, with a promise of legislation to improve relations between owners and tenants. The Lost Pubs Project listed 31,301 closed English pubs on 19 July 2016, with photographs of over 16,000.
Cultural associations
Inns and taverns feature throughout English literature and poetry, from The Tabard Inn in Chaucer's Canterbury Tales onwards.
thumb|right|Jamaica Inn in Cornwall inspired a novel and a film.
The highwayman Dick Turpin used the Swan Inn at Woughton-on-the-Green in Buckinghamshire as his base. Jamaica Inn near Bolventor in Cornwall gave its name to a 1936 novel by Daphne du Maurier and a 1939 film directed by Alfred Hitchcock. In the 1920s John Fothergill (1876–1957) was the innkeeper of the Spread Eagle in Thame, Oxfordshire, and published his autobiography, An Innkeeper's Diary (London: Chatto & Windus, 1931); his later My Three Inns (1949) covers the inns he also kept in Ascot and Market Harborough, and more recent editions of the diary have been published. During his idiosyncratic occupancy many famous people came to stay, such as H. G. Wells. United States president George W. Bush fulfilled his lifetime ambition of visiting a 'genuine British pub' during his November 2003 state visit to the UK when he had lunch and a pint of non-alcoholic lager (Bush being a teetotaller) with British Prime Minister Tony Blair at the Dun Cow pub in Sedgefield, County Durham, in Blair's home constituency. There were approximately 53,500 public houses in the United Kingdom in 2009. This number has been declining every year, so that nearly half of the smaller villages no longer have a local pub.
London
Many of London's pubs are known to have been used by famous people, but in some cases, such as the association between Samuel Johnson and Ye Olde Cheshire Cheese, this is speculative, based on little more than the fact that the person is known to have lived nearby. However, Charles Dickens is known to have visited the Cheshire Cheese, the Prospect of Whitby, Ye Olde Cock Tavern and many others. Samuel Pepys is also associated with the Prospect of Whitby and the Cock Tavern.
The Fitzroy TavernFitzroy Tavern, Fitzrovia, London W1T 2NA. is a pub situated at 16 Charlotte Street in the Fitzrovia district, to which it gives its name. It became famous (or according to others, infamous) during a period spanning the 1920s to the mid-1950s as a meeting place for many of London's artists, intellectuals and bohemians such as Dylan Thomas, Augustus John, and George Orwell. Several establishments in Soho, London, have associations with well-known, post-war literary and artistic figures, including the Pillars of Hercules, The Colony Room and the Coach and Horses. The Canonbury Tavern, Canonbury, was the prototype for Orwell's ideal English pub, The Moon Under Water.
thumb|right|The Red Lion in Whitehall is close to the Houses of Parliament and is frequented by Members of Parliament and political journalists.
The Red Lion in Whitehall is close to the Palace of Westminster and is consequently used by political journalists and Members of Parliament. The pub is equipped with a Division bell that summons MPs back to the chamber when they are required to take part in a vote. The Punch Bowl, Mayfair was at one time jointly owned by Madonna and Guy Ritchie. The Coleherne public house in Earls Court was a well-known gay pub from the 1950s. It attracted many well-known patrons, such as Freddie Mercury, Kenny Everett and Rudolph Nureyev. It was used by the serial-killer Colin Ireland to pick up victims.
In 1966 The Blind Beggar in Whitechapel became infamous as the scene of a murder committed by gangster Ronnie Kray. The Ten Bells is associated with several of the victims of Jack the Ripper. In 1955, Ruth Ellis, the last woman executed in the United Kingdom, shot David Blakely as he emerged from The Magdala in South Hill Park, Hampstead;"The Magdala" FancyaPint.com (Retrieved 13 February 2010) the bullet holes can still be seen in the walls outside. It is said that Vladimir Lenin and a young Joseph Stalin met in the Crown and Anchor pub (now known as The Crown Tavern) on Clerkenwell Green when the latter was visiting London in 1903.
The Angel, Islington was formerly a coaching inn, the first on the route northwards out of London, where Thomas Paine is believed to have written much of The Rights of Man. It was mentioned by Charles Dickens, became a Lyons Corner House, and is now a Co-operative Bank.
Oxford and Cambridge
The Eagle and Child and the Lamb and Flag, Oxford, were regular meeting places of the Inklings, a writers' group which included J. R. R. Tolkien and C. S. Lewis. The Eagle in Cambridge is where Francis Crick interrupted patrons' lunchtime on 28 February 1953 to announce that he and James Watson had "discovered the secret of life" after they had come up with their proposal for the structure of DNA.Regis, Ed (2009) What Is Life?: investigating the nature of life in the age of synthetic biology. Oxford: Oxford University Press ISBN 0-19-538341-9; p. 52 The anecdote is related in Watson's book The Double Helix and commemorated with a blue plaque on the outside wall.
Fictional pubs
Soap operas
thumb|The fictitious Queen Victoria pub, EastEnders, London
The major soap operas on British television each feature a pub, and these pubs have become household names.Soap box or soft soap? audience attitudes to the British soap opera; by Andrea Millwood Hargrave with Lucy Gatfield, May 2002, Broadcasting Standards Commission; p. 20. Retrieved 21 July 2009. The Rovers Return is the pub in Coronation Street, the British soap broadcast on ITV. The Queen Vic (short for the Queen Victoria) is the pub in EastEnders, the major soap on BBC One, and the Woolpack is the pub in ITV's Emmerdale. The sets of each of the three major television soap operas have been visited by some of the members of the royal family, including Queen Elizabeth II. The centrepiece of each visit was a trip into the Rovers, the Queen Vic, or the Woolpack to be offered a drink.
The Bull in the BBC Radio 4 soap opera The Archers is an important meeting point.
Outside Great Britain
right|thumb|A Swedish pub serving Irish beer
thumb|A pub in Russia
Although "British" pubs found outside of Britain and its former colonies are often themed bars owing little to the original British pub, a number of "true" pubs may be found around the world.
In Denmark—a country, like Britain, with a long tradition of brewing—a number of pubs have opened which eschew "theming", and which instead focus on the business of providing carefully conditioned beer, often independent of any particular brewery or chain, in an environment which would not be unfamiliar to a British pub-goer. Some import British cask ale, rather than beer in kegs, to provide the full British real ale experience to their customers. This newly established Danish interest in British cask beer and the British pub tradition is reflected by the fact that some 56 British cask beers were available at the 2008 European Beer Festival in Copenhagen, which was attended by more than 20,000 people.
In Ireland, pubs are known for their atmosphere or "craic". In Irish, a pub is referred to as teach tábhairne ("tavernhouse") or teach óil ("drinkinghouse"). Live music, either sessions of traditional Irish music or varieties of modern popular music, is frequently featured in the pubs of Ireland. Pubs in Northern Ireland are largely identical to their counterparts in the Republic of Ireland except for the lack of spirit grocers. A side effect of "The Troubles" was that the lack of a tourist industry meant that a higher proportion of traditional bars have survived the wholesale refitting of Irish pub interiors in the 'English style' in the 1950s and 1960s. New Zealand sports a number of Irish pubs.
The most popular term for a drinking establishment in English-speaking Canada was "tavern" until the 1970s, when the term "bar" became widespread as in the United States. In the 1800s the term used was "public house", as in England, but "pub culture" did not spread to Canada. A trend for fake "English-looking" pubs, built into existing storefronts like regular bars, started in the 1990s. Most universities in Canada have campus pubs which are central to student life, as it would be bad form to serve alcohol to students without providing some type of basic food. Often these pubs are run by the students' union. The gastropub concept has caught on, as traditional British influences are to be found in many Canadian dishes. On 16 March 2012, Malcolm McDowell (with fellow English actor Gary Oldman in attendance to pay tribute) received a star on the Hollywood Walk of Fame, aptly outside the Pig n’ Whistle British pub on Hollywood Boulevard."It's about time! Movie veteran Malcolm McDowell finally awarded a star on the Hollywood Walk of Fame". Daily Mail.
See also
Tavern
Bar
Campaign for Real Ale
Pub crawl
Public houses in Ireland
List of award-winning pubs in London
List of microbreweries
List of public house topics
List of public houses in Australia
References
Bibliography
Cornell, Martyn (2003). Beer: the story of the pint. London: Headline. ISBN 978-0-7553-1165-1.
Haydon, Peter (2001). Beer and Britannia: an inebriated history of Britain. Stroud: Sutton. ISBN 978-0-7509-2748-2.
Jackson, Michael & Smyth, Frank (1976). The English Pub. London: Collins. ISBN 0-00-216210-5.
www.breweryartists.co.uk A history of the Brewery Artists Inn Sign studio
Further reading
Burke, Thomas (1927). The Book of the Inn: being two hundred pictures of the English inn from the earliest times to the coming of the railway hotel; selected and edited by Thomas Burke. London: Constable.
Burke, Thomas (1930). The English Inn. (English Heritage.) London: Herbert Jenkins.
Burke, Thomas (1947). The English Inn (Revised ed.). (The Country Books.) London: Herbert Jenkins.
Clark, Peter (1983). The English Alehouse: a social history, 1200–1830. Harlow: Longman. ISBN 0-582-50835-5.
Clark, Peter (1978). "The Alehouse and the Alternative Society", in: Puritans and Revolutionaries: essays in seventeenth-century history presented to Christopher Hill; ed. D. H. Pennington & Keith Thomas. Oxford: Clarendon Press, 1978; pp. 47–72.
Douch, H. L. (1966). Old Cornish Inns and their place in the social history of the County. Truro: D. Bradford Barton.
Everitt, Alan (1985). "The English Urban Inn", in his: Landscape and Community in England. London: Hambledon Press ISBN 0-907628-42-7. (The Oxford Companion to Local and Family History (ed. David Hey), 1996, describes this as "the starting point for modern studies [of inns]"; Everitt described most of the previous literature on the topic as "a wretched farrago of romantic legends, facetious humour and irritating errors".)
Hackwood, Frederick W. (1910). Inns, Ales and Drinking Customs of Old England. London: T. Fisher Unwin.
Reissued: London: Bracken Books, 1985. ISBN 0-946495-25-4.
Martin, John (1993). Stanley Chew's Pub Signs: a celebration of the art and heritage of British pub signs. Worcester: John Martin. ISBN 1-85421-225-7.
Monson-Fitzjohn, G. J. (1926) Quaint Signs of Olde Inns. London: Herbert Jenkins (reissued by Senate, London, 1994 ISBN 1-85958-028-9).
Richardson, A. E. (1934). The Old Inns of England. London: B. T. Batsford.
External links
Lost Pubs Project - archive of closed English pubs
Category:Bartending
Category:Types of drinking establishment
Category:Types of restaurants
Category:British culture
Category:Local cultures | 24,578 | 2017-01 |
National Archives and Records Administration | The National Archives and Records Administration (NARA) is an independent agency of the United States government charged with preserving and documenting government and historical records and with increasing public access to those documents, which comprise the National Archives. NARA is officially responsible for maintaining and publishing the legally authentic and authoritative copies of acts of Congress, presidential proclamations and executive orders, and federal regulations. The NARA also transmits votes of the Electoral College to Congress.
The chief administrator of NARA is the Archivist of the United States.
Organization
The Archivist of the United States is the chief official overseeing the operation of the National Archives and Records Administration. The Archivist not only maintains the official documentation of the passage of amendments to the U.S. Constitution by state legislatures, but has the authority to declare when the constitutional threshold for passage has been reached, and therefore when an act has become an amendment.
The Office of the Federal Register publishes the Federal Register, Code of Federal Regulations, and United States Statutes at Large, among others. It also administers the Electoral College.
The National Historical Publications and Records Commission (NHPRC)—the agency's grant-making arm—awards funds to state and local governments, public and private archives, colleges and universities, and other nonprofit organizations to preserve and publish historical records. Since 1964, the NHPRC has awarded some 4,500 grants.
The Office of Government Information Services (OGIS) is a Freedom of Information Act (FOIA) resource for the public and the government. Congress has charged NARA with reviewing FOIA policies, procedures and compliance of Federal agencies and to recommend changes to FOIA. NARA's mission also includes resolving FOIA disputes between Federal agencies and requesters.
History
thumb|left|Rotunda of the National Archives Building
Originally, each branch and agency of the U.S. government was responsible for maintaining its own documents, which often resulted in the loss and destruction of records. Congress established the National Archives Establishment in 1934 to centralize federal record keeping, with the Archivist of the United States as chief administrator.Act of June 19, 1934, Pub. L. No. 73-432, 48 Stat. 1122 (establishing the National Archives). The National Archives was incorporated into the General Services Administration (GSA) in 1949; in 1985 it became an independent agency, NARA (National Archives and Records Administration).
The first Archivist, R.D.W. Connor, began serving in 1934, when the National Archives was established by Congress. As a result of a first Hoover Commission recommendation, in 1949 the National Archives was placed within the newly formed General Services Administration (GSA). The Archivist served as a subordinate official to the GSA Administrator until the National Archives and Records Administration became an independent agency on April 1, 1985.
In March 2006, it was revealed by the Archivist of the United States in a public hearing that a memorandum of understanding between NARA and various government agencies existed to "reclassify", i.e., withdraw from public access, certain documents in the name of national security, and to do so in a manner such that researchers would not be likely to discover the process (the U.S. reclassification program). An audit indicated that more than one third of the documents withdrawn since 1999 did not contain sensitive information. The program was originally scheduled to end in 2007.
In 2010, Executive Order 13526 created the National Declassification Center to coordinate declassification practices across agencies, provide secure document services to other agencies, and review records in NARA custody for declassification.
In 2011, a retired employee pleaded guilty to stealing original sound recordings from the archives.
Archival Recovery Teams investigate the theft of records.
Records
NARA's holdings are classed into "record groups" reflecting the governmental department or agency from which they originated. Records include paper documents, microfilm, still pictures, motion pictures, and electronic media.
Archival descriptions of the permanent holdings of the federal government in the custody of NARA are stored in the National Archives Catalog. The archival descriptions include information on traditional paper holdings, electronic records, and artifacts. As of December 2012, the catalog consisted of about 10 billion logical data records describing 527,000 artifacts and encompassing 81% of NARA's records. There are also 922,000 digital copies of already digitized materials.
Most records at NARA are in the public domain, as works of the federal government are excluded from copyright protection. However, records from other sources may still be protected by copyright or donor agreements. Executive Order 13526 directs originating agencies to declassify documents if possible before shipment to NARA for long-term storage,Section 3.2 (d) but NARA also stores some classified documents until they can be declassified. Its Information Security Oversight Office monitors and sets policy for the U.S. government's security classification system.
Many of NARA's most requested records are frequently used for genealogy research. This includes census records from 1790 to 1940, ships' passenger lists, and naturalization records.
Facilities and exhibition spaces
National Archives Building
thumb|The National Archives Building from Constitution Avenue
The National Archives Building, known informally as Archives I, located north of the National Mall on Constitution Avenue in Washington, D.C., opened as its original headquarters in 1935. It holds the original copies of the three main formative documents of the United States and its government: the Declaration of Independence, the Constitution, and the Bill of Rights. It also hosts a copy of the 1297 Magna Carta confirmed by Edward I. These are displayed to the public in the main chamber of the National Archives, which is called the Rotunda for the Charters of Freedom. The National Archives Building also exhibits other important American historical documents such as the Louisiana Purchase Treaty, the Emancipation Proclamation, and collections of photography and other historically and culturally significant American artifacts.
Once inside the Rotunda for the Charters of Freedom, there are no lines to see the individual documents and visitors are allowed to walk from document to document as they wish. For over 30 years the National Archives forbade flash photography, but the advent of cameras with automatic flashes made the rule increasingly difficult to enforce. As a result, all filming, photographing, and videotaping by the public in the exhibition areas has been prohibited since February 25, 2010.
left|thumb|A student from American University scans in War of 1812 military records to be filed online and available for public use
An Innovation Hub provides facilities for the public to access NARA documents and provide metadata.
National Archives at College Park
thumb|right|NARA facility near the University of Maryland, College Park
Because of space constraints, NARA opened a second facility, known informally as Archives II, in 1994 near the University of Maryland, College Park campus (8601 Adelphi Road, College Park, MD, 20740-6001). Largely because of this proximity, NARA and the University of Maryland engage in cooperative initiatives. The College Park campus includes an archaeological site that was listed on the National Register of Historic Places in 1996.
Washington National Records Center
The Washington National Records Center (WNRC), located in Suitland, Maryland, is a large warehouse-type facility which stores federal records that are still under the control of the creating agency. Federal government agencies pay a yearly fee for storage at the facility. In accordance with federal records schedules, documents at WNRC are transferred to the legal custody of the National Archives after a certain point (this usually involves a relocation of the records to College Park). Temporary records at WNRC are either retained for a fee or destroyed after their retention periods have elapsed. WNRC also offers research services and maintains a small research room.
Affiliated facilities
The National Archives Building in downtown Washington holds record collections such as all existing federal census records, ships' passenger lists, military unit records from the American Revolution to the Philippine–American War, records of the Confederate government, the Freedmen's Bureau records, and pension and land records.
There are also ten Affiliated Archives locations across the U.S. which hold, by formal, written agreement with NARA, accessioned records.
Oklahoma Historical Society, Oklahoma City, Oklahoma
Pennsylvania State Archives, Bureau of Archives and History, Harrisburg, Pennsylvania
Prints and Photographs Division, Library of Congress, Washington, D.C.
State Records Center and Archives, Santa Fe, New Mexico
U.S. Government Printing Office, Washington, D.C.
U.S. Military Academy Archives, West Point, New York
University of North Texas Libraries, Denton, Texas
William W. Jeffries Memorial Archives, U.S. Naval Academy, Annapolis, Maryland
Yellowstone National Park Archives, Wyoming
Regional facilities
thumb|right|The National Archives at Atlanta facility in Morrow, Georgia
There are facilities across the country with research rooms, archival holdings, and microfilms of documents of federal agencies and courts pertinent to each region.
Atlanta, Georgia, Southeast Region; NARA at Atlanta is located in Morrow, Georgia
Boston, Massachusetts, Northeast Region; NARA at Boston is located in Waltham, Massachusetts
Chicago, Illinois, Great Lakes Region
Denver, Colorado, Rocky Mountain Region; NARA at Denver is located in Broomfield, Colorado
Fort Worth, Texas, Southwest Region
Kansas City, Missouri, Central Plains Region
New York City, New York, Northeast Region
Philadelphia, Pennsylvania, Mid Atlantic Region
Riverside, California, Pacific Region
San Francisco, California, Pacific Region; NARA at San Francisco is located in San Bruno, California
Seattle, Washington, Pacific Alaska Region
Two offices in the St. Louis, Missouri area comprise the National Personnel Records Center.
Spanish Lake, Missouri, Military Personnel Records Center
Valmeyer, Illinois, Civilian Personnel Records Center
In addition, Federal Records Centers exist in each region that house materials owned by Federal agencies. Federal Records Centers are not open for public research. For example, the FRC in Lenexa, Kansas holds items from the treatment of John F. Kennedy after his fatal shooting in 1963.
Presidential libraries
NARA also maintains the Presidential Library system, a nationwide network of libraries for preserving and making available the documents of U.S. presidents since Herbert Hoover. The Presidential Libraries include:
Herbert Hoover Presidential Library in West Branch, Iowa
Franklin D. Roosevelt Presidential Library in Hyde Park, New York
Harry S. Truman Presidential Library in Independence, Missouri
Dwight D. Eisenhower Presidential Library in Abilene, Kansas
John F. Kennedy Presidential Library in Boston, Massachusetts
Lyndon B. Johnson Presidential Library in Austin, Texas
Richard Nixon Presidential Library and Museum in Yorba Linda, California
Gerald R. Ford Presidential Library in Ann Arbor, Michigan
Gerald R. Ford Presidential Museum in Grand Rapids, Michigan
Jimmy Carter Presidential Library in Atlanta, Georgia
Ronald Reagan Presidential Library in Simi Valley, California
George Bush Presidential Library in College Station, Texas
William J. Clinton Presidential Library in Little Rock, Arkansas
George W. Bush Presidential Library in Dallas, Texas
Libraries and museums have been established for other presidents, including the Abraham Lincoln, Rutherford B. Hayes, William McKinley, Woodrow Wilson, and Calvin Coolidge libraries, but they are not part of the NARA presidential library system and are operated by private foundations, historical societies, or state governments. For example, the Abraham Lincoln Presidential Library and Museum is owned and operated by the state of Illinois.
thumb|right|The broad range of material which NARA preserves at the Presidential libraries is exemplified by the President's VH-3A "Sea King" helicopter at the Richard Nixon Presidential Library and Museum.
Public–private partnerships
In an effort to make its holdings more widely available and more easily accessible, the National Archives began entering into public–private partnerships in 2006. A joint venture with Google will digitize and offer NARA video online. When announcing the agreement, Archivist Allen Weinstein said that this pilot program is
… an important step for the National Archives to achieve its goal of becoming an archive without walls. Our new strategic plan emphasizes the importance of providing access to records anytime, anywhere. This is one of many initiatives that we are launching to make our goal a reality. For the first time, the public will be able to view this collection of rare and unusual films on the Internet.
On January 10, 2007, the National Archives and Fold3.com (formerly Footnote) launched a pilot project to digitize historic documents from the National Archives holdings. Allen Weinstein explained that this partnership would "allow much greater access to approximately 4.5 million pages of important documents that are currently available only in their original format or on microfilm" and "would also enhance NARA's efforts to preserve its original records."
In July 2007, the National Archives announced it would make its collection of Universal Newsreels from 1929 to 1967 available for purchase through CreateSpace, an Amazon.com subsidiary. During the announcement, Weinstein noted that the agreement would "... reap major benefits for the public-at-large and for the National Archives." He added, "While the public can come to our College Park, MD research room to view films and even copy them at no charge, this new program will make our holdings much more accessible to millions of people who cannot travel to the Washington, DC area." The agreement also calls for CreateSpace to provide the National Archives with digital reference and preservation copies of the films as part of NARA's preservation program.
In May 2008, the National Archives announced a five-year agreement with Ancestry.com to digitize selected records, including the complete U.S. Federal Census Collection, 1790–1930, passenger lists from 1820 to 1960, and World War I and World War II draft registration cards. The partnership agreement gives Ancestry.com exclusive use of the digitized records for a five-year embargo period, after which the digital records will be turned over to the National Archives.
Social media and Web 2.0
The National Archives currently utilizes social media and Web 2.0 technologies in an attempt to communicate better with the public.
On June 18, 2009, the National Archives announced the launching of a YouTube channel "to showcase popular archived films, inform the public about upcoming events around the country, and bring National Archives exhibits to the people." Also in 2009, the National Archives launched a Flickr photostream to share portions of its photographic holdings with the general public.
A new teaching-with-documents website, developed by the education team, premiered in 2010. The website, docsteach.org, features 3,000 documents, images, and recordings from the holdings of the Archives. The site also offers lesson plans and tools for creating new classroom activities and lessons.
In 2011 the National Archives initiated a Wikiproject on the English Wikipedia to expand collaboration in making its holdings widely available through Wikimedia.
See also
1973 National Archives Fire
Sandy Berger, National Security Advisor to President Bill Clinton, who pleaded guilty to the 2004 theft of documents from the Archives
List of U.S. state libraries and archives
Digital preservation
Electronic Records Archives
National Digital Library Program (NDLP)
List of national archives
National Digital Information Infrastructure and Preservation Program
National Security Archive
U.S. Constitution
White House Millennium Council (time capsule)
Archival Recovery Team
Apollo 11 lunar sample display
Apollo 17 lunar sample display
Notes
References
Further reading
The statue Gladiator, commissioned for the main National Archives building in Washington, D.C., in 1935.
External links
Federal Register.gov: National Archives and Records Administration
The National Archives Catalog of the National Archives and Records Administration.
Outdoor sculptures at the National Archives Building
Footnote.com: NARA
FamilySearch.org: NARA (National Archives and Records Administration) — research wiki for genealogists.
National Archives and Records Administration's Our Archives wiki — information about NARA and its archived records.
Roaminghistorian.com: Visiting the National Archives
Category:Archives in the United States
Category:Library of Congress
United States
Category:Photo archives
Category:University of Maryland, College Park
Category:Government agencies established in 1985
Category:1985 establishments in the United States
Category:World Digital Library partners
Category:Records management | 70,667 | 2017-01 |
Middle Ages | thumb|upright=1.35|The Cross of Mathilde, a crux gemmata made for Mathilde, Abbess of Essen (973–1011), who is shown kneeling before the Virgin and Child in the enamel plaque. The body of Christ is slightly later. Probably made in Cologne or Essen, the cross demonstrates several medieval techniques: cast figurative sculpture, filigree, enamelling, gem polishing and setting, and the reuse of Classical cameos and engraved gems.
In the history of Europe, the Middle Ages or medieval period lasted from the 5th to the 15th century. It began with the fall of the Western Roman Empire and merged into the Renaissance and the Age of Discovery. The Middle Ages is the middle period of the three traditional divisions of Western history: classical antiquity, the medieval period, and the modern period. The medieval period is itself subdivided into the Early, High, and Late Middle Ages.
Population decline, counterurbanisation, invasion, and movement of peoples, which had begun in Late Antiquity, continued in the Early Middle Ages. The large-scale movements of the Migration Period, including various Germanic peoples, formed new kingdoms in what remained of the Western Roman Empire. In the seventh century, North Africa and the Middle East—once part of the Byzantine Empire—came under the rule of the Umayyad Caliphate, an Islamic empire, after conquest by Muhammad's successors. Although there were substantial changes in society and political structures, the break with classical antiquity was not complete. The still-sizeable Byzantine Empire survived in the east and remained a major power. The empire's law code, the Corpus Juris Civilis or "Code of Justinian", was rediscovered in Northern Italy in 1070 and became widely admired later in the Middle Ages. In the West, most kingdoms incorporated the few extant Roman institutions. Monasteries were founded as campaigns to Christianise pagan Europe continued. The Franks, under the Carolingian dynasty, briefly established the Carolingian Empire during the later 8th and early 9th century. It covered much of Western Europe but later succumbed to the pressures of internal civil wars combined with external invasions—Vikings from the north, Hungarians from the east, and Saracens from the south.
During the High Middle Ages, which began after 1000, the population of Europe increased greatly as technological and agricultural innovations allowed trade to flourish and the Medieval Warm Period climate change allowed crop yields to increase. Manorialism, the organisation of peasants into villages that owed rent and labour services to the nobles, and feudalism, the political structure whereby knights and lower-status nobles owed military service to their overlords in return for the right to rent from lands and manors, were two of the ways society was organised in the High Middle Ages. The Crusades, first preached in 1095, were military attempts by Western European Christians to regain control of the Holy Land from Muslims. Kings became the heads of centralised nation states, reducing crime and violence but making the ideal of a unified Christendom more distant. Intellectual life was marked by scholasticism, a philosophy that emphasised joining faith to reason, and by the founding of universities. The theology of Thomas Aquinas, the paintings of Giotto, the poetry of Dante and Chaucer, the travels of Marco Polo, and the Gothic architecture of cathedrals such as Chartres are among the outstanding achievements toward the end of this period and into the Late Middle Ages.
The Late Middle Ages was marked by difficulties and calamities including famine, plague, and war, which significantly diminished the population of Europe; between 1347 and 1350, the Black Death killed about a third of Europeans. Controversy, heresy, and the Western Schism within the Catholic Church paralleled the interstate conflict, civil strife, and peasant revolts that occurred in the kingdoms. Cultural and technological developments transformed European society, concluding the Late Middle Ages and beginning the early modern period.
Terminology and periodisation
The Middle Ages is one of the three major periods in the most enduring scheme for analysing European history: classical civilisation, or Antiquity; the Middle Ages; and the Modern Period.Power Central Middle Ages p. 304
Medieval writers divided history into periods such as the "Six Ages" or the "Four Empires", and considered their time to be the last before the end of the world.Mommsen "Petrarch's Conception of the 'Dark Ages'" Speculum pp. 236–237 When referring to their own times, they spoke of them as being "modern".Singman Daily Life p. x In the 1330s, the humanist and poet Petrarch referred to pre-Christian times as antiqua (or "ancient") and to the Christian period as nova (or "new").Knox "History of the Idea of the Renaissance" Leonardo Bruni was the first historian to use tripartite periodisation in his History of the Florentine People (1442).Bruni History of the Florentine people p. xvii Bruni and later historians argued that Italy had recovered since Petrarch's time, and therefore added a third period to Petrarch's two. The "Middle Ages" first appears in Latin in 1469 as media tempestas or "middle season".Miglio "Curial Humanism" Interpretations of Renaissance Humanism p. 112 In early usage, there were many variants, including medium aevum, or "middle age", first recorded in 1604,Albrow Global Age p. 205 and media saecula, or "middle ages", first recorded in 1625. The alternative term "medieval" (or occasionally "mediaeval" or "mediæval")"Mediaeval" Compact Edition of the Oxford English Dictionary derives from medium aevum.Flexner (ed.) Random House Dictionary p. 1194 Tripartite periodisation became standard after the German 17th-century historian Christoph Cellarius divided history into three periods: Ancient, Medieval, and Modern.Murray "Should the Middle Ages Be Abolished?" Essays in Medieval Studies p. 4
The most commonly given starting point for the Middle Ages is 476,"Middle Ages" Dictionary.com first used by Bruni. For Europe as a whole, 1500 is often considered to be the end of the Middle Ages,See the titles of Watts Making of Polities Europe 1300–1500 or Epstein Economic History of Later Medieval Europe 1000–1500 or the end date used in Holmes (ed.) Oxford History of Medieval Europe but there is no universally agreed upon end date. Depending on the context, events such as Christopher Columbus's first voyage to the Americas in 1492, the conquest of Constantinople by the Turks in 1453, or the Protestant Reformation in 1517 are sometimes used. English historians often use the Battle of Bosworth Field in 1485 to mark the end of the period.See the title of Saul Companion to Medieval England 1066–1485 For Spain, dates commonly used are the death of King Ferdinand II in 1516, the death of Queen Isabella I of Castile in 1504, or the conquest of Granada in 1492.Kamen Spain 1469–1714 p. 29 Historians from Romance-speaking countries tend to divide the Middle Ages into two parts: an earlier "High" and later "Low" period. English-speaking historians, following their German counterparts, generally subdivide the Middle Ages into three intervals: "Early", "High", and "Late". In the 19th century, the entire Middle Ages were often referred to as the "Dark Ages",Mommsen "Petrarch's Conception of the 'Dark Ages'" Speculum p. 226 but with the adoption of these subdivisions, use of this term was restricted to the Early Middle Ages, at least among historians.
Later Roman Empire
right|thumb|A late Roman sculpture depicting the four Tetrarchs, now in VeniceTansey, et al. Gardner's Art Through the Ages p. 242
The Roman Empire reached its greatest territorial extent during the second century AD; the following two centuries witnessed the slow decline of Roman control over its outlying territories.Cunliffe Europe Between the Oceans pp. 391–393 Economic issues, including inflation, and external pressure on the frontiers combined to create the Crisis of the Third Century, with emperors coming to the throne only to be rapidly replaced by new usurpers.Collins Early Medieval Europe pp. 3–5 Military expenses increased steadily during the third century, mainly in response to the war with the Sasanian Empire, which revived in the middle of the third century. The army doubled in size, and cavalry and smaller units replaced the Roman legion as the main tactical unit.Brown World of Late Antiquity pp. 24–25 The need for revenue led to increased taxes, a decline in the numbers of the curial, or landowning, class, and fewer of its members willing to shoulder the burdens of holding office in their native towns.Heather Fall of the Roman Empire p. 111 More bureaucrats were needed in the central administration to deal with the needs of the army, which led to complaints from civilians that there were more tax-collectors in the empire than tax-payers.
The Emperor Diocletian (r. 284–305) split the empire into separately administered eastern and western halves in 286; the empire was not considered divided by its inhabitants or rulers, as legal and administrative promulgations in one division were considered valid in the other.Collins Early Medieval Europe p. 9 In 330, after a period of civil war, Constantine the Great (r. 306–337) refounded the city of Byzantium as the newly renamed eastern capital, Constantinople.Collins Early Medieval Europe p. 24 Diocletian's reforms strengthened the governmental bureaucracy, reformed taxation, and strengthened the army, which bought the empire time but did not resolve the problems it was facing: excessive taxation, a declining birthrate, and pressures on its frontiers, among others.Cunliffe Europe Between the Oceans pp. 405–406 Civil war between rival emperors became common in the middle of the 4th century, diverting soldiers from the empire's frontier forces and allowing invaders to encroach.Collins Early Medieval Europe pp. 31–33 For much of the 4th century, Roman society stabilised in a new form that differed from the earlier classical period, with a widening gulf between the rich and poor, and a decline in the vitality of the smaller towns.Brown World of Late Antiquity p. 34 Another change was the Christianisation, or conversion of the empire to Christianity, a gradual process that lasted from the 2nd to the 5th centuries.Brown World of Late Antiquity pp. 65–68Brown World of Late Antiquity pp. 82–94
thumb|upright=1.6|left|Map of the approximate political boundaries in Europe around 450
In 376, the Goths, fleeing from the Huns, received permission from Emperor Valens (r. 364–378) to settle in the Roman province of Thracia in the Balkans. The settlement did not go smoothly, and when Roman officials mishandled the situation, the Goths began to raid and plunder. Valens, attempting to put down the disorder, was killed fighting the Goths at the Battle of Adrianople on 9 August 378.Bauer History of the Medieval World pp. 47–49 As well as the threat from such tribal confederacies from the north, internal divisions within the empire, especially within the Christian Church, caused problems.Bauer History of the Medieval World pp. 56–59 In 400, the Visigoths invaded the Western Roman Empire and, although briefly forced back from Italy, in 410 sacked the city of Rome.Bauer History of the Medieval World pp. 80–83 In 406 the Alans, Vandals, and Suevi crossed into Gaul; over the next three years they spread across Gaul and in 409 crossed the Pyrenees Mountains into modern-day Spain.Collins Early Medieval Europe pp. 59–60 The Migration Period began, when various peoples, initially largely Germanic peoples, moved across Europe. The Franks, Alemanni, and the Burgundians all ended up in northern Gaul while the Angles, Saxons, and Jutes settled in Britain, and the Vandals went on to cross the strait of Gibraltar after which they conquered the province of Africa.Collins Early Medieval Europe p. 80 In the 430s the Huns began invading the empire; their king Attila (r. 434–453) led invasions into the Balkans in 442 and 447, Gaul in 451, and Italy in 452.James Europe's Barbarians pp. 67–68 The Hunnic threat remained until Attila's death in 453, when the Hunnic confederation he led fell apart.Bauer History of the Medieval World pp. 117–118 These invasions by the tribes completely changed the political and demographic nature of what had been the Western Roman Empire.Cunliffe Europe Between the Oceans p. 417
By the end of the 5th century the western section of the empire was divided into smaller political units, ruled by the tribes that had invaded in the early part of the century.Wickham Inheritance of Rome p. 79 The deposition of the last emperor of the west, Romulus Augustulus, in 476 has traditionally marked the end of the Western Roman Empire.Wickham Inheritance of Rome p. 86 By 493 the Italian peninsula was conquered by the Ostrogoths.Collins Early Medieval Europe pp. 107–109 The Eastern Roman Empire, often referred to as the Byzantine Empire after the fall of its western counterpart, had little ability to assert control over the lost western territories. The Byzantine emperors maintained a claim over the territory, but while none of the new kings in the west dared to elevate himself to the position of emperor of the west, Byzantine control of most of the Western Empire could not be sustained; the reconquest of the Mediterranean periphery and the Italian Peninsula (Gothic War) in the reign of Justinian (r. 527–565) was the sole, and temporary, exception.Collins Early Medieval Europe pp. 116–134
Early Middle Ages
New societies
The political structure of Western Europe changed with the end of the united Roman Empire. Although the movements of peoples during this period are usually described as "invasions", they were not just military expeditions but migrations of entire peoples into the empire. Such movements were aided by the refusal of the western Roman elites to support the army or pay the taxes that would have allowed the military to suppress the migration.Brown, World of Late Antiquity, pp. 122–124 The emperors of the 5th century were often controlled by military strongmen such as Stilicho (d. 408), Aetius (d. 454), Aspar (d. 471), Ricimer (d. 472), or Gundobad (d. 516), who were partly or fully of non-Roman background. When the line of western emperors ceased, many of the kings who replaced them were from the same background. Intermarriage between the new kings and the Roman elites was common.Wickham, Inheritance of Rome, pp. 95–98 This led to a fusion of Roman culture with the customs of the invading tribes, including the popular assemblies that allowed free male tribal members more say in political matters than was common in the Roman state.Wickham, Inheritance of Rome, pp. 100–101 Material artefacts left by the Romans and the invaders are often similar, and tribal items were often modelled on Roman objects.Collins, Early Medieval Europe, p. 100 Much of the scholarly and written culture of the new kingdoms was also based on Roman intellectual traditions.Collins, Early Medieval Europe, pp. 96–97 An important difference was the gradual loss of tax revenue by the new polities. Many of the new political entities no longer supported their armies through taxes, instead relying on granting them land or rents. This meant there was less need for large tax revenues and so the taxation systems decayed.Wickham, Inheritance of Rome, pp. 102–103 Warfare was common between and within the kingdoms. Slavery declined as the supply weakened, and society became more rural.Backman, Worlds of Medieval Europe, pp. 86–91
thumb|A coin of the Ostrogothic leader Theoderic the Great, struck in Milan, circa AD 491–501.
Between the 5th and 8th centuries, new peoples and individuals filled the political void left by Roman centralised government. The Ostrogoths, a Gothic tribe, settled in Roman Italy in the late fifth century under Theoderic the Great (d. 526) and set up a kingdom marked by co-operation between the Italians and the Ostrogoths, at least until the last years of Theoderic's reign.James Europe's Barbarians pp. 82–88 The Burgundians settled in Gaul, and after an earlier realm was destroyed by the Huns in 436 they formed a new kingdom in the 440s. Centred between today's Geneva and Lyon, it grew to become the realm of Burgundy in the late 5th and early 6th centuries.James Europe's Barbarians pp. 77–78 Elsewhere in Gaul, the Franks and Celtic Britons set up small polities. Francia was centred in northern Gaul, and the first king of whom much is known is Childeric I (d. 481). His grave was discovered in 1653 and is remarkable for its grave goods, which included weapons and a large quantity of gold.James Europe's Barbarians pp. 79–80
Under Childeric's son Clovis I (r. 481–511), the founder of the Merovingian dynasty, the Frankish kingdom expanded and converted to Christianity. The Britons, related to the natives of Britannia — modern-day Great Britain — settled in what is now Brittany.James Europe's Barbarians pp. 78–81 Other monarchies included the Visigothic Kingdom in the Iberian Peninsula, the kingdom of the Suebi in northwestern Iberia, and the Vandal Kingdom in North Africa. In the sixth century, the Lombards settled in Northern Italy, replacing the Ostrogothic kingdom with a grouping of duchies that occasionally selected a king to rule over them all. By the late sixth century, this arrangement had been replaced by a permanent monarchy, the Kingdom of the Lombards.Collins Early Medieval Europe pp. 196–208
The invasions brought new ethnic groups to Europe, although some regions received a larger influx of new peoples than others. In Gaul for instance, the invaders settled much more extensively in the north-east than in the south-west. Slavs settled in Central and Eastern Europe and the Balkan Peninsula. The settlement of peoples was accompanied by changes in languages. The Latin of the Western Roman Empire was gradually replaced by languages based on, but distinct from, Latin, collectively known as Romance languages. These changes from Latin to the new languages took many centuries. Greek remained the language of the Byzantine Empire, but the migrations of the Slavs added Slavic languages to Eastern Europe.Davies Europe pp. 235–238
Byzantine survival
thumb|right|A mosaic showing Justinian with the bishop of Ravenna, bodyguards, and courtiers.Adams History of Western Art pp. 158–159
As Western Europe witnessed the formation of new kingdoms, the Eastern Roman Empire remained intact and experienced an economic revival that lasted into the early 7th century. There were fewer invasions of the eastern section of the empire; most occurred in the Balkans. Peace with the Sasanian Empire, the traditional enemy of Rome, lasted throughout most of the 5th century. The Eastern Empire was marked by closer relations between the political state and Christian Church, with doctrinal matters assuming an importance in eastern politics that they did not have in Western Europe. Legal developments included the codification of Roman law; the first effort—the Codex Theodosianus—was completed in 438.Wickham Inheritance of Rome pp. 81–83 Under Emperor Justinian (r. 527–565), another compilation took place—the Corpus Juris Civilis.Bauer History of the Medieval World pp. 200–202 Justinian also oversaw the construction of the Hagia Sophia in Constantinople and the reconquest of North Africa from the Vandals and Italy from the Ostrogoths, under Belisarius (d. 565).Collins Early Medieval Europe pp. 126, 130 The conquest of Italy was not complete, as a deadly outbreak of plague in 542 led to the rest of Justinian's reign concentrating on defensive measures rather than further conquests.Bauer History of the Medieval World pp. 206–213
At the Emperor's death, the Byzantines had control of most of Italy, North Africa, and a small foothold in southern Spain. Justinian's reconquests have been criticised by historians for overextending his realm and setting the stage for the early Muslim conquests, but many of the difficulties faced by Justinian's successors were due not just to over-taxation to pay for his wars but to the essentially civilian nature of the empire, which made raising troops difficult.Brown "Transformation of the Roman Mediterranean" Oxford Illustrated History of Medieval Europe pp. 8–9
In the Eastern Empire the slow infiltration of the Balkans by the Slavs added a further difficulty for Justinian's successors. It began gradually, but by the late 540s Slavic tribes were in Thrace and Illyricum, and had defeated an imperial army near Adrianople in 551. In the 560s the Avars began to expand from their base on the north bank of the Danube; by the end of the 6th century they were the dominant power in Central Europe and routinely able to force the eastern emperors to pay tribute. They remained a strong power until 796.James Europe's Barbarians pp. 95–99
A further problem for the empire arose from the involvement of Emperor Maurice (r. 582–602) in Persian politics, when he intervened in a succession dispute. This led to a period of peace, but when Maurice was overthrown, the Persians invaded, and during the reign of Emperor Heraclius (r. 610–641) they controlled large parts of the empire, including Egypt, Syria, and Anatolia, until Heraclius' successful counterattack. In 628 the empire secured a peace treaty and recovered all of its lost territories.Collins Early Medieval Europe pp. 140–143
Western society
In Western Europe, some of the older Roman elite families died out while others became more involved with Church affairs than with secular ones. Values attached to Latin scholarship and education mostly disappeared, and while literacy remained important, it became a practical skill rather than a sign of elite status. In the 4th century, Jerome (d. 420) dreamed that God rebuked him for spending more time reading Cicero than the Bible. By the 6th century, Gregory of Tours (d. 594) had a similar dream, but instead of being chastised for reading Cicero, he was chastised for learning shorthand.Brown World of Late Antiquity pp. 174–175 By the late 6th century, the principal means of religious instruction in the Church had become music and art rather than the book.Brown World of Late Antiquity p. 181 Most intellectual efforts went towards imitating classical scholarship, but some original works were created, along with now-lost oral compositions. The writings of Sidonius Apollinaris (d. 489), Cassiodorus (d. c. 585), and Boethius (d. c. 525) were typical of the age.Brown "Transformation of the Roman Mediterranean" Oxford Illustrated History of Medieval Europe pp. 45–49
Changes also took place among laymen, as aristocratic culture focused on great feasts held in halls rather than on literary pursuits. Clothing for the elites was richly embellished with jewels and gold. Lords and kings supported entourages of fighters who formed the backbone of the military forces. Family ties within the elites were important, as were the virtues of loyalty, courage, and honour. These ties led to the prevalence of the feud in aristocratic society, examples of which included those related by Gregory of Tours that took place in Merovingian Gaul. Most feuds seem to have ended quickly with the payment of some sort of compensation.Wickham Inheritance of Rome pp. 189–193 Women took part in aristocratic society mainly in their roles as wives and mothers of men, with the role of mother of a ruler being especially prominent in Merovingian Gaul. In Anglo-Saxon society the lack of many child rulers meant a lesser role for women as queen mothers, but this was compensated for by the increased role played by abbesses of monasteries. Only in Italy does it appear that women were always considered under the protection and control of a male relative.Wickham Inheritance of Rome pp. 195–199
thumb|left|Reconstruction of an early medieval peasant village in Bavaria
Peasant society is much less documented than the nobility. Most of the surviving information available to historians comes from archaeology; few detailed written records documenting peasant life remain from before the 9th century. Most of the descriptions of the lower classes come from either law codes or writers from the upper classes.Wickham Inheritance of Rome p. 204 Landholding patterns in the West were not uniform; some areas had greatly fragmented landholding patterns, but in other areas large contiguous blocks of land were the norm. These differences allowed for a wide variety of peasant societies, some dominated by aristocratic landholders and others having a great deal of autonomy.Wickham Inheritance of Rome pp. 205–210 Land settlement also varied greatly. Some peasants lived in large settlements that numbered as many as 700 inhabitants. Others lived in small groups of a few families and still others lived on isolated farms spread over the countryside. There were also areas where the pattern was a mix of two or more of those systems.Wickham Inheritance of Rome pp. 211–212 Unlike in the late Roman period, there was no sharp break between the legal status of the free peasant and the aristocrat, and it was possible for a free peasant's family to rise into the aristocracy over several generations through military service to a powerful lord.Wickham Inheritance of Rome p. 215
Roman city life and culture changed greatly in the early Middle Ages. Although Italian cities remained inhabited, they contracted significantly in size. Rome, for instance, shrank from a population of hundreds of thousands to around 30,000 by the end of the 6th century. Roman temples were converted into Christian churches and city walls remained in use.Brown "Transformation of the Roman Mediterranean" Oxford Illustrated History of Medieval Europe pp. 24–26 In Northern Europe, cities also shrank, while civic monuments and other public buildings were raided for building materials. The establishment of new kingdoms often meant some growth for the towns chosen as capitals.Gies and Gies Life in a Medieval City pp. 3–4 Although there had been Jewish communities in many Roman cities, the Jews suffered periods of persecution after the conversion of the empire to Christianity. Officially they were tolerated, if subject to conversion efforts, and at times were even encouraged to settle in new areas.Loyn "Jews" Middle Ages p. 191
Rise of Islam
upright=1.3|thumb|The early Muslim conquests.
Religious beliefs in the Eastern Empire and Iran were in flux during the late sixth and early seventh centuries. Judaism was an active proselytising faith, and at least one Arab political leader converted to it. Christianity had active missions competing with the Persians' Zoroastrianism in seeking converts, especially among residents of the Arabian Peninsula. All these strands came together with the emergence of Islam in Arabia during the lifetime of Muhammad (d. 632).Collins Early Medieval Europe pp. 143–145 After his death, Islamic forces conquered much of the Eastern Empire and Persia, starting with Syria in 634–635 and reaching Egypt in 640–641, Persia between 637 and 642, North Africa in the later seventh century, and the Iberian Peninsula in 711.Collins Early Medieval Europe pp. 149–151 By 714, Islamic forces controlled much of the peninsula in a region they called Al-Andalus.Reilly Medieval Spains pp. 52–53
The Islamic conquests reached their peak in the mid-eighth century. The defeat of Muslim forces at the Battle of Tours in 732 led to the reconquest of southern France by the Franks, but the main reason for the halt of Islamic growth in Europe was the overthrow of the Umayyad Caliphate and its replacement by the Abbasid Caliphate. The Abbasids moved their capital to Baghdad and were more concerned with the Middle East than Europe, losing control of sections of the Muslim lands. Umayyad descendants took over the Iberian Peninsula, the Aghlabids controlled North Africa, and the Tulunids became rulers of Egypt.Brown "Transformation of the Roman Mediterranean" Oxford Illustrated History of Medieval Europe p. 15 By the middle of the 8th century, new trading patterns were emerging in the Mediterranean; trade between the Franks and the Arabs replaced the old Roman economy. Franks traded timber, furs, swords and slaves in return for silks and other fabrics, spices, and precious metals from the Arabs.Cunliffe Europe Between the Oceans pp. 427–428
Trade and economy
The migrations and invasions of the 4th and 5th centuries disrupted trade networks around the Mediterranean. African goods stopped being imported into Europe, first disappearing from the interior and by the 7th century found only in a few cities such as Rome or Naples. By the end of the 7th century, under the impact of the Muslim conquests, African products were no longer found in Western Europe. The replacement of goods from long-range trade with local products was a trend throughout the old Roman lands that happened in the Early Middle Ages. This was especially marked in the lands that did not lie on the Mediterranean, such as northern Gaul or Britain. Non-local goods appearing in the archaeological record are usually luxury goods. In the northern parts of Europe, not only were the trade networks local, but the goods carried were simple, with little pottery or other complex products. Around the Mediterranean, pottery remained prevalent and appears to have been traded over medium-range networks, not just produced locally.Wickham Inheritance of Rome pp. 218–219
The various Germanic states in the west all had coinages that imitated existing Roman and Byzantine forms. Gold continued to be minted until the end of the 7th century, when it was replaced by silver coins. The basic Frankish silver coin was the denarius or denier, while the Anglo-Saxon version was called a penny. From these areas, the denier or penny spread throughout Europe during the centuries from 700 to 1000. Copper or bronze coins were not struck, nor were gold except in Southern Europe. No silver coins denominated in multiple units were minted.Grierson "Coinage and currency" Middle Ages
Church and monasticism
thumb|right|upright|An 11th-century illustration of Gregory the Great dictating to a secretary
Christianity was a major unifying factor between Eastern and Western Europe before the Arab conquests, but the conquest of North Africa sundered maritime connections between those areas. Increasingly the Byzantine Church differed in language, practices, and liturgy from the western Church. The eastern church used Greek instead of the western Latin. Theological and political differences emerged, and by the early and middle 8th century issues such as iconoclasm, clerical marriage, and state control of the church had widened to the extent that the cultural and religious differences were greater than the similarities.Collins Early Medieval Europe pp. 218–233 The formal break came in 1054, when the papacy and the patriarchate of Constantinople clashed over papal supremacy and excommunicated each other, which led to the division of Christianity into two churches—the western branch became the Roman Catholic Church and the eastern branch the Orthodox Church.Davies Europe pp. 328–332
The ecclesiastical structure of the Roman Empire survived the movements and invasions in the west mostly intact, but the papacy was little regarded, and few of the western bishops looked to the bishop of Rome for religious or political leadership. Many of the popes prior to 750 were more concerned with Byzantine affairs and eastern theological controversies. The register, or archived copies of the letters, of Pope Gregory the Great (pope 590–604) survived, and of those more than 850 letters, the vast majority were concerned with affairs in Italy or Constantinople. The only part of Western Europe where the papacy had influence was Britain, where Gregory had sent the Gregorian mission in 597 to convert the Anglo-Saxons to Christianity.Wickham Inheritance of Rome pp. 170–172 Irish missionaries were most active in Western Europe between the 5th and the 7th centuries, going first to England and Scotland and then on to the continent. Under such monks as Columba (d. 597) and Columbanus (d. 615), they founded monasteries, taught in Latin and Greek, and authored secular and religious works.Colish Medieval Foundations pp. 62–63
The Early Middle Ages witnessed the rise of monasticism in the West. The shape of European monasticism was determined by traditions and ideas that originated with the Desert Fathers of Egypt and Syria. Most European monasteries were of the type that focuses on community experience of the spiritual life, called cenobitism, which was pioneered by Pachomius (d. 348) in the 4th century. Monastic ideals spread from Egypt to Western Europe in the 5th and 6th centuries through hagiographical literature such as the Life of Anthony.Lawrence Medieval Monasticism pp. 10–13 Benedict of Nursia (d. 547) wrote the Benedictine Rule for Western monasticism during the 6th century, detailing the administrative and spiritual responsibilities of a community of monks led by an abbot.Lawrence Medieval Monasticism pp. 18–24 Monks and monasteries had a deep effect on the religious and political life of the Early Middle Ages, in various cases acting as land trusts for powerful families, centres of propaganda and royal support in newly conquered regions, and bases for missions and proselytisation.Wickham Inheritance of Rome pp. 185–187 They were the main and sometimes only outposts of education and literacy in a region. Many of the surviving manuscripts of the Latin classics were copied in monasteries in the Early Middle Ages.Hamilton Religion in the Medieval West pp. 43–44 Monks were also the authors of new works, including history, theology, and other subjects, written by authors such as Bede (d. 735), a native of northern England who wrote in the late 7th and early 8th centuries.Colish Medieval Foundations pp. 64–65
Carolingian Europe
thumb|right|upright= 1.5|Map showing growth of Frankish power from 481 to 814
The Frankish kingdom in northern Gaul split into kingdoms called Austrasia, Neustria, and Burgundy during the 6th and 7th centuries, all of them ruled by the Merovingian dynasty, who were descended from Clovis. The 7th century was a tumultuous period of wars between Austrasia and Neustria.Bauer History of the Medieval World pp. 246–253 Such warfare was exploited by Pippin (d. 640), the Mayor of the Palace for Austrasia who became the power behind the Austrasian throne. Later members of his family inherited the office, acting as advisers and regents. One of his descendants, Charles Martel (d. 741), won the Battle of Poitiers in 732, halting the advance of Muslim armies across the Pyrenees.Bauer History of the Medieval World pp. 347–349 Great Britain was divided into small states dominated by the kingdoms of Northumbria, Mercia, Wessex, and East Anglia, which were descended from the Anglo-Saxon invaders. Smaller kingdoms in present-day Wales and Scotland were still under the control of the native Britons and Picts.Wickham Inheritance of Rome pp. 158–159 Ireland was divided into even smaller political units, usually known as tribal kingdoms, under the control of kings. There were perhaps as many as 150 local kings in Ireland, of varying importance.Wickham Inheritance of Rome pp. 164–165
The Carolingian dynasty, as the successors to Charles Martel are known, officially took control of the kingdoms of Austrasia and Neustria in a coup of 753 led by Pippin III (r. 752–768). A contemporary chronicle claims that Pippin sought, and gained, authority for this coup from Pope Stephen II (pope 752–757). Pippin's takeover was reinforced with propaganda that portrayed the Merovingians as inept or cruel rulers, exalted the accomplishments of Charles Martel, and circulated stories of the family's great piety. At the time of his death in 768, Pippin left his kingdom in the hands of his two sons, Charles (r. 768–814) and Carloman (r. 768–771). When Carloman died of natural causes, Charles blocked the succession of Carloman's young son and installed himself as the king of the united Austrasia and Neustria. Charles, more often known as Charles the Great or Charlemagne, embarked upon a programme of systematic expansion in 774 that unified a large portion of Europe, eventually controlling modern-day France, northern Italy, and Saxony. In the wars that lasted beyond 800, he rewarded allies with war booty and command over parcels of land.Bauer History of the Medieval World pp. 371–378 In 774, Charlemagne conquered the Lombards, which freed the papacy from the fear of Lombard conquest and marked the beginnings of the Papal States.Brown "Transformation of the Roman Mediterranean" Oxford Illustrated History of Medieval Europe p. 20
thumb|upright|Charlemagne's palace chapel at Aachen, completed in 805Stalley Early Medieval Architecture p. 73
The coronation of Charlemagne as emperor on Christmas Day 800 is regarded as a turning point in medieval history, marking a return of the Western Roman Empire, since the new emperor ruled over much of the area previously controlled by the western emperors.Backman Worlds of Medieval Europe p. 109 It also marks a change in Charlemagne's relationship with the Byzantine Empire, as the assumption of the imperial title by the Carolingians asserted their equivalence to the Byzantine state.Backman Worlds of Medieval Europe pp. 117–120 There were several differences between the newly established Carolingian Empire and both the older Western Roman Empire and the concurrent Byzantine Empire. The Frankish lands were rural in character, with only a few small cities. Most of the people were peasants settled on small farms. Little trade existed and much of that was with the British Isles and Scandinavia, in contrast to the older Roman Empire with its trading networks centred on the Mediterranean. The empire was administered by an itinerant court that travelled with the emperor, as well as approximately 300 imperial officials called counts, who administered the counties the empire had been divided into. Clergy and local bishops served as officials, as well as the imperial officials called missi dominici, who served as roving inspectors and troubleshooters.Davies Europe p. 302
Carolingian Renaissance
Charlemagne's court in Aachen was the centre of the cultural revival sometimes referred to as the "Carolingian Renaissance". Literacy increased, as did development in the arts, architecture and jurisprudence, as well as liturgical and scriptural studies. The English monk Alcuin (d. 804) was invited to Aachen and brought the education available in the monasteries of Northumbria. Charlemagne's chancery—or writing office—made use of a new script today known as Carolingian minuscule, allowing a common writing style that advanced communication across much of Europe. Charlemagne sponsored changes in church liturgy, imposing the Roman form of church service on his domains, as well as the Gregorian chant in liturgical music for the churches. An important activity for scholars during this period was the copying, correcting, and dissemination of basic works on religious and secular topics, with the aim of encouraging learning. New works on religious topics and schoolbooks were also produced.Colish Medieval Foundations pp. 66–70 Grammarians of the period modified the Latin language, changing it from the Classical Latin of the Roman Empire into a more flexible form to fit the needs of the church and government. By the reign of Charlemagne, the language had so diverged from the classical that it was later called Medieval Latin.Loyn "Language and dialect" Middle Ages p. 204
Breakup of the Carolingian Empire
Charlemagne planned to continue the Frankish tradition of dividing his kingdom between all his heirs, but was unable to do so as only one son, Louis the Pious (r. 814–840), was still alive by 813. Just before Charlemagne died in 814, he crowned Louis as his successor. Louis's reign of 26 years was marked by numerous divisions of the empire among his sons and, after 829, civil wars between various alliances of father and sons over the control of various parts of the empire. Eventually, Louis recognised his eldest son Lothair I (d. 855) as emperor and gave him Italy. Louis divided the rest of the empire between Lothair and Charles the Bald (d. 877), his youngest son. Lothair took East Francia, comprising both banks of the Rhine and eastwards, leaving Charles West Francia with the empire to the west of the Rhineland and the Alps. Louis the German (d. 876), the middle child, who had been rebellious to the last, was allowed to keep Bavaria under the suzerainty of his elder brother. The division was disputed. Pippin II of Aquitaine (d. after 864), the emperor's grandson, rebelled in a contest for Aquitaine, while Louis the German tried to annex all of East Francia. Louis the Pious died in 840, with the empire still in chaos.Bauer History of the Medieval World pp. 427–431
A three-year civil war followed his death. By the Treaty of Verdun (843), a kingdom between the Rhine and Rhone rivers was created for Lothair to go with his lands in Italy, and his imperial title was recognised. Louis the German was in control of Bavaria and the eastern lands in modern-day Germany. Charles the Bald received the western Frankish lands, comprising most of modern-day France. Charlemagne's grandsons and great-grandsons divided their kingdoms between their descendants, eventually causing all internal cohesion to be lost.Backman Worlds of Medieval Europe p. 139 In 987 the Carolingian dynasty was replaced in the western lands, with the crowning of Hugh Capet (r. 987–996) as king. In the eastern lands the dynasty had died out earlier, in 911, with the death of Louis the Child,Collins Early Medieval Europe pp. 360–361 and the selection of the unrelated Conrad I (r. 911–918) as king.Collins Early Medieval Europe p. 397
The breakup of the Carolingian Empire was accompanied by invasions, migrations, and raids by external foes. The Atlantic and northern shores were harassed by the Vikings, who also raided the British Isles and settled there as well as in Iceland. In 911, the Viking chieftain Rollo (d. c. 931) received permission from the Frankish King Charles the Simple (r. 898–922) to settle in what became Normandy.Backman Worlds of Medieval Europe pp. 141–144 The eastern parts of the Frankish kingdoms, especially Germany and Italy, were under continual Magyar assault until the invaders' defeat at the Battle of Lechfeld in 955.Backman Worlds of Medieval Europe pp. 144–145 The breakup of the Abbasid dynasty meant that the Islamic world fragmented into smaller political states, some of which began expanding into Italy and Sicily, as well as over the Pyrenees into the southern parts of the Frankish kingdoms.Bauer History of the Medieval World pp. 147–149
New kingdoms and Byzantine revival
thumb|290px|Europe in 814
Efforts by local kings to fight the invaders led to the formation of new political entities. In Anglo-Saxon England, King Alfred the Great (r. 871–899) came to an agreement with the Viking invaders in the late 9th century, resulting in Danish settlements in Northumbria, Mercia, and parts of East Anglia.Collins Early Medieval Europe pp. 378–385 By the middle of the 10th century, Alfred's successors had conquered Northumbria, and restored English control over most of the southern part of Great Britain.Collins Early Medieval Europe p. 387 In northern Britain, Kenneth MacAlpin (d. c. 860) united the Picts and the Scots into the Kingdom of Alba.Davies Europe p. 309 In the early 10th century, the Ottonian dynasty had established itself in Germany, and was engaged in driving back the Magyars. Its efforts culminated in the coronation in 962 of Otto I (r. 936–973) as Holy Roman Emperor.Collins Early Medieval Europe pp. 394–404 In 972, he secured recognition of his title by the Byzantine Empire, which he sealed with the marriage of his son Otto II (r. 967–983) to Theophanu (d. 991), daughter of an earlier Byzantine Emperor Romanos II (r. 959–963).Davies Europe p. 317 By the late 10th century Italy had been drawn into the Ottonian sphere after a period of instability;Wickham Inheritance of Rome pp. 435–439 Otto III (r. 996–1002) spent much of his later reign in the kingdom.Whitton "Society of Northern Europe" Oxford Illustrated History of Medieval Europe p. 152 The western Frankish kingdom was more fragmented, and although kings remained nominally in charge, much of the political power devolved to the local lords.Wickham Inheritance of Rome pp. 439–444
Missionary efforts to Scandinavia during the 9th and 10th centuries helped strengthen the growth of kingdoms such as Sweden, Denmark, and Norway, which gained power and territory. Some kings converted to Christianity, although not all by 1000. Scandinavians also expanded and colonised throughout Europe. Besides the settlements in Ireland, England, and Normandy, further settlement took place in what became Russia and in Iceland. Swedish traders and raiders ranged down the rivers of the Russian steppe, and even attempted to seize Constantinople in 860 and 907.Collins Early Medieval Europe pp. 385–389 Christian Spain, initially driven into a small section of the peninsula in the north, expanded slowly south during the 9th and 10th centuries, establishing the kingdoms of Asturias and León.Wickham Inheritance of Rome pp. 500–505
thumb|left|150px|10th-century Ottonian ivory plaque depicting Christ receiving a church from Otto I
In Eastern Europe, Byzantium revived its fortunes under Emperor Basil I (r. 867–886) and his successors Leo VI (r. 886–912) and Constantine VII (r. 913–959), members of the Macedonian dynasty. Commerce revived and the emperors oversaw the extension of a uniform administration to all the provinces. The military was reorganised, which allowed the emperors John I (r. 969–976) and Basil II (r. 976–1025) to expand the frontiers of the empire on all fronts. The imperial court was the centre of a revival of classical learning, a process known as the Macedonian Renaissance. Writers such as John Geometres (fl. early 10th century) composed new hymns, poems, and other works.Davies Europe pp. 318–320 Missionary efforts by both eastern and western clergy resulted in the conversion of the Moravians, Bulgars, Bohemians, Poles, Magyars, and Slavic inhabitants of the Kievan Rus'. These conversions contributed to the founding of political states in the lands of those peoples—the states of Moravia, Bulgaria, Bohemia, Poland, Hungary, and the Kievan Rus'.Davies Europe pp. 321–326 Bulgaria, which was founded around 680, at its height reached from Budapest to the Black Sea and from the Dnieper River in modern Ukraine to the Adriatic Sea.Crampton Concise History of Bulgaria p. 12 By 1018, the last Bulgarian nobles had surrendered to the Byzantine Empire.Curta Southeastern Europe pp. 246–247
Art and architecture
thumb|right|upright|A page from the Book of Kells, an illuminated manuscript created in the British Isles in the late 8th or early 9th centuryNees Early Medieval Art p. 145
Few large stone buildings were constructed between the Constantinian basilicas of the 4th century and the 8th century, although many smaller ones were built during the 6th and 7th centuries. By the beginning of the 8th century, the Carolingian Empire revived the basilica form of architecture.Stalley Early Medieval Architecture pp. 29–35 One feature of the basilica is the use of a transept,Stalley Early Medieval Architecture pp. 43–44 or the "arms" of a cross-shaped building that are perpendicular to the long nave.Cosman Medieval Wordbook p. 247 Other new features of religious architecture include the crossing tower and a monumental entrance to the church, usually at the west end of the building.Stalley Early Medieval Architecture pp. 45, 49
Carolingian art was produced for a small group of figures around the court, and the monasteries and churches they supported. It was dominated by efforts to regain the dignity and classicism of imperial Roman and Byzantine art, but was also influenced by the Insular art of the British Isles. Insular art integrated the energy of Irish Celtic and Anglo-Saxon Germanic styles of ornament with Mediterranean forms such as the book, and established many characteristics of art for the rest of the medieval period. Surviving religious works from the Early Middle Ages are mostly illuminated manuscripts and carved ivories, originally made for metalwork that has since been melted down.Kitzinger Early Medieval Art pp. 36–53, 61–64Henderson Early Medieval pp. 18–21, 63–71 Objects in precious metals were the most prestigious form of art, but almost all are lost except for a few crosses such as the Cross of Lothair, several reliquaries, and finds such as the Anglo-Saxon burial at Sutton Hoo and the hoards of Gourdon from Merovingian France, Guarrazar from Visigothic Spain and Nagyszentmiklós near Byzantine territory. There are survivals from the large brooches in fibula or penannular form that were a key piece of personal adornment for elites, including the Irish Tara Brooch.Henderson Early Medieval pp. 36–42, 49–55, 103, 143, 204–208 Highly decorated books were mostly Gospel Books and these have survived in larger numbers, including the Insular Book of Kells, the Book of Lindisfarne, and the imperial Codex Aureus of St. Emmeram, which is one of the few to retain its "treasure binding" of gold encrusted with jewels.Benton Art of the Middle Ages pp. 41–49 Charlemagne's court seems to have been responsible for the acceptance of figurative monumental sculpture in Christian art,Lasko Ars Sacra pp. 16–18 and by the end of the period near life-sized figures such as the Gero Cross were common in important churches.Henderson Early Medieval pp. 233–238
Military and technological developments
During the later Roman Empire, the principal military developments were attempts to create an effective cavalry force as well as the continued development of highly specialised types of troops. The creation of heavily armoured cataphract-type soldiers as cavalry was an important feature of the 5th-century Roman military. The various invading tribes had differing emphasis on types of soldiers—ranging from the primarily infantry Anglo-Saxon invaders of Britain to the Vandals and Visigoths, who had a high proportion of cavalry in their armies.Nicolle Medieval Warfare Source Book: Warfare in Western Christendom pp. 28–29 During the early invasion period, the stirrup had not been introduced into warfare, which limited the usefulness of cavalry as shock troops because it was not possible to put the full force of the horse and rider behind blows struck by the rider.Nicolle Medieval Warfare Source Book: Warfare in Western Christendom p. 30 The greatest change in military affairs during the invasion period was the adoption of the Hunnic composite bow in place of the earlier, and weaker, Scythian composite bow.Nicolle Medieval Warfare Source Book: Warfare in Western Christendom pp. 30–31 Another development was the increasing use of longswordsNicolle Medieval Warfare Source Book: Warfare in Western Christendom p. 34 and the progressive replacement of scale armour by mail armour and lamellar armour.Nicolle Medieval Warfare Source Book: Warfare in Western Christendom p. 39
The importance of infantry and light cavalry began to decline during the early Carolingian period, with a growing dominance of elite heavy cavalry. The use of militia-type levies of the free population declined over the Carolingian period.Nicolle Medieval Warfare Source Book: Warfare in Western Christendom pp. 58–59 Although the Carolingian armies were largely mounted, a large proportion during the early period appear to have been mounted infantry rather than true cavalry.Nicolle Medieval Warfare Source Book: Warfare in Western Christendom p. 76 One exception was Anglo-Saxon England, where the armies were still composed of regional levies, known as the fyrd, which were led by the local elites.Nicolle Medieval Warfare Source Book: Warfare in Western Christendom p. 67 In military technology, one of the main changes was the return of the crossbow, which had been known in Roman times and reappeared as a military weapon during the last part of the Early Middle Ages.Nicolle Medieval Warfare Source Book: Warfare in Western Christendom p. 80 Another change was the introduction of the stirrup, which increased the effectiveness of cavalry as shock troops. A technological advance that had implications beyond the military was the horseshoe, which allowed horses to be used in rocky terrain.Nicolle Medieval Warfare Source Book: Warfare in Western Christendom pp. 88–91
High Middle Ages
Society and economic life
Medieval French manuscript illustration of the three classes of medieval society: those who prayed—the clergy, those who fought—the knights, and those who worked—the peasantry.Whitton "Society of Northern Europe" Oxford Illustrated History of Medieval Europe p. 134 The relationship between these classes was governed by feudalism and manorialism.Gainty and Ward Sources of World Societies p. 352 (Li Livres dou Sante, 13th century)|thumb|left
The High Middle Ages was a period of tremendous expansion of population. The estimated population of Europe grew from 35 to 80 million between 1000 and 1347, although the exact causes remain unclear: improved agricultural techniques, the decline of slaveholding, a more clement climate and the lack of invasion have all been suggested.Jordan Europe in the High Middle Ages pp. 5–12Backman Worlds of Medieval Europe p. 156 As much as 90 per cent of the European population remained rural peasants. Many were no longer settled in isolated farms but had gathered into small communities, usually known as manors or villages. These peasants were often subject to noble overlords and owed them rents and other services, in a system known as manorialism. There remained a few free peasants throughout this period and beyond,Backman Worlds of Medieval Europe pp. 164–165 with more of them in the regions of Southern Europe than in the north. The practice of assarting, or bringing new lands into production by offering incentives to the peasants who settled them, also contributed to the expansion of population.Epstein Economic and Social History pp. 52–53
Other sections of society included the nobility, clergy, and townsmen. Nobles, both the titled nobility and simple knights, exploited the manors and the peasants, although they did not own lands outright but were granted rights to the income from a manor or other lands by an overlord through the system of feudalism. During the 11th and 12th centuries, these lands, or fiefs, came to be considered hereditary, and in most areas they were no longer divisible between all the heirs as had been the case in the early medieval period. Instead, most fiefs and lands went to the eldest son.Barber Two Cities pp. 37–41 The dominance of the nobility was built upon its control of the land, its military service as heavy cavalry, control of castles, and various immunities from taxes or other impositions. Castles, initially in wood but later in stone, began to be constructed in the 9th and 10th centuries in response to the disorder of the time, and provided protection from invaders as well as allowing lords defence from rivals. Control of castles allowed the nobles to defy kings or other overlords.Davies Europe pp. 311–315 Nobles were stratified; kings and the highest-ranking nobility controlled large numbers of commoners and large tracts of land, as well as other nobles. Beneath them, lesser nobles had authority over smaller areas of land and fewer people. Knights were the lowest level of nobility; they controlled but did not own land, and had to serve other nobles.Singman Daily Life p. 3
The clergy was divided into two types: the secular clergy, who lived out in the world, and the regular clergy, who lived under a religious rule and were usually monks.Hamilton Religion in the Medieval West p. 33 Throughout the period monks remained a very small proportion of the population, usually less than one per cent.Singman Daily Life p. 143 Most of the regular clergy were drawn from the nobility, the same social class that served as the recruiting ground for the upper levels of the secular clergy. The local parish priests were often drawn from the peasant class.Barber Two Cities pp. 33–34 Townsmen were in a somewhat unusual position, as they did not fit into the traditional three-fold division of society into nobles, clergy, and peasants. During the 12th and 13th centuries, the ranks of the townsmen expanded greatly as existing towns grew and new population centres were founded.Barber Two Cities pp. 48–49 But throughout the Middle Ages the population of the towns probably never exceeded 10 per cent of the total population.Singman Daily Life p. 171
right|upright|thumb|13th-century illustration of a Jew (in pointed Jewish hat) and the Christian Petrus Alphonsi debating
Jews also spread across Europe during the period. Communities were established in Germany and England in the 11th and 12th centuries, but Spanish Jews, long settled in Spain under the Muslims, came under Christian rule and increasing pressure to convert to Christianity. Most Jews were confined to the cities, as they were not allowed to own land or be peasants.Epstein Economic and Social History p. 54 Besides the Jews, there were other non-Christians on the edges of Europe—pagan Slavs in Eastern Europe and Muslims in Southern Europe.Singman Daily Life p. 13
Women in the Middle Ages were officially required to be subordinate to some male, whether their father, husband, or other kinsman. Widows, who were often allowed much control over their own lives, were still restricted legally. Women's work generally consisted of household or other domestically inclined tasks. Peasant women were usually responsible for taking care of the household, child-care, as well as gardening and animal husbandry near the house. They could supplement the household income by spinning or brewing at home. At harvest-time, they were also expected to help with field-work.Singman Daily Life pp. 14–15 Townswomen, like peasant women, were responsible for the household, and could also engage in trade. What trades were open to women varied by country and period.Singman Daily Life pp. 177–178 Noblewomen were responsible for running a household, and could occasionally be expected to handle estates in the absence of male relatives, but they were usually restricted from participation in military or government affairs. The only role open to women in the Church was that of nuns, as they were unable to become priests.
In central and northern Italy and in Flanders, the rise of towns that were to a degree self-governing stimulated economic growth and created an environment for new types of trade associations. Commercial cities on the shores of the Baltic entered into agreements known as the Hanseatic League, and the Italian Maritime republics such as Venice, Genoa, and Pisa expanded their trade throughout the Mediterranean. Great trading fairs were established and flourished in northern France during the period, allowing Italian and German merchants to trade with each other as well as local merchants.Epstein Economic and Social History pp. 82–83 In the late 13th century new land and sea routes to the Far East were pioneered, famously described in The Travels of Marco Polo written by one of the traders, Marco Polo (d. 1324).Barber Two Cities pp. 60–67 Besides new trading opportunities, agricultural and technological improvements enabled an increase in crop yields, which in turn allowed the trade networks to expand.Backman Worlds of Medieval Europe p. 160 Rising trade brought new methods of dealing with money, and gold coinage was again minted in Europe, first in Italy and later in France and other countries. New forms of commercial contracts emerged, allowing risk to be shared among merchants. Accounting methods improved, partly through the use of double-entry bookkeeping; letters of credit also appeared, allowing easy transmission of money.Barber Two Cities pp. 74–76
Rise of state power
Europe and the Mediterranean Sea in 1190|thumb|upright=1.3
The High Middle Ages was the formative period in the history of the modern Western state. Kings in France, England, and Spain consolidated their power, and set up lasting governing institutions.Backman Worlds of Medieval Europe pp. 283–284 New kingdoms such as Hungary and Poland, after their conversion to Christianity, became Central European powers.Barber Two Cities pp. 365–380 The Magyars settled Hungary around 900 under King Árpád (d. c. 907) after a series of invasions in the 9th century.Davies Europe p. 296 The papacy, long attached to an ideology of independence from secular kings, first asserted its claim to temporal authority over the entire Christian world; the Papal Monarchy reached its apogee in the early 13th century under the pontificate of Innocent III (pope 1198–1216).Backman Worlds of Medieval Europe pp. 262–279 The Northern Crusades and the advance of Christian kingdoms and military orders into previously pagan regions in the Baltic and Finnic north-east brought the forced assimilation of numerous native peoples into European culture.Barber Two Cities pp. 371–372
During the early High Middle Ages, Germany was ruled by the Ottonian dynasty, which struggled to control the powerful dukes ruling over territorial duchies tracing back to the Migration period. In 1024, they were replaced by the Salian dynasty, who famously clashed with the papacy under Emperor Henry IV (r. 1084–1105) over church appointments as part of the Investiture Controversy.Backman Worlds of Medieval Europe pp. 181–186 His successors continued to struggle against the papacy as well as the German nobility. A period of instability followed the death of Emperor Henry V (r. 1111–25), who died without heirs, until Frederick I Barbarossa (r. 1155–90) took the imperial throne.Jordan Europe in the High Middle Ages pp. 143–147 Although he ruled effectively, the basic problems remained, and his successors continued to struggle into the 13th century.Jordan Europe in the High Middle Ages pp. 250–252 Barbarossa's grandson Frederick II (r. 1220–1250), who was also heir to the throne of Sicily through his mother, clashed repeatedly with the papacy. His court was famous for its scholars and he was often accused of heresy.Denley "Mediterranean" Oxford Illustrated History of Medieval Europe pp. 235–238 He and his successors faced many difficulties, including the invasion of the Mongols into Europe in the mid-13th century. The Mongols first shattered the Kievan Rus' principalities and then invaded Eastern Europe in 1241, 1259, and 1287.Davies Europe p. 364
The Bayeux Tapestry (detail) showing William the Conqueror (centre), his half-brothers Robert, Count of Mortain (right) and Odo, Bishop of Bayeux in the Duchy of Normandy (left)|thumb|left
Under the Capetian dynasty the French monarchy slowly began to expand its authority over the nobility, growing out of the Île-de-France to exert control over more of the country in the 11th and 12th centuries.Backman Worlds of Medieval Europe pp. 187–189 They faced a powerful rival in the Dukes of Normandy, who in 1066, under William the Conqueror (duke 1035–1087), conquered England, which William ruled as king (r. 1066–87), and created a cross-channel empire that lasted, in various forms, throughout the rest of the Middle Ages.Jordan Europe in the High Middle Ages pp. 59–61Backman Worlds of Medieval Europe pp. 189–196 Normans also settled in Sicily and southern Italy, when Robert Guiscard (d. 1085) landed there in 1059 and established a duchy that later became the Kingdom of Sicily.Davies Europe p. 294 Under the Angevin dynasty of Henry II (r. 1154–89) and his son Richard I (r. 1189–99), the kings of England ruled over England and large areas of France,Backman Worlds of Medieval Europe p. 263 brought to the family by Henry II's marriage to Eleanor of Aquitaine (d. 1204), heiress to much of southern France.Loyn "Eleanor of Aquitaine" Middle Ages p. 122 Richard's younger brother John (r. 1199–1216) lost Normandy and the rest of the northern French possessions in 1204 to the French King Philip II Augustus (r. 1180–1223). This led to dissension among the English nobility, while John's financial exactions to pay for his unsuccessful attempts to regain Normandy led in 1215 to Magna Carta, a charter that confirmed the rights and privileges of free men in England. Under Henry III (r. 1216–72), John's son, further concessions were made to the nobility, and royal power was diminished.Backman Worlds of Medieval Europe pp. 286–289 The French monarchy continued to make gains against the nobility during the late 12th and 13th centuries, bringing more territories within the kingdom under the king's personal rule and centralising the royal administration.Backman Worlds of Medieval Europe pp. 289–293 Under Louis IX (r. 1226–70), royal prestige rose to new heights as Louis served as a mediator for most of Europe.Davies Europe pp. 355–357
In Iberia, the Christian states, which had been confined to the north-western part of the peninsula, began to push back against the Islamic states in the south, a period known as the Reconquista.Davies Europe p. 345 By about 1150, the Christian north had coalesced into the five major kingdoms of León, Castile, Aragon, Navarre, and Portugal.Barber Two Cities p. 341 Southern Iberia remained under control of Islamic states, initially under the Caliphate of Córdoba, which broke up in 1031 into a shifting number of petty states known as taifas, who fought with the Christians until the Almohad Caliphate re-established centralised rule over Southern Iberia in the 1170s.Barber Two Cities pp. 350–351 Christian forces advanced again in the early 13th century, culminating in the capture of Seville in 1248.Barber Two Cities pp. 353–355
Crusades
thumb|Krak des Chevaliers was built during the Crusades for the Knights Hospitallers.Kaufmann and Kaufmann Medieval Fortress pp. 268–269
In the 11th century, the Seljuk Turks took over much of the Middle East, occupying Persia during the 1040s, Armenia in the 1060s, and Jerusalem in 1070. In 1071, the Turkish army defeated the Byzantine army at the Battle of Manzikert and captured the Byzantine Emperor Romanus IV (r. 1068–71). The Turks were then free to invade Asia Minor, which dealt a dangerous blow to the Byzantine Empire by seizing a large part of its population and its economic heartland. Although the Byzantines regrouped and recovered somewhat, they never fully regained Asia Minor and were often on the defensive. The Turks also had difficulties, losing control of Jerusalem to the Fatimids of Egypt and suffering from a series of internal civil wars.Davies Europe pp. 332–333 The Byzantines also faced a revived Bulgaria, which in the late 12th and 13th centuries spread throughout the Balkans.Davies Europe pp. 386–387
The crusades were intended to seize Jerusalem from Muslim control. The First Crusade was proclaimed by Pope Urban II (pope 1088–99) at the Council of Clermont in 1095 in response to a request from the Byzantine Emperor Alexios I Komnenos (r. 1081–1118) for aid against further Muslim advances. Urban promised indulgence to anyone who took part. Tens of thousands of people from all levels of society mobilised across Europe and captured Jerusalem in 1099. One feature of the crusades was the pogroms against local Jews that often took place as the crusaders left their countries for the East. These were especially brutal during the First Crusade, when the Jewish communities in Cologne, Mainz, and Worms were destroyed, and other communities in cities between the rivers Seine and Rhine suffered destruction.Lock Routledge Companion to the Crusades pp. 397–399 Another outgrowth of the crusades was the foundation of a new type of monastic order, the military orders of the Templars and Hospitallers, which fused monastic life with military service.
The crusaders consolidated their conquests into crusader states. During the 12th and 13th centuries, there were a series of conflicts between those states and the surrounding Islamic states. Appeals from those states to the papacy led to further crusades,Riley-Smith "Crusades" Middle Ages pp. 106–107 such as the Third Crusade, called to try to regain Jerusalem, which had been captured by Saladin (d. 1193) in 1187.Payne Dream and the Tomb pp. 204–205 In 1203, the Fourth Crusade was diverted from the Holy Land to Constantinople, and captured the city in 1204, setting up a Latin Empire of ConstantinopleLock Routledge Companion to the Crusades pp. 156–161 and greatly weakening the Byzantine Empire. The Byzantines recaptured the city in 1261, but never regained their former strength.Backman Worlds of Medieval Europe pp. 299–300 By 1291 all the crusader states had been captured or forced from the mainland, although a titular Kingdom of Jerusalem survived on the island of Cyprus for several years afterwards.Lock Routledge Companion to the Crusades p. 122
Popes called for crusades to take place elsewhere besides the Holy Land: in Spain, southern France, and along the Baltic. The Spanish crusades became fused with the Reconquista of Spain from the Muslims. Although the Templars and Hospitallers took part in the Spanish crusades, similar Spanish military religious orders were founded, most of which had become part of the two main orders of Calatrava and Santiago by the beginning of the 13th century.Lock Routledge Companion to the Crusades pp. 205–213 Northern Europe also remained outside Christian influence until the 11th century or later, and became a crusading venue as part of the Northern Crusades of the 12th to 14th centuries. These crusades also spawned a military order, the Order of the Sword Brothers. Another order, the Teutonic Knights, although originally founded in the crusader states, focused much of its activity in the Baltic after 1225, and in 1309 moved its headquarters to Marienburg in Prussia.Lock Routledge Companion to the Crusades pp. 213–224
Intellectual life
During the 11th century, developments in philosophy and theology led to increased intellectual activity. There was debate between the realists and the nominalists over the concept of "universals". Philosophical discourse was stimulated by the rediscovery of Aristotle and his emphasis on empiricism and rationalism. Scholars such as Peter Abelard (d. 1142) and Peter Lombard (d. 1164) introduced Aristotelian logic into theology. In the late 11th and early 12th centuries cathedral schools spread throughout Western Europe, signalling the shift of learning from monasteries to cathedrals and towns.Backman Worlds of Medieval Europe pp. 232–237 Cathedral schools were in turn replaced by the universities established in major European cities.Backman Worlds of Medieval Europe pp. 247–252 Philosophy and theology fused in scholasticism, an attempt by 12th- and 13th-century scholars to reconcile authoritative texts, most notably Aristotle and the Bible. This movement tried to employ a systemic approach to truth and reasonLoyn "Scholasticism" Middle Ages pp. 293–294 and culminated in the thought of Thomas Aquinas (d. 1274), who wrote the Summa Theologica, or Summary of Theology.Colish Medieval Foundations pp. 295–301
thumb|left|upright|A medieval scholar making precise measurements in a 14th-century manuscript illustration
Chivalry and the ethos of courtly love developed in royal and noble courts. This culture was expressed in the vernacular languages rather than Latin, and comprised poems, stories, legends, and popular songs spread by troubadours, or wandering minstrels. Often the stories were written down in the chansons de geste, or "songs of great deeds", such as The Song of Roland or The Song of Hildebrand.Backman Worlds of Medieval Europe pp. 252–260 Secular and religious histories were also produced.Davies Europe p. 349 Geoffrey of Monmouth (d. c. 1155) composed his Historia Regum Britanniae, a collection of stories and legends about Arthur.Saul Companion to Medieval England pp. 113–114 Other works were more clearly history, such as Otto von Freising's (d. 1158) Gesta Friderici Imperatoris detailing the deeds of Emperor Frederick Barbarossa, or William of Malmesbury's (d. c. 1143) Gesta Regum on the kings of England.
Legal studies advanced during the 12th century. Both secular law and canon law, or ecclesiastical law, were studied in the High Middle Ages. Secular law, or Roman law, was advanced greatly by the discovery of the Corpus Juris Civilis in the 11th century, and by 1100 Roman law was being taught at Bologna. This led to the recording and standardisation of legal codes throughout Western Europe. Canon law was also studied, and around 1140 a monk named Gratian (fl. 12th century), a teacher at Bologna, wrote what became the standard text of canon law—the Decretum.Backman Worlds of Medieval Europe pp. 237–241
Among the results of the Greek and Islamic influence on this period in European history were the replacement of Roman numerals with the decimal positional number system and the introduction of algebra, which allowed more advanced mathematics. Astronomy advanced following the translation of Ptolemy's Almagest from Greek into Latin in the late 12th century. Medicine was also studied, especially in southern Italy, where Islamic medicine influenced the school at Salerno.Backman Worlds of Medieval Europe pp. 241–246
Technology and military
thumb|Portrait of Cardinal Hugh of Saint-Cher by Tommaso da Modena, 1352, the first known depiction of spectaclesIlardi, Renaissance Vision, pp. 18–19
In the 12th and 13th centuries, Europe experienced economic growth and innovations in methods of production. Major technological advances included the invention of the windmill, the first mechanical clocks, the manufacture of distilled spirits, and the use of the astrolabe.Backman Worlds of Medieval Europe p. 246 Convex spectacles were invented around 1286 by an unknown Italian artisan, probably working in or near Pisa.Ilardi, Renaissance Vision, pp. 4–5, 49
The development of a three-field rotation system for planting crops increased the usage of land from one half in use each year under the old two-field system to two-thirds under the new system, with a consequent increase in production. The development of the heavy plough allowed heavier soils to be farmed more efficiently, aided by the spread of the horse collar, which led to the use of draught horses in place of oxen. Horses are faster than oxen and require less pasture, factors that aided the implementation of the three-field system.Backman Worlds of Medieval Europe pp. 156–159
The construction of cathedrals and castles advanced building technology, leading to the development of large stone buildings. Ancillary structures included new town halls, houses, bridges, and tithe barns.Barber Two Cities p. 68 Shipbuilding improved with the use of the rib and plank method rather than the old Roman system of mortise and tenon. Other improvements to ships included the use of lateen sails and the stern-post rudder, both of which increased the speed at which ships could be sailed.Barber Two Cities p. 73
In military affairs, the use of infantry with specialised roles increased. Along with the still-dominant heavy cavalry, armies often included mounted and infantry crossbowmen, as well as sappers and engineers.Nicolle Medieval Warfare Source Book: Warfare in Western Christendom p. 125 Crossbows, which had been known in Late Antiquity, increased in use partly because of the increase in siege warfare in the 10th and 11th centuries. The increasing use of crossbows during the 12th and 13th centuries led to the use of closed-face helmets, heavy body armour, as well as horse armour.Nicolle Medieval Warfare Source Book: Warfare in Western Christendom p. 130 Gunpowder was known in Europe by the mid-13th century with a recorded use in European warfare by the English against the Scots in 1304, although it was merely used as an explosive and not as a weapon. Cannon were being used for sieges in the 1320s, and hand-held guns were in use by the 1360s.
Architecture, art, and music
thumb|The Romanesque Church of Maria Laach, Germany
In the 10th century the establishment of churches and monasteries led to the development of stone architecture that elaborated vernacular Roman forms, from which the term "Romanesque" is derived. Where available, Roman brick and stone buildings were recycled for their materials. From the tentative beginnings known as the First Romanesque, the style flourished and spread across Europe in a remarkably homogeneous form. Just before 1000 there was a great wave of building stone churches all over Europe.Benton Art of the Middle Ages p. 55 Romanesque buildings have massive stone walls, openings topped by semi-circular arches, small windows, and, particularly in France, arched stone vaults.Adams History of Western Art pp. 181–189 The large portal with coloured sculpture in high relief became a central feature of façades, especially in France, and the capitals of columns were often carved with narrative scenes of imaginative monsters and animals.Benton Art of the Middle Ages pp. 58–60, 65–66, 73–75 According to art historian C. R. Dodwell, "virtually all the churches in the West were decorated with wall-paintings", of which few survive.Dodwell Pictorial Arts of the West p. 37 Simultaneous with the development in church architecture, the distinctive European form of the castle was developed, and became crucial to politics and warfare.Benton Art of the Middle Ages pp. 295–299
Romanesque art, especially metalwork, was at its most sophisticated in Mosan art, in which distinct artistic personalities including Nicholas of Verdun (d. 1205) become apparent, and an almost classical style is seen in works such as a font at Liège,Lasko Ars Sacra pp. 240–250 contrasting with the writhing animals of the exactly contemporary Gloucester Candlestick. Large illuminated bibles and psalters were the typical forms of luxury manuscripts, and wall-painting flourished in churches, often following a scheme with a Last Judgement on the west wall, a Christ in Majesty at the east end, and narrative biblical scenes down the nave, or in the best surviving example, at Saint-Savin-sur-Gartempe, on the barrel-vaulted roof.Benton Art of the Middle Ages pp. 91–92
The Gothic interior of Laon Cathedral, France|thumb|left
From the early 12th century, French builders developed the Gothic style, marked by the use of rib vaults, pointed arches, flying buttresses, and large stained glass windows. It was used mainly in churches and cathedrals, and continued in use until the 16th century in much of Europe. Classic examples of Gothic architecture include Chartres Cathedral and Reims Cathedral in France as well as Salisbury Cathedral in England.Adams History of Western Art pp. 195–216 Stained glass became a crucial element in the design of churches, which continued to use extensive wall-paintings, now almost all lost.Benton Art of the Middle Ages pp. 185–190; 269–271
During this period the practice of manuscript illumination gradually passed from monasteries to lay workshops, so that according to Janetta Benton "by 1300 most monks bought their books in shops",Benton Art of the Middle Ages p. 250 and the book of hours developed as a form of devotional book for lay-people. Metalwork continued to be the most prestigious form of art, with Limoges enamel a popular and relatively affordable option for objects such as reliquaries and crosses.Benton Art of the Middle Ages pp. 135–139, 245–247 In Italy the innovations of Cimabue and Duccio, followed by the Trecento master Giotto (d. 1337), greatly increased the sophistication and status of panel painting and fresco.Benton Art of the Middle Ages pp. 264–278 Increasing prosperity during the 12th century resulted in greater production of secular art; many carved ivory objects such as gaming-pieces, combs, and small religious figures have survived.Benton Art of the Middle Ages pp. 248–250
Church life
thumb|right|upright|Francis of Assisi, depicted by Bonaventura Berlinghieri in 1235, founded the Franciscan Order.Hamilton Religion in the Medieval West p. 47
Monastic reform became an important issue during the 11th century, as elites began to worry that monks were not adhering to the rules binding them to a strictly religious life. Cluny Abbey, founded in the Mâcon region of France in 909, was established as part of the Cluniac Reforms, a larger movement of monastic reform in response to this fear.Rosenwein Rhinoceros Bound pp. 40–41 Cluny quickly established a reputation for austerity and rigour. It sought to maintain a high quality of spiritual life by placing itself under the protection of the papacy and by electing its own abbot without interference from laymen, thus maintaining economic and political independence from local lords.Barber Two Cities pp. 143–144
Monastic reform inspired change in the secular church. The ideals that it was based upon were brought to the papacy by Pope Leo IX (pope 1049–1054), and provided the ideology of the clerical independence that led to the Investiture Controversy in the late 11th century. This involved Pope Gregory VII (pope 1073–85) and Emperor Henry IV, who initially clashed over episcopal appointments, a dispute that turned into a battle over the ideas of investiture, clerical marriage, and simony. The emperor saw the protection of the Church as one of his responsibilities and wanted to preserve the right to appoint his own choices as bishops within his lands, but the papacy insisted on the Church's independence from secular lords. These issues remained unresolved after the compromise of 1122 known as the Concordat of Worms. The dispute represents a significant stage in the creation of a papal monarchy separate from and equal to lay authorities. It also had the permanent consequence of empowering German princes at the expense of the German emperors.
Sénanque Abbey, Gordes, France|thumb|left
The High Middle Ages was a period of great religious movements. Besides the Crusades and monastic reforms, people sought to participate in new forms of religious life. New monastic orders were founded, including the Carthusians and the Cistercians. The latter especially expanded rapidly in their early years under the guidance of Bernard of Clairvaux (d. 1153). These new orders were formed in response to the feeling of the laity that Benedictine monasticism no longer met the needs of the laymen, who along with those wishing to enter the religious life wanted a return to the simpler eremitical monasticism of early Christianity, or to live an Apostolic life.Barber Two Cities pp. 145–149 Religious pilgrimages were also encouraged. Old pilgrimage sites such as Rome, Jerusalem, and Compostela received increasing numbers of visitors, and new sites such as Monte Gargano and Bari rose to prominence.Morris "Northern Europe" Oxford Illustrated History of Medieval Europe p. 199
In the 13th century mendicant orders—the Franciscans and the Dominicans—who swore vows of poverty and earned their living by begging, were approved by the papacy.Barber Two Cities pp. 155–167 Religious groups such as the Waldensians and the Humiliati also attempted to return to the life of early Christianity in the middle 12th and early 13th centuries, but they were condemned as heretical by the papacy. Others joined the Cathars, another heretical movement condemned by the papacy. In 1209, a crusade was preached against the Cathars, the Albigensian Crusade, which, in combination with the medieval Inquisition, eliminated them.Barber Two Cities pp. 185–192
Late Middle Ages
War, famine, and plague
The first years of the 14th century were marked by famines, culminating in the Great Famine of 1315–17.Loyn "Famine" Middle Ages p. 128 The causes of the Great Famine included the slow transition from the Medieval Warm Period to the Little Ice Age, which left the population vulnerable when bad weather caused crop failures.Backman Worlds of Medieval Europe pp. 373–374 The years 1313–14 and 1317–21 were excessively rainy throughout Europe, resulting in widespread crop failures.Epstein Economic and Social History p. 41 The climate change—which resulted in a declining average annual temperature for Europe during the 14th century—was accompanied by an economic downturn.Backman Worlds of Medieval Europe p. 370
right|thumb|upright|Execution of some of the ringleaders of the jacquerie, from a 14th-century manuscript of the Chroniques de France ou de St Denis
These troubles were followed in 1347 by the Black Death, a pandemic that spread throughout Europe during the following three years.Schove "Plague" Middle Ages p. 269 The death toll was probably about 35 million people in Europe, about one-third of the population. Towns were especially hard-hit because of their crowded conditions. Large areas of land were left sparsely inhabited, and in some places fields were left unworked. Wages rose as landlords sought to entice the reduced number of available workers to their fields. Further problems were lower rents and lower demand for food, both of which cut into agricultural income. Urban workers also felt that they had a right to greater earnings, and popular uprisings broke out across Europe.Backman Worlds of Medieval Europe pp. 374–380 Among the uprisings were the jacquerie in France, the Peasants' Revolt in England, and revolts in the cities of Florence in Italy and Ghent and Bruges in Flanders. The trauma of the plague led to an increased piety throughout Europe, manifested by the foundation of new charities, the self-mortification of the flagellants, and the scapegoating of Jews.Davies Europe pp. 412–413 Conditions were further unsettled by the return of the plague throughout the rest of the 14th century; it continued to strike Europe periodically during the rest of the Middle Ages.
Society and economy
Society throughout Europe was disturbed by the dislocations caused by the Black Death. Lands that had been marginally productive were abandoned, as the survivors were able to acquire more fertile areas.Epstein Economic and Social History pp. 184–185 Although serfdom declined in Western Europe it became more common in Eastern Europe, as landlords imposed it on those of their tenants who had previously been free.Epstein Economic and Social History pp. 246–247 Most peasants in Western Europe managed to change the work they had previously owed to their landlords into cash rents. The percentage of serfs amongst the peasantry declined from a high of 90 to closer to 50 per cent by the end of the period. Landlords also became more conscious of common interests with other landholders, and they joined together to extort privileges from their governments. Partly at the urging of landlords, governments attempted to legislate a return to the economic conditions that existed before the Black Death.Keen Pelican History of Medieval Europe pp. 234–237 Non-clergy became increasingly literate, and urban populations began to imitate the nobility's interest in chivalry.Vale "Civilization of Courts and Cities" Oxford Illustrated History of Medieval Europe pp. 346–349
Jewish communities were expelled from England in 1290 and from France in 1306. Although some were allowed back into France, most were not, and many Jews emigrated eastwards, settling in Poland and Hungary.Loyn "Jews" Middle Ages p. 192 The Jews were expelled from Spain in 1492, and dispersed to Turkey, France, Italy, and Holland. The rise of banking in Italy during the 13th century continued throughout the 14th century, fuelled partly by the increasing warfare of the period and the needs of the papacy to move money between kingdoms. Many banking firms loaned money to royalty, at great risk, as some were bankrupted when kings defaulted on their loans.Keen Pelican History of Medieval Europe pp. 237–239
State resurgence
thumb|upright= 1.5|Map of Europe in 1360|left
Strong, royalty-based nation states rose throughout Europe in the Late Middle Ages, particularly in England, France, and the Christian kingdoms of the Iberian Peninsula: Aragon, Castile, and Portugal. The long conflicts of the period strengthened royal control over their kingdoms and were extremely hard on the peasantry. Kings profited from warfare that extended royal legislation and increased the lands they directly controlled.Watts Making of Polities pp. 201–219 Paying for the wars required that methods of taxation become more effective and efficient, and the rate of taxation often increased.Watts Making of Polities pp. 224–233 The requirement to obtain the consent of taxpayers allowed representative bodies such as the English Parliament and the French Estates General to gain power and authority.Watts Making of Polities pp. 233–238
thumb|upright|Joan of Arc in a 15th-century depiction
Throughout the 14th century, French kings sought to expand their influence at the expense of the territorial holdings of the nobility.Watts Making of Polities p. 166 They ran into difficulties when attempting to confiscate the holdings of the English kings in southern France, leading to the Hundred Years' War,Watts Making of Polities p. 169 waged from 1337 to 1453.Loyn "Hundred Years' War" Middle Ages p. 176 Early in the war the English under Edward III (r. 1327–77) and his son Edward, the Black Prince (d. 1376), won the battles of Crécy and Poitiers, captured the city of Calais, and won control of much of France. The resulting stresses almost caused the disintegration of the French kingdom during the early years of the war.Watts Making of Polities pp. 180–181 In the early 15th century, France again came close to dissolving, but in the late 1420s the military successes of Joan of Arc (d. 1431) led to the victory of the French and the capture of the last English possessions in southern France in 1453.Watts Making of Polities pp. 317–322 The price was high, as the population of France at the end of the Wars was likely half what it had been at the start of the conflict. Conversely, the Wars had a positive effect on English national identity, doing much to fuse the various local identities into a national English ideal. The conflict with France also helped create a national culture in England separate from French culture, which had previously been the dominant influence.Davies Europe p. 423 The dominance of the English longbow began during early stages of the Hundred Years' War,Nicolle Medieval Warfare Source Book: Warfare in Western Christendom p. 186 and cannon appeared on the battlefield at Crécy in 1346.Nicolle Medieval Warfare Source Book: Warfare in Western Christendom pp. 296–298
In modern-day Germany, the Holy Roman Empire continued to rule, but the elective nature of the imperial crown meant there was no enduring dynasty around which a strong state could form.Watts Making of Polities pp. 170–171 Further east, the kingdoms of Poland, Hungary, and Bohemia grew powerful.Watts Making of Polities pp. 173–175 In Iberia, the Christian kingdoms continued to gain land from the Muslim kingdoms of the peninsula;Watts Making of Polities p. 173 Portugal concentrated on expanding overseas during the 15th century, while the other kingdoms were riven by difficulties over royal succession and other concerns.Watts Making of Polities pp. 327–332Watts Making of Polities p. 340 After losing the Hundred Years' War, England went on to suffer a long civil war known as the Wars of the Roses, which lasted into the 1490s and only ended when Henry Tudor (r. 1485–1509 as Henry VII) became king and consolidated power with his victory over Richard III (r. 1483–85) at Bosworth in 1485.Davies Europe pp. 425–426 In Scandinavia, Margaret I of Denmark (r. in Denmark 1387–1412) consolidated Norway, Denmark, and Sweden in the Union of Kalmar, which continued until 1523. The major power around the Baltic Sea was the Hanseatic League, a commercial confederation of city states that traded from Western Europe to Russia.Davies Europe p. 431 Scotland emerged from English domination under Robert the Bruce (r. 1306–29), who secured papal recognition of his kingship in 1328.Davies Europe pp. 408–409
Collapse of Byzantium
Although the Palaeologi emperors recaptured Constantinople from the Western Europeans in 1261, they were never able to regain control of much of the former imperial lands. They usually controlled only a small section of the Balkan Peninsula near Constantinople, the city itself, and some coastal lands on the Black Sea and around the Aegean Sea. The former Byzantine lands in the Balkans were divided between the new Kingdom of Serbia, the Second Bulgarian Empire and the city-state of Venice. The power of the Byzantine emperors was threatened by a new Turkish tribe, the Ottomans, who established themselves in Anatolia in the 13th century and steadily expanded throughout the 14th century. The Ottomans expanded into Europe, reducing Bulgaria to a vassal state by 1366 and taking over Serbia after its defeat at the Battle of Kosovo in 1389. Western Europeans rallied to the plight of the Christians in the Balkans and declared a new crusade in 1396; a great army was sent to the Balkans, where it was defeated at the Battle of Nicopolis.Davies Europe pp. 385–389 Constantinople was finally captured by the Ottomans in 1453.Davies Europe p. 446
Controversy within the Church
thumb|left|Guy of Boulogne crowning Pope Gregory XI in a miniature from Froissart's Chroniques
During the tumultuous 14th century, disputes within the leadership of the Church led to the Avignon Papacy of 1305–78,Thomson Western Church pp. 170–171 also called the "Babylonian Captivity of the Papacy" (a reference to the Babylonian captivity of the Jews),Loyn "Avignon" Middle Ages p. 45 and then to the Great Schism, lasting from 1378 to 1418, when there were two and later three rival popes, each supported by several states.Loyn "Great Schism" Middle Ages p. 153 Ecclesiastical officials convened at the Council of Constance in 1414, and in the following year the council deposed one of the rival popes, leaving only two claimants. Further depositions followed, and in November 1417 the council elected Martin V (pope 1417–31) as pope.Thomson Western Church pp. 184–187
Besides the schism, the western church was riven by theological controversies, some of which turned into heresies. John Wycliffe (d. 1384), an English theologian, was condemned as a heretic in 1415 for teaching that the laity should have access to the text of the Bible as well as for holding views on the Eucharist that were contrary to church doctrine.Thomson Western Church pp. 197–199 Wycliffe's teachings influenced two of the major heretical movements of the later Middle Ages: Lollardy in England and Hussitism in Bohemia.Thomson Western Church p. 218 The Bohemian movement began with the teaching of Jan Hus, who was burned at the stake in 1415 after being condemned as a heretic by the Council of Constance. The Hussite church, although the target of a crusade, survived beyond the Middle Ages.Thomson Western Church pp. 213–217 Other heresies were manufactured, such as the accusations against the Knights Templar that resulted in their suppression in 1312 and the division of their great wealth between the French King Philip IV (r. 1285–1314) and the Hospitallers.Loyn "Knights of the Temple (Templars)" Middle Ages pp. 201–202
The papacy further refined the practice in the Mass in the Late Middle Ages, holding that the clergy alone was allowed to partake of the wine in the Eucharist. This further distanced the secular laity from the clergy. The laity continued the practices of pilgrimages, veneration of relics, and belief in the power of the Devil. Mystics such as Meister Eckhart (d. 1327) and Thomas à Kempis (d. 1471) wrote works that taught the laity to focus on their inner spiritual life, which laid the groundwork for the Protestant Reformation. Besides mysticism, belief in witches and witchcraft became widespread, and by the late 15th century the Church had begun to lend credence to populist fears of witchcraft with its condemnation of witches in 1484 and the publication in 1486 of the Malleus Maleficarum, the most popular handbook for witch-hunters.Davies Europe pp. 436–437
Scholars, intellectuals, and exploration
During the Later Middle Ages, theologians such as John Duns Scotus (d. 1308) and William of Ockham (d. c. 1348) led a reaction against scholasticism, objecting to the application of reason to faith. Their efforts undermined the prevailing Platonic idea of "universals". Ockham's insistence that reason operates independently of faith allowed science to be separated from theology and philosophy.Davies Europe pp. 433–434 Legal studies were marked by the steady advance of Roman law into areas of jurisprudence previously governed by customary law. The lone exception to this trend was in England, where the common law remained pre-eminent. Other countries codified their laws; legal codes were promulgated in Castile, Poland, and Lithuania.Davies Europe pp. 438–439
thumb|Clerics studying astronomy and geometry, French, early 15th century
Education remained mostly focused on the training of future clergy. The basic learning of the letters and numbers remained the province of the family or a village priest, but the secondary subjects of the trivium—grammar, rhetoric, logic—were studied in cathedral schools or in schools provided by cities. Commercial secondary schools spread, and some Italian towns had more than one such enterprise. Universities also spread throughout Europe in the 14th and 15th centuries. Lay literacy rates rose, but were still low; one estimate gave a literacy rate of ten per cent of males and one per cent of females in 1500.Singman Daily Life p. 224
The publication of vernacular literature increased, with Dante (d. 1321), Petrarch (d. 1374) and Giovanni Boccaccio (d. 1375) in 14th-century Italy, Geoffrey Chaucer (d. 1400) and William Langland (d. c. 1386) in England, and François Villon (d. 1464) and Christine de Pizan (d. c. 1430) in France. Much literature remained religious in character, and although a great deal of it continued to be written in Latin, a new demand developed for saints' lives and other devotional tracts in the vernacular languages. This was fed by the growth of the Devotio Moderna movement, most prominently in the formation of the Brethren of the Common Life, but also in the works of German mystics such as Meister Eckhart and Johannes Tauler (d. 1361).Keen Pelican History of Medieval Europe pp. 282–283 Theatre also developed in the guise of miracle plays put on by the Church. At the end of the period, the development of the printing press in about 1450 led to the establishment of publishing houses throughout Europe by 1500.Davies Europe p. 445
In the early 15th century, the countries of the Iberian peninsula began to sponsor exploration beyond the boundaries of Europe. Prince Henry the Navigator of Portugal (d. 1460) sent expeditions that discovered the Canary Islands, the Azores, and Cape Verde during his lifetime. After his death, exploration continued; Bartolomeu Dias (d. 1500) rounded the Cape of Good Hope in 1488 and Vasco da Gama (d. 1524) sailed around Africa to India in 1498.Davies Europe p. 451 The combined Spanish monarchies of Castile and Aragon sponsored the voyage of exploration by Christopher Columbus (d. 1506) in 1492 that discovered the Americas.Davies Europe pp. 454–455 The English crown under Henry VII sponsored the voyage of John Cabot (d. 1498) in 1497, which landed on Cape Breton Island.Davies Europe p. 511
Technological and military developments
thumb|Agricultural calendar, c. 1470, from a manuscript of Pietro de Crescenzi
One of the major developments in the military sphere during the Late Middle Ages was the increased use of infantry and light cavalry.Nicolle Medieval Warfare Source Book: Warfare in Western Christendom p. 180 The English also employed longbowmen, but other countries were unable to create similar forces with the same success.Nicolle Medieval Warfare Source Book: Warfare in Western Christendom p. 183 Armour continued to advance, spurred by the increasing power of crossbows, and plate armour was developed to protect soldiers from crossbows as well as the hand-held guns that were developed.Nicolle Medieval Warfare Source Book: Warfare in Western Christendom p. 188 Pole arms reached new prominence with the development of the Flemish and Swiss infantry armed with pikes and other long spears.Nicolle Medieval Warfare Source Book: Warfare in Western Christendom p. 185
In agriculture, the increased usage of sheep with long-fibred wool allowed a stronger thread to be spun. In addition, the spinning wheel replaced the traditional distaff for spinning wool, tripling production.Epstein Economic and Social History pp. 193–194 A less technological refinement that still greatly affected daily life was the use of buttons as closures for garments, which allowed for better fitting without having to lace clothing on the wearer.Singman Daily Life p. 38 Windmills were refined with the creation of the tower mill, allowing the upper part of the windmill to be spun around to face the direction from which the wind was blowing.Epstein Economic and Social History pp. 200–201 The blast furnace appeared around 1350 in Sweden, increasing the quantity of iron produced and improving its quality.Epstein Economic and Social History pp. 203–204 The first patent law, enacted in Venice in 1474, protected the rights of inventors to their inventions.Epstein Economic and Social History p. 213
Late medieval art and architecture
thumb|upright|left|February scene from the 15th-century illuminated manuscript Très Riches Heures du Duc de Berry
The Late Middle Ages in Europe as a whole correspond to the Trecento and Early Renaissance cultural periods in Italy. Northern Europe and Spain continued to use Gothic styles, which became increasingly elaborate in the 15th century, until almost the end of the period. International Gothic was a courtly style that reached much of Europe in the decades around 1400, producing masterpieces such as the Très Riches Heures du Duc de Berry.Benton Art of the Middle Ages pp. 253–256 All over Europe secular art continued to increase in quantity and quality, and in the 15th century the mercantile classes of Italy and Flanders became important patrons, commissioning small portraits of themselves in oils as well as a growing range of luxury items such as jewellery, ivory caskets, cassone chests, and maiolica pottery. These objects also included the Hispano-Moresque ware produced by mostly Mudéjar potters in Spain. Although royalty owned huge collections of plate, little survives except for the Royal Gold Cup.Lightbown Secular Goldsmiths' Work p. 78 Italian silk manufacture developed, so that western churches and elites no longer needed to rely on imports from Byzantium or the Islamic world. In France and Flanders tapestry weaving of sets like The Lady and the Unicorn became a major luxury industry.Benton Art of the Middle Ages pp. 257–262
The large external sculptural schemes of Early Gothic churches gave way to more sculpture inside the building, as tombs became more elaborate and other features such as pulpits were sometimes lavishly carved, as in the Pulpit by Giovanni Pisano in Sant'Andrea. Painted or carved wooden relief altarpieces became common, especially as churches created many side-chapels. Early Netherlandish painting by artists such as Jan van Eyck (d. 1441) and Rogier van der Weyden (d. 1464) rivalled that of Italy, as did northern illuminated manuscripts, which in the 15th century began to be collected on a large scale by secular elites, who also commissioned secular books, especially histories. From about 1450 printed books rapidly became popular, though still expensive. There were around 30,000 different editions of incunabula, or works printed before 1500,British Library Staff "Incunabula Short Title Catalogue" British Library by which time illuminated manuscripts were commissioned only by royalty and a few others. Very small woodcuts, nearly all religious, were affordable even by peasants in parts of Northern Europe from the middle of the 15th century. More expensive engravings supplied a wealthier market with a variety of images.Griffiths Prints and Printmaking pp. 17–18; 39–46
Modern perceptions
thumb|right|upright|Medieval illustration of the spherical Earth in a 14th-century copy of L'Image du monde
The medieval period is frequently caricatured as a "time of ignorance and superstition" that placed "the word of religious authorities over personal experience and rational activity."Lindberg "Medieval Church Encounters" When Science & Christianity Meet p. 8 This is a legacy from both the Renaissance and Enlightenment, when scholars favourably contrasted their intellectual cultures with those of the medieval period. Renaissance scholars saw the Middle Ages as a period of decline from the high culture and civilisation of the Classical world; Enlightenment scholars saw reason as superior to faith, and thus viewed the Middle Ages as a time of ignorance and superstition.Davies Europe pp. 291–293
Others argue that reason was generally held in high regard during the Middle Ages. Science historian Edward Grant writes, "If revolutionary rational thoughts were expressed [in the 18th century], they were only made possible because of the long medieval tradition that established the use of reason as one of the most important of human activities".Grant God and Reason p. 9 Also, contrary to common belief, David Lindberg writes, "the late medieval scholar rarely experienced the coercive power of the church and would have regarded himself as free (particularly in the natural sciences) to follow reason and observation wherever they led".Quoted in Peters "Science and Religion" Encyclopedia of Religion p. 8182
The caricature of the period is also reflected in some more specific notions. One misconception, first propagated in the 19th centuryRussell Inventing the Flat Earth pp. 49–58 and still very common, is that all people in the Middle Ages believed that the Earth was flat. This is untrue, as lecturers in the medieval universities commonly argued that evidence showed the Earth was a sphere.Grant Planets, Stars, & Orbs pp. 626–630 Lindberg and Ronald Numbers, another scholar of the period, state that there "was scarcely a Christian scholar of the Middle Ages who did not acknowledge [Earth's] sphericity and even know its approximate circumference".Lindberg and Numbers "Beyond War and Peace" Church History p. 342 Other misconceptions such as "the Church prohibited autopsies and dissections during the Middle Ages", "the rise of Christianity killed off ancient science", or "the medieval Christian church suppressed the growth of natural philosophy", are all cited by Numbers as examples of widely popular myths that still pass as historical truth, although they are not supported by current historical research.Numbers "Myths and Truths in Science and Religion: A historical perspective" Lecture archive
External links
ORB The Online Reference Book of Medieval Studies Academic peer reviewed articles and encyclopedia.
The Labyrinth Resources for Medieval Studies.
NetSERF The Internet Connection for Medieval Resources.
De Re Militari: The Society for Medieval Military History
Medievalmap.org Interactive maps of the Medieval era (Flash plug-in required).
Medieval Realms Learning resources from the British Library including studies of beautiful medieval manuscripts.
Medievalists.net News and articles about the period.
| 18,836 | 2017-01 |
Szlachta | 266px|thumb|Szlachta in costumes of the Voivodeships of the Crown of the Kingdom of Poland, Grand Duchy of Lithuania and the Polish-Lithuanian Commonwealth in the 17th and 18th century.
266px|thumb|right|Journey of a Polish Lord during the times of King Augustus III of Poland, by Jan Chełmiński, 1880.
thumb|right|266px|Stanisław Poniatowski, nobleman, politician, Grand Treasurer and an important figure in the country during the Age of Enlightenment.
The szlachta (exonym: nobility) was a legally privileged noble class with origins in the Kingdom of Poland. It gained considerable institutional privileges between 1333 and 1370 during the reign of King Casimir III the Great. In 1413, following a series of tentative personal unions between the Grand Duchy of Lithuania and the Crown Kingdom of Poland, the existing Lithuanian nobility formally joined this class. As the Polish-Lithuanian Commonwealth (1569–1795) evolved and expanded in territory, its membership grew to include the leaders of Ducal Prussia, Podolian and Ruthenian lands.
The origins of the szlachta are shrouded in obscurity and mystery and have been the subject of a variety of theories. Traditionally, its members were owners of landed property, often in the form of "manor farms" or so-called folwarks. The nobility negotiated substantial and increasing political and legal privileges for itself throughout its entire history until the decline of the Polish Commonwealth in the late 18th century.
During the Partitions of Poland from 1772 to 1795, its members began to lose these legal privileges and social status. From that point until 1918, the legal status of the nobility was essentially dependent upon the policies of the three partitioning powers: the Russian Empire, the Kingdom of Prussia, and the Habsburg Monarchy. The szlachta's legal privileges were abolished in the Second Polish Republic by the March Constitution of 1921.
The notion that all Polish nobles were social equals, regardless of their financial status or offices held, is enshrined in the traditional Polish saying Szlachcic na zagrodzie równy wojewodzie, which may roughly be rendered as "the noble on his farm plot is the voivode's equal", or "the tenant farmer noble stands equal to the noble army commander."
History
Etymology
thumb|upright|A Polish Count - Kazimierz Skarżyński by Stefan Norblin.
The term szlachta is derived from the Old High German word slahta (modern German Geschlecht), which means "(noble) family", much as many other Polish words pertaining to the nobility derive from German words—e.g., the Polish "rycerz" ("knight", cognate of the German "Ritter") and the Polish "herb" ("coat of arms", from the German "Erbe", "heritage").
Poles of the 17th century assumed that "szlachta" came from the German "schlachten" ("to slaughter" or "to butcher"); also suggestive is the German "Schlacht" ("battle"). Early Polish historians thought the term may have derived from the name of the legendary proto-Polish chief, Lech, mentioned in Polish and Czech writings.
Some powerful Polish nobles were referred to as "magnates" (Polish singular: "magnat", plural: "magnaci") and "możny" ("magnate", "oligarch"; plural: "możni"); see Magnates of Poland and Lithuania.
The Polish term "szlachta" designated the formalized, hereditary noble class of Polish-Lithuanian Commonwealth. In official Latin documents of the old Commonwealth, hereditary szlachta are referred to as "nobilitas" and are indeed the equivalent in legal status of the English nobility.
Today the word szlachta in the Polish language simply translates to "nobility". In its broadest meaning, it can also denote some non-hereditary honorary knighthoods granted today by some European monarchs. Occasionally, 19th-century non-noble landowners were referred to as szlachta by courtesy or error, when they owned manorial estates though they were not noble by birth. In the narrow sense, szlachta denotes the old-Commonwealth nobility.
In the past, the term "szlachta" was sometimes mistranslated as "gentry" rather than "nobility". This practice arose because the economic status of some szlachta members was inferior to that of the nobility in other European countries (see also Estates of the Realm regarding wealth and nobility). The szlachta ranged from those almost rich and powerful enough to be magnates down to rascals with a noble lineage but no land, no castle, no money, no village, and no peasants.
Because some szlachta were poorer than some non-noble gentry, particularly impoverished szlachta were at times forced to become tenants of the wealthier gentry. In doing so, however, they retained all their constitutional prerogatives, as it was not wealth or lifestyle (obtainable by the gentry) but hereditary juridical status that determined nobility.
An individual nobleman was called a "szlachcic", and a noblewoman a "szlachcianka".
Origins
Polish
right|thumb|250px|Union of Lublin (1569). Painting by Jan Matejko, 1869, Castle Museum, Lublin.
The origins of the szlachta, while ancient, have always been considered obscure. As a result, its members often referred to it as odwieczna (perennial). Two popular historic theories of origin put forward by its members and by earlier historians and chroniclers involved descent from the ancient Iranian tribes known as Sarmatians, or from Japheth, one of Noah's sons (by contrast, the peasantry were said to be the offspring of another of Noah's sons, Ham, and hence subject to bondage under the Curse of Ham, while the Jews were said to be the offspring of Shem). Other fanciful theories included its foundation by Julius Caesar, Alexander the Great or regional leaders who had not mixed their bloodlines with those of 'slaves, prisoners, and aliens'.
Another theory describes its derivation from a non-Slavic warrior class, forming a distinct element known as the Lechici/Lekhi (Lechitów) within the ancient Polonic tribal groupings (Indo-European caste systems). According to this hypothesis, this upper class was not of Slavonic extraction and was of a different origin than the Slavonic peasants (kmiecie; Latin: cmethones) over whom they ruled. The szlachta were differentiated from the rural population, and the nobleman's sense of distinction led to practices that in later periods would be characterized as racism. In this account, the szlachta were noble in the Aryan sense: "noble" in contrast to the people over whom they ruled after coming into contact with them. The szlachta traced their descent from Lech/Lekh, who according to this tradition founded the Polish kingdom in about the fifth century. Lechia was the name of Poland in antiquity, and the szlachta's own name for themselves was Lechici/Lekhi. In the same account, an exact counterpart of szlachta society was the Meerassee system of tenure of southern India—an aristocracy of equality—settled as conquerors among a separate race; the Polish state paralleled the Roman Empire in that full rights of citizenship were limited to the szlachta, and the szlachta were a caste, a military caste, as in Hindu society.
In the year 1244, Bolesław, Duke of Masovia, identified members of the knights' clan as members of a genealogia:
"I received my good servitors [Raciborz and Albert] from the land of [Great] Poland, and from the clan [genealogia] called Jelito, with my well-disposed knowledge [i.e., consent and encouragement] and the cry [vocitatio], [that is], the godło, [by the name of] Nagody, and I established them in the said land of mine, Masovia, [on the military tenure described elsewhere in the charter]."
The documentation regarding Raciborz and Albert's tenure is the earliest surviving record of the use of the clan name and cry to define the honorable status of Polish knights. The names of knightly genealogiae only came to be associated with heraldic devices later in the Middle Ages and in the early modern period. The Polish clan name and cry ritualized the ius militare, i.e., the power to command an army; and they had been used some time before 1244 to define knightly status.
"In Poland, the Radwanice were noted relatively early (1274) as the descendants of Radwan, a knight [more properly a "rycerz" from the German "ritter"] active a few decades earlier. ..."Janusz Bieniak, "Knight Clans in Medieval Poland," in Antoni Gąsiorowski (ed.), The Polish Nobility in the Middle Ages: Anthologies, Zakład Narodowy im. Ossolińskich; Wrocław, POLAND, EU; 1984, page 154.
thumb|right|upright|Polish Nobleman with a Parrot, by Józef Simmler.
Around the 14th century, there was little difference between knights and the szlachta in Poland. Members of the szlachta had the personal obligation to defend the country (pospolite ruszenie), thereby becoming the kingdom's most privileged social class. Inclusion in the class was almost exclusively based on inheritance.
Concerning the early Polish tribes, geography contributed to long-standing traditions. The Polish tribes were internally organized around a unifying religious cult and governed by the wiec, an assembly of free tribesmen. Later, when safety required power to be consolidated, an elected prince was chosen to govern. The election privilege was usually limited to elites.
The tribes were ruled by clans (ród) consisting of people related by blood or marriage and theoretically descending from a common ancestor, giving the ród/clan a highly developed sense of solidarity. (See gens.) The starosta (or starszyna) had judicial and military power over the ród/clan, although this power was often exercised with an assembly of elders. Strongholds called gród were built where the religious cult was powerful, where trials were conducted, and where clans gathered in the face of danger. The opole was the territory occupied by a single tribe.
Mieszko I of Poland (c. 935 – 25 May 992) established an elite knightly retinue from within his army, which he depended upon for success in uniting the Lekhitic tribes and preserving the unity of his state. Documented proof exists of Mieszko I's successors utilizing such a retinue, as well.
thumb|upright|Blue Marquise. Portrait of Izabela Czartoryska. Painted by Marcello Bacciarelli.
Another class of knights were granted land by the prince, allowing them the economic ability to serve the prince militarily. A Polish nobleman living at the time prior to the 15th century was referred to as a "rycerz", very roughly equivalent to the English "knight," the critical difference being the status of "rycerz" was almost strictly hereditary; the class of all such individuals was known as the "rycerstwo". Representing the wealthier families of Poland and itinerant knights from abroad seeking their fortunes, this other class of rycerstwo, which became the szlachta/nobility ("szlachta" becomes the proper term for Polish nobility beginning about the 15th century), gradually formed apart from Mieszko I's and his successors' elite retinues. This rycerstwo/nobility obtained more privileges granting them favored status. They were absolved from particular burdens and obligations under ducal law, resulting in the belief only rycerstwo (those combining military prowess with high/noble birth) could serve as officials in state administration.
Select rycerstwo were distinguished above the other rycerstwo, because they descended from past tribal dynasties, or because early Piasts' endowments made them select beneficiaries. These rycerstwo of great wealth were called możni (Magnates). Socially they were not a distinct class from the rycerstwo from which they all originated and to which they would return were their wealth lost.
The Period of Division, from A.D. 1138 to A.D. 1314, which included nearly 200 years of feudal fragmentation and stemmed from Bolesław III's division of Poland among his sons, was the genesis of the social structure in which the great landowning feudal nobles (możni/Magnates, both ecclesiastical and lay) rose economically above the rycerstwo from which they originated. The prior social structure had been one of Polish tribes united into the historic Polish nation under a state ruled by the Piast dynasty, which appeared circa 850 A.D.
Some możni (Magnates) descending from past tribal dynasties regarded themselves as co-proprietors of Piast realms, even though the Piasts attempted to deprive them of their independence. These możni constantly sought to undermine princely authority. Gall Anonym's chronicle notes the nobility's alarm when the Palatine Sieciech "elevated those of a lower class over those who were noble born", entrusting them with state offices.
Lithuanian
thumb|upright|Upon the death of King Augustus III in October 1763, nobleman Stanisław Antoni Poniatowski was elected by the nobility and reigned as Stanisław II Augustus.
In Lithuania Propria and in Samogitia, prior to the creation of the Kingdom of Lithuania by Mindaugas, nobles were called die beste leuten in sources written in German. In the Lithuanian language nobles were named ponai. The higher nobility were named 'kunigai' or 'kunigaikščiai' (dukes), a loanword from the Scandinavian konung. They were the established local leaders and warlords. During the development of the state they gradually became subordinated to higher dukes, and later to the King of Lithuania. Because of the expansion of the Lithuanian duchy into the lands of Ruthenia in the mid-14th century, a new term for nobility appeared: bajorai, from the Ruthenian (modern Ukrainian and Belarusian) бояре. This word is used in Lithuanian to this day to denote nobility, not only Lithuania's own but also that of other countries.
After the Union of Horodło the Lithuanian nobility acquired equal status with the Polish szlachta and over time became increasingly polonized, although they preserved their national consciousness and, in most cases, awareness of their Lithuanian family roots. In the 16th century some of the Lithuanian nobility claimed to be of Roman extraction and held that the Lithuanian language was merely a transformed Latin. This created a paradox: the Polish nobility claimed its own ancestry from the Sarmatian tribes, yet the Sarmatians had been considered enemies of the Romans. A new Roman-Sarmatian theory was therefore devised. Strong cultural ties with the Polish nobility also led, in the 16th century, to the appearance of a new term for the Lithuanian nobility: šlėkta, a direct loanword from the Polish szlachta. Strictly speaking, this term, šlėkta (szlachta), would also be the historically accurate name for Lithuania's own nobility, but Lithuanian linguists have rejected the use of this Polish loanword, which complicates the naming.
The process of polonization took place over a lengthy period. At first only the highest members of the nobility were involved, but gradually a wider group of the population was affected. The major effects on the lesser Lithuanian nobility came after various sanctions were imposed by the Russian Empire, such as removing Lithuania from the names of the gubernias a few years after the November Uprising. After the January Uprising the sanctions went further: Russian officials announced that "Lithuanians are Russians seduced by Poles and Catholicism", intensified Russification, and banned the printing of books in the Lithuanian language.
Ruthenian
In Ruthenia the nobility gradually shifted its loyalty towards the multicultural and multilingual Grand Duchy of Lithuania after the principalities of Halych and Volhynia became part of it. Many noble Ruthenian families intermarried with Lithuanian ones.
The Orthodox nobles' rights were nominally equal to those enjoyed by the Polish and Lithuanian nobility, but they were under cultural pressure to convert to Catholicism, which was greatly eased in 1596 by the Union of Brest. See, for example, the careers of Senator Adam Kisiel and Jerzy Franciszek Kulczycki.
Ennoblement
In the Kingdom of Poland
thumb|upright|Karol Stanisław Radziwiłł, the richest noble of his time.
The number of legally granted ennoblements after the 15th century was minimal.
In the Kingdom of Poland and later in the Polish-Lithuanian Commonwealth, ennoblement (nobilitacja) may be equated with an individual being granted legal status as a member of the szlachta (Polish nobility). Initially, this privilege could be granted by the monarch, but from 1641 onward this right was reserved for the sejm. Most often the individual being ennobled would join an existing noble szlachta clan and assume the undifferentiated coat of arms of that clan.
According to heraldic sources, the total number of legal ennoblements issued between the 14th century and the mid-18th century is estimated at approximately 800. This is an average of only about two ennoblements per year, or roughly 0.00000014–0.000001 of the historical population annually. Compare: historical demography of Poland. Charles-Joseph, 7th Prince of Ligne, when trying to obtain Polish noble status, supposedly said in 1784, "It is easier to become a duke in Germany, than to be counted among Polish nobles."
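The quoted proportions can be checked with a rough back-of-the-envelope calculation; the population figures used below (about 2 million for 14th-century Poland and about 14 million for the 18th-century Commonwealth) are assumptions introduced here purely for illustration, not figures taken from the sources above:

\[
\frac{800\ \text{ennoblements}}{\approx 400\ \text{years}} \approx 2\ \text{per year},
\qquad
\frac{2}{2{,}000{,}000} = 0.000001,
\qquad
\frac{2}{14{,}000{,}000} \approx 0.00000014
\]

On these assumptions, the two ends of the quoted range simply correspond to the smaller early population and the larger later one.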
The close of the 18th century (see below) was a period in which a definite increase in the number of ennoblements can be noted. This can most readily be explained by the ongoing decline and eventual collapse of the Commonwealth and the resulting need for soldiers and other military leaders (see: Partitions of Poland, King Stanisław August Poniatowski).
Estimated total number of ennoblements
According to heraldic sources, the total estimated number of all legal ennoblements throughout the history of the Kingdom of Poland and the Polish-Lithuanian Commonwealth from the 14th century onward is approximately 1,600, half of which were granted in the final years of the late 18th century.
Types of ennoblement:
Adopcja herbowa – The "old way" of ennoblement, popular in the 15th century, connected with adoption into an existing noble clan by a powerful lord, but abolished in the 17th century.
Skartabelat – Introduced by pacta conventa of the 17th century, this was ennoblement into a sort of "conditional" or "graduated nobility" status. Skartabels could not hold public offices or be members of the Sejm, but after three generations, the descendants of these families would "mature" to full szlachta status.
Similar terms:
Indygenat – Recognition of foreign noble status. A foreign noble, after indygenat, received all privileges of a Polish szlachcic. In Polish history, 413 foreign noble families were recognized. Prior to the 17th century this was done by the King and Sejm (Polish parliament), after the 17th century it was done only by the Sejm.
"secret ennoblement" – This was of questionable legal status and was often not recognized by many szlachta. It was typically granted by the elected monarch without the required legal approval of the sejm.
In the Grand Duchy of Lithuania
In the late 14th century, in the Grand Duchy of Lithuania, Vytautas the Great reformed the Grand Duchy's army: instead of calling all men to arms, he created forces comprising professional warriors—bajorai ("nobles"; see the cognate "boyar"). As there were not enough nobles, Vytautas trained suitable men, relieving them of labor on the land and of other duties; for their military service to the Grand Duke, they were granted land that was worked by hired men (veldams). The newly formed noble families generally took up, as their family names, the Lithuanian pagan given names of their ennobled ancestors; this was the case with the Goštautai, Radvilos, Astikai, Kęsgailos and others. These families were granted their coats of arms under the Union of Horodlo (1413).
In 1506, King Sigismund I the Old confirmed the position of the Lithuanian Council of Lords in state politics and limited entry into the nobility.
Privileges
Specific rights of the szlachta included:
The right to hold outright ownership of land (allod), not as a fief conditional upon service to a liege lord, but absolutely and in perpetuity unless sold.
The right to join in political and military assemblies of the regional nobility.
The right to form independent administrative councils for their locality.
The right to cast a vote for Polish Kings.
The right to travel freely anywhere in the old Commonwealth of the Polish and Lithuanian nobility; or outside it, as foreign policy dictated.
The right to demand information from Crown offices.
The right to spiritual semi-independence from the clergy.
The right to interdict, in suitable ways, the passage of foreigners and townsmen through their territories.
The right of priority over the courts of the peasantry.
Special rights in Polish courts—including freedom from arbitrary arrest and freedom from corporal punishment.
The right to sell their military or administrative services.
Heraldic rights.
The right to receive higher pay when called up for the levée en masse (mobilization of the szlachta for the defence of the nation).
Educational rights
The right, often exercised, to import duty-free goods.
The exclusive right to enter the clergy until the time of the three partitions of Poland.
The right to try their peasants for major offences (reduced to minor offences only, after the 1760s).
thumb|right|upright|Franciszek Salezy Potocki, wearing the Order of the White Eagle.
Significant legislative changes in the status of the szlachta, as defined by Robert Bideleux and Ian Jeffries, consist of its 1374 exemption from the land tax, a 1425 guarantee against the 'arbitrary arrests and/or seizure of property' of its members, a 1454 requirement that military forces and new taxes be approved by provincial Sejms, and statutes issued between 1496 and 1611 that prescribed the rights of commoners.
Nobles were born into a noble family, adopted by a noble family (this was abolished in 1633), or ennobled by a king or Sejm for various reasons (bravery in combat, service to the state, etc.—yet this was the rarest means of gaining noble status). Many nobles were in reality usurpers: commoners who moved into another part of the country and falsely claimed noble status. Hundreds of such false nobles were denounced by Hieronim Nekanda Trepka in his Liber generationis plebeanorium (or Liber chamorum) in the first half of the 17th century. The law forbade non-nobles from owning nobility estates and promised such an estate to the denouncer. Trepka was an impoverished nobleman who lived a townsman's life and collected hundreds of such stories, hoping to take over one of these estates, although it does not appear that he ever succeeded in proving a case in court. Many sejms issued decrees over the centuries in an attempt to resolve this issue, but with little success. It is unknown what percentage of the Polish nobility came from the 'lower' orders of society, but most historians agree that nobles of such base origins formed a 'significant' element of the szlachta.
The Polish nobility enjoyed many rights that were not available to the noble classes of other countries and, typically, each new monarch conceded them further privileges. Those privileges became the basis of the Golden Liberty in the Polish–Lithuanian Commonwealth. Despite having a king, Poland was called the nobility's Commonwealth because the king was elected by all interested members of hereditary nobility and Poland was considered to be the property of this class, not of the king or the ruling dynasty. This state of affairs grew up in part because of the extinction of the male-line descendants of the old royal dynasty (first the Piasts, then the Jagiellons), and the selection by the nobility of the Polish king from among the dynasty's female-line descendants.
Poland's successive kings granted privileges to the nobility at the time of their election to the throne (the privileges being specified in the king-elect's Pacta conventa) and at other times in exchange for ad hoc permission to raise an extraordinary tax or a pospolite ruszenie.
Poland's nobility thus accumulated a growing array of privileges and immunities:
In 1355 in Buda King Casimir III the Great issued the first country-wide privilege for the nobility, in exchange for their agreement that, in the absence of male heirs of Casimir, the throne would pass to his nephew, Louis I of Hungary. He decreed that the nobility would no longer be subject to 'extraordinary' taxes, nor have to use their own funds for military expeditions abroad. He also promised that during travels of the royal court, the king and the court would pay all expenses, instead of using the facilities of the local nobility.
In 1374 King Louis of Hungary approved the Privilege of Koszyce (Polish: "przywilej koszycki" or "ugoda koszycka") in Košice in order to guarantee the Polish throne for his daughter Jadwiga. He broadened the definition of who was a member of the nobility and exempted the entire class from all but one tax (łanowy, which was limited to 2 grosze from łan (an old measure of land size)). In addition, the King's right to raise taxes was abolished; no new taxes could be raised without the agreement of the nobility. Henceforth, also, district offices (Polish: "urzędy ziemskie") were reserved exclusively for local nobility, as the Privilege of Koszyce forbade the king to grant official posts and major Polish castles to foreign knights. Finally, this privilege obliged the King to pay indemnities to nobles injured or taken captive during a war outside Polish borders.
In 1422 King Władysław II Jagiełło by the Privilege of Czerwińsk (Polish: "przywilej czerwiński") established the inviolability of nobles' property (their estates could not be confiscated except upon a court verdict) and ceded some jurisdiction over fiscal policy to the Royal Council (later, the Senat of Poland), including the right to mint coinage.
In 1430 with the Privileges of Jedlnia, confirmed at Kraków in 1433 (Polish: "przywileje jedlneńsko-krakowskie"), based partially on his earlier Brześć Kujawski privilege (April 25, 1425), King Władysław II Jagiełło granted the nobility a guarantee against arbitrary arrest, similar to the English Magna Carta's Habeas corpus, known from its own Latin name as "neminem captivabimus (nisi jure victum)." Henceforth no member of the nobility could be imprisoned without a warrant from a court of justice: the king could neither punish nor imprison any noble at his whim. King Władysław's quid pro quo for this boon was the nobles' guarantee that his throne would be inherited by one of his sons (who would be bound to honour the privileges theretofore granted to the nobility). On May 2, 1447 the same king issued the Wilno Privilege which gave the Lithuanian boyars the same rights as those possessed by the Polish szlachta.
right|thumb|upright|A Polish Nobleman. Rembrandt, 1637
In 1454 King Casimir IV granted the Nieszawa Statutes (Polish: "statuty cerkwicko-nieszawskie"), clarifying the legal basis of voivodship sejmiks (local parliaments). The king could promulgate new laws, raise taxes, or call for a levée en masse (pospolite ruszenie) only with the consent of the sejmiks, and the nobility were protected from judicial abuses. The Nieszawa Statutes also curbed the power of the magnates, as the Sejm (national parliament) received the right to elect many officials, including judges, voivods and castellans. These privileges were demanded by the szlachta as a compensation for their participation in the Thirteen Years' War.
The first "free election" (Polish: "wolna elekcja") of a king took place in 1492. (To be sure, some earlier Polish kings had been elected with help from bodies such as that which put Casimir II on the throne, thereby setting a precedent for free elections.) Only senators voted in the 1492 free election, which was won by John I Albert. For the duration of the Jagiellonian Dynasty, only members of that royal family were considered for election; later, there would be no restrictions on the choice of candidates.
In 1493 the national parliament, the Sejm, began meeting every two years at Piotrków. It comprised two chambers:
a Senate of 81 bishops and other dignitaries; and
a Chamber of Envoys of 54 envoys (in Polish, "envoy" is "poseł") representing their respective Lands.
The numbers of senators and envoys later increased.
On April 26, 1496 King John I Albert granted the Privilege of Piotrków (Polish: "Przywilej piotrkowski", "konstytucja piotrkowska" or "statuty piotrkowskie"), increasing the nobility's feudal power over serfs. It bound the peasant to the land, as only one son (not the eldest) was permitted to leave the village; townsfolk (Polish: "mieszczaństwo") were prohibited from owning land; and positions in the Church hierarchy could be given only to nobles.
On 23 October 1501, at Mielnik, the Polish–Lithuanian union was reformed by the Union of Mielnik (Polish: unia mielnicka, unia piotrkowsko-mielnicka). It was there that the tradition of the coronation Sejm (Polish: "Sejm koronacyjny") was founded. Once again the middle nobility (middle in wealth, not in rank) attempted to reduce the power of the magnates with a law that made them impeachable before the Senate for malfeasance. However, the Act of Mielno (Polish: Przywilej mielnicki) of 25 October did more to strengthen the magnate-dominated Senate of Poland than the lesser nobility. The nobles were given the right to disobey the King or his representatives—in the Latin, "non praestanda oboedientia"—and to form confederations, armed rebellions against the king or state officers if the nobles thought that the law or their legitimate privileges were being infringed.
thumb|right|250px|The Commonwealth's Power at Its Zenith, Golden Liberty, the Election of 1573. Painting by Jan Matejko.
On 3 May 1505 King Alexander I Jagiellon granted the Act of "Nihil novi nisi commune consensu" (Latin: "I accept nothing new except by common consent"). This forbade the king to pass any new law without the consent of the representatives of the nobility, in Sejm and Senat assembled, and thus greatly strengthened the nobility's political position. Basically, this act transferred legislative power from the king to the Sejm. This date commonly marks the beginning of the First Rzeczpospolita, the period of a szlachta-run "Commonwealth".
In 1520 the Act of Bydgoszcz granted the Sejm the right to convene every four years, with or without the king's permission.
About that time the "executionist movement" (Polish: "egzekucja praw", "execution of the laws") began to take form. Its members would seek to curb the power of the magnates at the Sejm and to strengthen the power of king and country. In 1562 at the Sejm in Piotrków they would force the magnates to return many leased crown lands to the king, and the king to create a standing army (wojsko kwarciane). One of the most famous members of this movement was Jan Zamoyski. After his death in 1605, the movement lost its political force.
Until the death of Sigismund II Augustus, the last king of the Jagiellonian dynasty, monarchs could be elected only from within the royal family. However, starting from 1573, practically any Polish noble or foreigner of royal blood could become a Polish–Lithuanian monarch. Every newly elected king was supposed to sign two documents: the Pacta conventa ("agreed pacts"), a confirmation of the king's pre-election promises, and the Henrician Articles (artykuły henrykowskie, named after the first freely elected king, Henry of Valois). The latter document served as a virtual Polish constitution and contained the basic laws of the Commonwealth:
Free election of kings;
Religious tolerance;
The Diet to be gathered every two years;
Foreign policy controlled by the Diet;
A royal advisory council chosen by the Diet;
Official posts restricted to Polish and Lithuanian nobles;
Taxes and monopolies set up by the Diet only;
Nobles' right to disobey the king should he break any of these laws.
In 1578 king Stefan Batory created the Crown Tribunal in order to reduce the enormous pressure on the Royal Court. This placed much of the monarch's juridical power in the hands of the elected szlachta deputies, further strengthening the nobility class. In 1581 the Crown Tribunal was joined by a counterpart in Lithuania, the Lithuanian Tribunal.
Transformation into aristocracy
thumb|right|Possessions of major magnate families in 16th–17th century.
For many centuries, wealthy and powerful members of the szlachta sought to gain legal privileges over their peers. Few szlachta were wealthy enough to be known as magnates (karmazyni—the "Crimsons", from the crimson colour of their boots). A proper magnate should be able to trace noble ancestors back for many generations and own at least 20 villages or estates. He should also hold a major office in the Commonwealth.
Some historians estimate the number of magnates as 1% of the number of szlachta. Out of approx. one million szlachta, tens of thousands of families, only 200–300 persons could be classed as great magnates with country-wide possessions and influence, and 30–40 of them could be viewed as those with significant impact on Poland's politics.
Magnates often received gifts from monarchs, which significantly increased their wealth. Often, those gifts were only temporary leases, which the magnates never returned (in the 16th century, the anti-magnate opposition among the szlachta was known as the ruch egzekucji praw, the movement for the execution of the laws, which demanded that all such possessions be returned to their proper owner, the king).
One of the most important victories of the magnates was the late 16th-century right to create ordynacje (similar to majorats), which ensured that a family which had gained wealth and power could more easily preserve it. The ordynacje of families such as the Radziwiłł, Zamoyski, Potocki and Lubomirski often rivalled the estates of the king and were important power bases for the magnates.
Loss of influence by szlachta
thumb|right|225px|The Peasant Uprising of 1846, the largest peasant uprising against szlachta rules on Polish lands in the 19th century.
The sovereignty of the szlachta was ended in 1795 by the Partitions of Poland, and until 1918 their legal status was dependent on the policies of the Russian Empire, the Kingdom of Prussia and the Habsburg Monarchy.
In the 1840s Nicholas I reduced 64,000 szlachta to commoner status.Richard Pipes, Russia under the old regime, page 181 Despite this, 62.8% of Russia's nobles were szlachta in 1858 and still 46.1% in 1897.Seymour Becker, Nobility and Privilege in late Imperial Russia, page 182
Serfdom was abolished in Russian Poland on February 19, 1864. The reform was deliberately enacted in a way that would ruin the szlachta. Russian Poland was the only area where peasants paid the market price in redemption for the land (the average for the empire was 34% above the market price). All land taken from Polish peasants since 1846 was to be returned without redemption payments. The ex-serfs could sell land only to other peasants, not to szlachta. 90% of the ex-serfs in the empire who actually gained land after 1861 were in the eight western provinces. Along with those of Romania, Polish landless or domestic serfs were the only ones to be given land after serfdom was abolished.The End of the Old Order in Rural Europe, Jerome Blum, page 391. All this was intended to punish the szlachta for their role in the uprisings of 1830 and 1863.
By 1864, 80% of the szlachta were déclassé, a quarter of the petty nobles were worse off than the average serf, and 48.9% of land in Russian Poland was in peasant hands, while nobles still held 46%.Norman Davies, God's playground, pages 182 and 188 In the Second Polish Republic the privileges of the nobility were lawfully abolished by the March Constitution of 1921 and have not been granted by any subsequent Polish law.
Culture of szlachta
The Polish nobility differed in many respects from the nobility of other countries. The most important difference was that, while in most European countries the nobility lost power as the ruler strove for absolute monarchy, in Poland the reverse process occurred: the nobility actually gained power at the expense of the king, and the political system evolved into an oligarchy.
thumb|right|Portrait of Tomasz Czapski, by Józef Pitschmann.
Poland's nobility were also more numerous than those of all other European countries, constituting some 10–12%From Da to Yes: Understanding the East Europeans, p. 51, Yale Richmond, 1995 of the total population of the historic Polish–Lithuanian Commonwealth, and likewise some 10–12% among ethnic Poles on ethnic Polish lands (part of the Commonwealth), but up to 25% of all Poles worldwide (the szlachta could devote more of their resources to travel and conquest), and in some poorer regions (e.g., Mazowsze, the area centred on Warsaw) nearly 30%. However, according to other estimates, the szlachta comprised around 8% of the total population in 1791 (up from 6.6% in the 16th century), and no more than 16% of the Roman Catholic (mostly ethnically Polish) population. The Polish szlachta, moreover, usually incorporated most of the local nobility from the areas absorbed by Poland–Lithuania (Ruthenian boyars, Livonian nobles, etc.). By contrast, the nobilities of other European countries, except for Spain, amounted to a mere 1–3% of the population. The era of the Polish nobility's sovereign rule, however, ended earlier than in other countries (excluding France), in 1795 (see: Partitions of Poland); from then on their legitimation and future fate depended on the legislation and procedures of the Russian Empire, the Kingdom of Prussia and the Habsburg Monarchy. Their privileges were progressively limited and were finally dissolved by the March Constitution of Poland in 1921.
There were a number of avenues to upward social mobility and the achievement of nobility. Poland's nobility was not a rigidly exclusive, closed class. Many low-born individuals, including townsfolk, peasants and Jews, could and did rise to official ennoblement in Polish society. Each szlachcic had enormous influence over the country's politics, in some ways even greater than that enjoyed by the citizens of modern democratic countries. Between 1652 and 1791, any nobleman could nullify all the proceedings of a given sejm (Commonwealth parliament) or sejmik (Commonwealth local parliament) by exercising his individual right of liberum veto (Latin for "I do not allow"), except in the case of a confederated sejm or confederated sejmik.
All children of the Polish nobility inherited their noble status from a noble mother and father. Any individual could attain ennoblement (nobilitacja) for special services to the state. A foreign noble might be naturalised as a Polish noble (Polish: "indygenat") by the Polish king (later, from 1641, only by a general sejm).
In theory at least, all Polish noblemen were social equals, and in theory they were legal peers. Those who held 'real power' dignities were more privileged, but these dignities were not hereditary. Those who held honorary dignities were higher in the 'ritual' hierarchy, but these dignities were also granted for a lifetime. Some tenancies became hereditary and went with both privilege and titles. Nobles who were not direct barons of the Crown but held land from other lords were only peers "de iure".
thumb|right|Count Stanisław Szczęsny Potocki with children, by Johann Baptist von Lampi the Elder.
The poorest enjoyed the same rights as the wealthiest magnate. The exceptions were a few symbolically privileged families such as the Radziwiłł, Lubomirski and Czartoryski, who sported honorary aristocratic titles recognized in Poland or received from foreign courts, such as "Prince" or "Count" (see also The Princely Houses of Poland). All other szlachta simply addressed each other by their given name or as "Sir Brother" (Panie bracie) or the feminine equivalent. The other forms of address would be "Illustrious and Magnificent Lord", "Magnificent Lord", "Generous Lord" or "Noble Lord" (in decreasing order), or simply "His/Her Grace Lord/Lady".
According to their financial standing, the nobility were in common speech divided into:
magnates: the wealthiest class; owners of vast lands, towns, many villages, thousands of peasants
middle nobility (średnia szlachta): owners of one or more villages, often having some official titles or Envoys from the local Land Assemblies to the General Assembly,
petty nobility (drobna szlachta), owners of a part of a village or owning no land at all, often referred to by a variety of colourful Polish terms such as:
szaraczkowa – grey nobility, from their grey, woollen, uncoloured żupans
okoliczna – local nobility, similar to zaściankowa
zagrodowa – from zagroda, a farm, often little different from a peasant's dwelling
zagonowa – from zagon, a small unit of land measure, hide nobility
cząstkowa – partial, owners of only part of a single village
panek – little pan (i.e., lordling), term used in Kaszuby, the Kashubian region, also one of the legal terms for legally separated lower nobility in late medieval and early modern Poland
hreczkosiej – buckwheat sowers – those who had to work their fields themselves.
zaściankowa – from zaścianek, a name for plural nobility settlement, neighbourhood nobility. Just like hreczkosiej, zaściankowa nobility would have no peasants.
brukowa – cobble nobility, for those living in towns like townsfolk
gołota – naked nobility, i.e., the landless. Gołota szlachta would be considered the 'lowest of the high'.
półpanek ("half-lord"; also podpanek/pidpanek ("sub-lord") in Podolia and in Ukrainian pronunciation) – a petty szlachcic pretending to be wealthy.Lwów i Wilno / [publ. by J. Godlewski]. (1948) nr 98
Note that the Polish landed gentry (ziemianie or ziemiaństwo) was composed of any nobility that owned lands: thus, of course, the magnates, the middle nobility, and those lesser nobility who owned at least part of a village. As manorial lordships were also open to burgesses of certain privileged royal cities, not all landed gentry had a hereditary title of nobility.
Heraldry
Coats of arms were very important to the Polish nobility. Its heraldic system evolved together with those of its neighbours in Central Europe, while differing in many ways from the heraldry of other European countries. Polish knighthood families had their counterparts, links or roots in Moravia (e.g. Poraj) and Germany (e.g. Junosza).
The most notable difference is that, contrary to other European heraldic systems, Jews, Muslim Tatars and members of other minorities could be granted noble titles. Also, families sharing a common origin would usually share a coat of arms. They would also share arms with families adopted into the clan (these would often have their arms officially altered upon ennoblement). Sometimes unrelated families would be falsely attributed to a clan on the basis of the similarity of their arms, and noble families often claimed clan membership inaccurately. Logically, the number of coats of arms in this system was rather low and did not exceed 200 in the late Middle Ages (40,000 in the late 18th century).
The tradition of differentiating between the coat of arms proper and a lozenge granted to women did not develop in Poland. Men usually inherited the coat of arms from their fathers, and the brisure was rarely used.
Sarmatism
thumb|right|upright|Hetman Jan Zamoyski, representative of Sarmatism.
The szlachta's prevalent mentality and ideology were manifested in "Sarmatism", a name derived from a myth of the szlachta's origin in the powerful ancient nation of Sarmatians. This belief system became an important part of szlachta culture and affected all aspects of their lives. It was popularized by poets who exalted traditional village life, peace and pacifism. It was also manifested in oriental-style apparel (the żupan, kontusz, sukmana, pas kontuszowy, delia) and made the scimitar-like szabla a near-obligatory item of everyday szlachta dress. Sarmatism served to integrate the multi-ethnic nobility, as it created an almost nationalistic sense of unity and pride in the szlachta's "Golden Liberty" (złota wolność). Knowledge of Latin was widespread, and most szlachta freely mixed Polish and Latin vocabulary (the latter, "macaronisms"—from "macaroni") in everyday conversation.
Religious beliefs
Prior to the Reformation, the Polish nobility were mostly either Roman Catholic or Orthodox, with a small group of Muslims. Many families, however, soon adopted the Reformed faiths. After the Counter-Reformation, when the Roman Catholic Church regained power in Poland, the nobility became almost exclusively Catholic, despite the fact that Roman Catholicism was not the majority religion in the Commonwealth (the Catholic and Orthodox churches each accounted for some 40% of the population, with the remaining 20% being Jews or members of Protestant denominations). In the 18th century, many followers of Jacob Frank joined the ranks of Jewish-descended Polish gentry. Although the Jewish religion was not usually grounds for blocking or withdrawing noble status, some laws favoured religious conversion from Judaism to Christianity (see: Neophyte) by rewarding it with ennoblement. "Ennoblements of Neophytes during the rule of Stanislaw August", Rzeczpospolita daily, Tomasz Lenczewski, 2008
See also
List of Polish titled nobility
List of szlachta
Lithuanian nobility
Polish heraldry
Polish landed gentry (Ziemiaństwo)
Polish name
References
Aleksander Brückner, Słownik etymologiczny języka polskiego (Etymological Dictionary of the Polish Language), first edition, Kraków, Krakowska Spółka Wydawnicza, 1927 (9th edition, Warsaw, Wiedza Powszechna, 2000).
External links
Descendants of the Great Sejm (genealogies of the most important Polish families)
Confederation of the Polish Nobility
Polish Nobility Association Foundation
Association of the Belarusian Nobility
Association of Lithuanian Nobility
The Polish Aristocracy by Rafal Heydel-Mankoo – History of Polish titled families, heraldry, Orders
The Inexorable Political Rise of the szlachta
Short article on The Polish Nobility
Digital Library of Wielkopolska
Central European Superpower, Henryk Litwin, BUM Magazine, 2016.
Winged Hussars, Radoslaw Sikora, Bartosz Musialowicz, BUM Magazine, 2016.
| 29,050 | 2017-01 |
House music | House music is a genre of electronic music created by club DJs and music producers that originated in Chicago in the early 1980s. Early house music was generally dance-based music characterized by repetitive 4/4 beats, rhythms mainly provided by drum machines, off-beat hi-hat cymbals, and synthesized basslines. While house displayed several characteristics similar to disco music, it was more electronic and minimalistic, and the repetitive rhythm of house was more important than the song itself. House music initially became popular in Chicago clubs in 1984, pioneered by figures such as Frankie Knuckles, Phuture, Kym Mazelle,Running free with Kym Mazelle. The Voice Online. Retrieved on December 6, 2016 and Mr. Fingers, and was associated with African-American and gay subcultures. House music quickly spread to other American cities such as Detroit, New York City, Baltimore, and Newark – all of which developed their own regional scenes. In the mid-to-late 1980s, house music became popular in Europe as well as major cities in South America, and Australia.
Early house music's commercial success in Europe saw songs such as "Pump Up The Volume" by MARRS (1987), "House Nation" by House Master Boyz and the Rude Boy of House (1987), "Theme from S'Express" by S'Express (1988) and "Doctorin' the House" by Coldcut (1988) reach the pop charts. Since the early to mid-1990s, house music has been infused into mainstream pop and dance music worldwide. In the late 1980s, many local Chicago house music artists suddenly found themselves presented with major label deals. House music proved to be a commercially successful genre and a more mainstream pop-based variation grew increasingly popular. House music in the 2010s, while keeping several of these core elements, notably the prominent kick drum on every beat, varies widely in style and influence, ranging from the soulful and atmospheric deep house to the more minimalistic microhouse. House music has also fused with several other genres, creating fusion subgenres such as euro house, tech house, electro house and jump house.
Artists and groups such as Madonna, Janet Jackson, Paula Abdul, Aretha Franklin, Bananarama, Diana Ross, Tina Turner, Whitney Houston, Steps, Kylie Minogue, Dannii Minogue, Björk, and C+C Music Factory all incorporated the genre into their work in the 1990s and beyond. After enjoying significant success in the early to mid-90s, house music grew even larger during the second wave of progressive house (1999–2001). The genre has remained popular and fused into other popular subgenres, for example, ghetto house, deep house and tech house. As of 2016, house music remains popular in both clubs and in the mainstream pop scene while retaining a foothold on underground scenes across the globe.
Characteristics
The structure of house music songs typically involves an intro, a chorus, various verse sections, a midsection and an outro. Some songs do not have a verse, instead taking a part of the chorus and repeating the same cycle. The drum beat is one of the more important elements within the genre and is almost always provided by an electronic drum machine. House drum beats are "four on the floor", with a bass drum played on every beat, usually accompanied by off-beat hi-hat sounds from the drum machine.
House music is often based on bass-heavy loops produced by a synthesizer and/or on samples of disco or funk songs. The tempo of most house songs lies between 118 and 135 bpm.
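As a purely illustrative sketch, not part of the source text, the rhythm just described can be written out as a one-bar, 16-step drum grid together with the beat length implied by a typical house tempo; the step placements and the small helper below are assumptions made for illustration only.

# Illustrative sketch: one bar of a "four on the floor" house pattern on a 16-step grid.
# The exact step placements are assumptions, not taken from the article.
STEPS = 16  # 4 beats x 4 sixteenth-note steps
kick = [1 if s % 4 == 0 else 0 for s in range(STEPS)]    # bass drum on every beat
hi_hat = [1 if s % 4 == 2 else 0 for s in range(STEPS)]  # hi-hat on the off-beats

def seconds_per_beat(bpm):
    # Length of one quarter-note beat at the given tempo.
    return 60.0 / bpm

if __name__ == "__main__":
    for name, row in (("kick", kick), ("hi-hat", hi_hat)):
        print(f"{name:6s}", " ".join("x" if hit else "." for hit in row))
    for bpm in (118, 128, 135):  # the quoted tempo range, plus a value inside it
        print(f"{bpm} BPM -> {seconds_per_beat(bpm):.3f} s per beat")

At 128 bpm, for example, each beat lasts roughly 0.469 seconds, so one four-beat bar lasts just under two seconds.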
Influences and precursors
Various disco songs incorporated sounds produced with synthesizers and electronic drum machines, and some compositions were entirely electronic; examples include Giorgio Moroder's late 1970s productions such as Donna Summer's hit single "I Feel Love" from 1977, Cerrone's "Supernature" (1977), Yellow Magic Orchestra's synth-disco-pop productions from Yellow Magic Orchestra (1978), Solid State Survivor (1979), and several early 1980s disco-pop productions by the Hi-NRG group Lime.
Soul music and disco influenced house music, as did the audio mixing and editing techniques earlier explored by disco, garage music and post-disco DJs, record producers, and audio engineers such as Walter Gibbons, Tom Moulton, Jim Burgess, Larry Levan, Ron Hardy, M & M, and others. These artists produced longer, more repetitive, and more percussive arrangements of existing disco recordings. Early house producers such as Frankie Knuckles created similar compositions from scratch, using samplers, synthesizers, sequencers, and drum machines.
The electronic instrumentation and minimal arrangement of Charanjit Singh's Synthesizing: Ten Ragas to a Disco Beat (1982), an album of Indian ragas performed in a disco style, anticipated the sounds of acid house music, but it is not known to have had any influence on the genre prior to the album's rediscovery in the 21st century.
Rachel Cain, co-founder of the influential Trax Records, was previously involved in the burgeoning punk scene and cites the industrial and post-punk record store Wax Trax! Records as an important connection between the ever-changing underground sounds of Chicago. While most proto-house DJs stuck primarily to conventional playlists of dance records, Frankie Knuckles and Ron Hardy, two influential pioneers of house music, were known for their unusual and non-mainstream playlists and mixing. The former, credited as "the Godfather of House", worked primarily with early disco music with a hint of new and different sounds (whether post-punk or post-disco),RBMA (2011). Frankie Knuckles: A journey to the roots of house music. Red Bull Music Academy. Retrieved 2014-06-01. while still enjoying a variety of music; the latter produced unconventional DIY mixtapes, boiling with raw energy, which he later played straight-on at the music club Music Box. Marshall Jefferson, who would later appear with the Chicago house classic "Move Your Body (The House-Music Anthem)" (originally released on Chicago-based Trax Records), got involved in house music after hearing Ron Hardy's music at the Music Box.
Origins (1980s)
Chicago house
thumb|right|200px|An honorary street name sign in Chicago for house music and Frankie Knuckles.
In the early 1980s, Chicago radio jocks The Hot Mix 5, and club DJs Ron Hardy and Frankie Knuckles played a range of styles of dance music, including older disco records (mostly Philly disco and Salsoul tracks), electro funk tracks by artists such as Afrika Bambaataa, newer Italo disco, B-Boy hip hop music by Man Parrish, Jellybean Benitez, Arthur Baker, and John Robie, and electronic pop music by Kraftwerk and Yellow Magic Orchestra. Some DJs made and played their own edits of their favorite songs on reel-to-reel tape, and sometimes mixed in electronic effects, drum machines, and other rhythmic electronic instrumentation.
The hypnotic electronic dance song "On and On", produced in 1984 by Chicago DJ Jesse Saunders and co-written by Vince Lawrence, had elements that became staples of the early house sound, such as the Roland TB-303 bass synthesizer and minimal vocals as well as a Roland (specifically TR-808) drum machine and Korg (specifically Poly-61) synthesizer. It also utilized the bassline from Player One's disco record "Space Invaders" (1979). "On and On" is sometimes cited as the 'first house record',Mitchell, Euan. Interviews: Marshell Jefferson www.4clubbers.net though other examples from around that time, such as J.M. Silk's "Music is the Key" (1985), have also been cited.
Starting in 1984, some of these DJs, inspired by Jesse Saunders' success with "On and On", tried their hand at producing and releasing original compositions. These compositions used newly affordable electronic instruments to emulate not just Saunders' song, but the edited, enhanced styles of disco and other dance music they already favored. These homegrown productions were played on Chicago-area radio and in local discothèques catering mainly to African-American and gay audiences. By 1985, although the exact origins of the term are debated, "house music" encompassed these locally produced recordings. Subgenres of house, including deep house and acid house, quickly emerged and gained traction.
Deep house's origins can be traced to Chicago producer Mr Fingers's relatively jazzy, soulful recordings "Mystery of Love" (1985) and "Can You Feel It?" (1986). According to author Richie Unterberger, it moved house music away from its "posthuman tendencies back towards the lush" soulful sound of early disco music.
Acid house arose from Chicago artists' experiments with the squelchy Roland TB-303 bass synthesizer, and the style's origin on vinyl is generally cited as Phuture's "Acid Tracks" (1987). Phuture, a group founded by Nathan "DJ Pierre" Jones, Earl "Spanky" Smith Jr., and Herbert "Herb J" Jackson, is credited with having been the first to use the TB-303 in the house music context. The group's 12-minute "Acid Tracks" was recorded to tape and was played by DJ Ron Hardy at the Music Box, where Hardy was resident DJ. Hardy once played it four times over the course of an evening until the crowd responded favorably.Cheeseman, Phil. "The History Of House". The track also utilized a Roland TR-707 drum machine.
Club play from pioneering Chicago DJs such as Hardy and Lil Louis, local dance music record shops such as Importes, State Street Records, Loop Records, Gramaphone Records and the popular Hot Mix 5 shows on radio station WBMX-FM helped popularize house music in Chicago. Later, visiting DJs & producers from Detroit fell into the genre. Trax Records and DJ International Records, Chicago labels with wider distribution, helped popularize house music inside and outside of Chicago. One 1986 house tune called "Move Your Body" by Marshall Jefferson, taken from the appropriately titled "The House Music Anthem" EP, became a big hit in Chicago and eventually worldwide. By 1986, UK labels were releasing house music by Chicago acts, and by 1987 house tracks by Chicago DJs and producers were appearing on and topping the UK music chart. By this time, house music released by Chicago-based labels was considered a must-play in clubs.
Origins of the term
The term "house music" is said to have originated from a Chicago club called The Warehouse, which existed from 1977 to 1983.Snoman, Rick (2009). The Dance Music Manual: Tools, Toys, and Techniques — Second Edition. Oxford, UK: Elsevier Press. p.233 Clubbers to The Warehouse were primarily black, who came to dance to music played by the club's resident DJ Frankie Knuckles, whom fans refer to as the "godfather of house". After the Warehouse closed in 1983, the crowds went to Knuckles' new club, The Power Plant. In the Channel 4 documentary Pump Up The Volume, Knuckles remarks that the first time he heard the term "house music" was upon seeing "we play house music" on a sign in the window of a bar on Chicago's South Side. One of the people in the car with him joked, "you know, that's the kind of music you play down at the Warehouse!", and then everybody laughed. South-Side Chicago DJ Leonard "Remix" Roy, in self-published statements, claims he put such a sign in a tavern window because it was where he played music that one might find in one's home; in his case, it referred to his mother's soul & disco records, which he worked into his sets. Farley Jackmaster Funk was quoted as saying "In 1982, I was DJing at a club called The Playground and there was this kid named Leonard 'Remix' Roy who was a DJ at a rival club called The Rink. He came over to my club one night, and into the DJ booth and said to me, 'I've got the gimmick that's gonna take all the people out of your club and into mine – it's called House music.' Now, where he got that name from or what made him think of it I don't know, so the answer lies with him."
Chip E.'s 1985 recording "It's House" may also have helped to define this new form of electronic music. However, Chip E. himself lends credence to the Knuckles association, claiming the name came from methods of labeling records at the Importes Etc. record store, where he worked in the early 1980s: bins of music that DJ Knuckles played at the Warehouse nightclub were labelled in the store "As Heard At The Warehouse", which was shortened to simply "House". Patrons later asked for new music for the bins, which Chip E. implies was a demand the shop tried to meet by stocking newer local club hits.
In a 1986 interview, Rocky Jones, the former club DJ who ran the D.J. International record label, doesn't mention Importes Etc., Frankie Knuckles, or the Warehouse by name, but agrees that "house" was a regional catch-all term for dance music, and that it was once synonymous with older disco music.
Larry Heard, a.k.a. "Mr. Fingers", claims that the term "house" became popular due to many of the early DJs creating music in their own homes, using synthesizers and drum machines such as the Roland TR-808, TR-909, and TB-303. These instruments were used to create a house subgenre called acid house.
Juan Atkins, an originator of Detroit techno music, claims the term "house" reflected the exclusive association of particular tracks with particular clubs and DJs; those records helped differentiate the clubs and DJs, and thus were considered to be their "house" records. In an effort to maintain such exclusives, the DJs were inspired to create their own "house" records.
Lyrical themes
House also played a role in relaying political messages to people who were considered outcasts of society. The music appealed to those who didn't fit into mainstream American society and was especially celebrated by many black males. Frankie Knuckles once said that the Warehouse club in Chicago was like "church for people who have fallen from grace". The house producer Marshall Jefferson compared it to "old-time religion in the way that people just get happy and screamin'". Deep house lyrics also contained messages calling for equality for the black community.
Regional scenes (1980s–1990s)
Detroit sound: 1986–1989
Detroit techno is an offshoot of Chicago house music which developed in the early and mid-1980s, one of the earliest hits being "Big Fun" by Inner City. The sound took shape as the legendary disc jockey The Electrifying Mojo conducted his own radio program at this time, influencing the fusion of eclectic sounds into the signature Detroit techno sound. This sound, also influenced by European electronica (Kraftwerk, Art of Noise), Japanese synthpop (Yellow Magic Orchestra), early b-boy hip-hop (Man Parrish, Soul Sonic Force) and Italo disco (Doctor's Cat, Ris, Klein M.B.O.), was further pioneered by Juan Atkins, Derrick May, and Kevin Saunderson, the "godfathers" of Detroit techno.
Derrick May, a.k.a. "Mayday", and Thomas Barnett released "Nude Photo" in 1987 on May's label Transmat Records, which helped kickstart the Detroit techno music scene; it was put in heavy rotation on Chicago's Hot Mix 5 radio DJ mix show and in many Chicago clubs. A year later, Transmat released what was to become one of techno and house music's classic anthems, the seminal track "Strings of Life". Transmat Records went on to have many more successful releases, such as 1988's "Wiggin". Derrick May also had successful releases on Kool Kat Records and produced many remixes for a host of underground and mainstream recording artists.
Kevin Saunderson's company KMS Records contributed many releases that were as much house music as they were techno. These tracks were well received in Chicago and played on Chicago radio and in clubs. They included Blake Baxter's 1986 recording "When We Used to Play / Work Your Body", 1987's "Bounce Your Body to the Box" and "Force Field", "The Sound / How to Play Our Music", "The Groove That Won't Stop" and a remix of "Grooving Without a Doubt". In 1988, as house music became more popular among general audiences, Kevin Saunderson's group Inner City with Paris Gray released the hits "Big Fun" and "Good Life", which were eventually picked up by Virgin Records. Each EP / 12-inch single sported remixes by Mike "Hitman" Wilson and Steve "Silk" Hurley of Chicago and Derrick "Mayday" May and Juan Atkins of Detroit. In 1989, KMS had another hit with "Rock to the Beat", which became a theme in Chicago dance clubs.
UK: 1986–early 1990s
With house music already massive on the 1980s dance scene, it was only a matter of time before it penetrated the UK pop charts. The record generally credited as the first house hit in the UK was Farley "Jackmaster" Funk's "Love Can't Turn Around", which reached number 10 in the UK Singles Chart in September 1986.
In January 1987, Chicago artist Steve "Silk" Hurley's "Jack Your Body" reached number one in the UK, showing it was possible for house music to cross over. The same month also saw Raze enter the top 20 with "Jack the Groove", and several further house hits reached the top ten that year. Stock Aitken Waterman's productions for Mel and Kim, including the number-one hit "Respectable", added elements of house to their previous Europop sound, and session group Mirage scored top-ten hits with "Jack Mix II" and "Jack Mix IV", medleys of previous electro and Europop hits rearranged in a house style. Key labels in the rise of house music in the UK included:
Jack Trax, which specialised in licensing US club-hits for the British market (and released an influential series of compilation albums)
Rhythm King, which was set up as a hip hop label but also issued house records
Jive Records' Club Records imprint
The March 1987 tour of Knuckles, Jefferson, Fingers Inc. (Heard) and Adonis, billed as the DJ International Tour, boosted house in the UK. Following the number-one success of MARRS' "Pump Up The Volume" in October, the years 1987 to 1989 saw UK acts such as The Beatmasters, Krush, Coldcut, Yazz, Bomb The Bass, S-Express, and Italy's Black Box opening the doors to a house music onslaught on the UK charts. Early British house music quickly set itself apart from the original Chicago house sound; many of the early hits were based on sample montage, rap was used for vocals far more often than in the US, and humor was frequently an important element.
The second best-selling British single of 1988 was an acid house record, the Coldcut-produced "The Only Way Is Up" by Yazz.
thumb|left|Kym Mazelle (born 1960) has been called the "First Lady of House Music."
One of the early anthemic tunes, "Promised Land" by Joe Smooth, was covered and charted within a week by the Style Council. Europeans embraced house, and began booking legendary American house DJs to play at the big clubs, such as Ministry of Sound, whose resident, Justin Berkmann, brought in Larry Levan. In the late 1980s, the American-born singer Kym Mazelle relocated to London after signing a recording contract with EMI Records, and released her first album, Brilliant!, rooted in house music, in 1989.Brilliant! - Kym Mazelle. All Music Guide. Retrieved on December 6, 2016 Mazelle's single "Wait!", featuring Robert Howard, became one of the first international house record hits.
The house scenes in cities such as Birmingham, Leeds, Sheffield and London were also served by many underground pirate radio stations and DJs, which helped bolster a genre that was contagious yet otherwise ignored by the mainstream. Early and influential UK house and techno record labels such as Warp Records and Network Records (otherwise known as Kool Kat Records) helped introduce American and, later, Italian dance music to Britain, as well as promoting select UK dance music acts.
House was also being developed on Ibiza, although no house artists or labels were coming from this tiny island at the time. By the mid-1980s a distinct Balearic mix of house was discernible. Several clubs, such as Amnesia with DJ Alfredo, were playing a mix of rock, pop, disco and house. These clubs, fueled by their distinctive sound and Ecstasy, began to have an influence on the British scene. By late 1987, DJs such as Trevor Fung, Paul Oakenfold and Danny Rampling were bringing the Ibiza sound to UK clubs such as the Haçienda in Manchester, and in London clubs such as Shoom in Southwark, Heaven, Future and Spectrum.
In the U.S., the music was being developed to create a more sophisticated sound, moving beyond just drum loops and short samples. In Chicago, Marshall Jefferson had formed the house group Ten City with Byron Burke, Byron Stingily and Herb Lawson (the name taken from "intensity"). New York–based performers such as Mateo & Matos and Blaze had slickly produced disco-house tracks. In Detroit a proto-techno sound began to emerge with the recordings of Juan Atkins, Derrick May and Kevin Saunderson.
Atkins, a former member of Cybotron, released "No UFOs" as Model 500 in 1985, which became a regional hit, followed by dozens of tracks on Transmat, Metroplex and Fragile. One of the most unusual was Derrick May's "Strings of Life", a darker, more intellectual strain of house. "Techno-Scratch", released by the Knights Of The Turntable in 1984, had a similar techno sound to Cybotron. The manager of the Factory nightclub and co-owner of the Haçienda, Tony Wilson, also promoted acid house culture on his weekly TV show. The Midlands also embraced the late 1980s house scene with illegal parties and more legal dance clubs such as The Hummingbird.
US: late 1980s–early 1990s
thumb|right|Building in New York City where the Paradise Garage nightclub was located
Back in America the scene had still not progressed beyond a small number of clubs in Chicago, Detroit, Newark and New York City. However, many independent Chicago-based record labels were making appearances on the dance chart with their releases. In the UK, any house song released by a Chicago-based label was routinely considered a must-play at many clubs playing house music. Paradise Garage in New York City was still a top club. The emergence of Todd Terry, a pioneer of the genre, was important in America. His cover of Class Action's "Weekend" (mixed by Larry Levan) demonstrated the continuum from underground disco to a new house sound with hip-hop influences, evident in the quicker sampling and the more rugged bassline.
In the late 1980s, Nu Groove Records prolonged, if not launched, the careers of Rheji Burrell and Rhano Burrell, collectively known as Burrell (after a brief stay on Virgin America via Timmy Regisford and Frank Mendez), along with many of the relevant DJs and producers in the New York underground scene. The Burrells are credited with shaping the "New York Underground" sound, with more than 30 releases on the label to their name. Today, Nu Groove releases like the Burrells' enjoy a cult following, and mint vinyl copies can fetch US$100 or more on the open market.
By the late 1980s, house had moved west, particularly to San Francisco, Oakland, Los Angeles, Fresno, San Diego and Seattle. Los Angeles saw a huge explosion of underground raves and DJs, notably Marques Wyatt and Billy Long, who spun at Jewel's Catch One, the oldest dance club in America. In 1989, the L.A.-based former EBN-OZN singer/rapper Robert Ozn started the indie house label One Voice Records, releasing the Mike "Hitman" Wilson remix of Dada Nada's "Haunted House", which garnered instant club and mix-show radio play in Chicago, Detroit and New York as well as in the U.K. and France. The record reached number five on the Billboard club chart, making it the first house record by a white artist to chart in the U.S. In 1990, Dada Nada, the moniker for Ozn's solo act, released what has become a classic example of jazz-based deep house: the Frankie Knuckles and David Morales remix of "Deep Love" (One Voice Records/US, Polydor/UK), featuring Ozn's lush, crooning vocals and muted-trumpet improvisational solos. It underscored deep house's progression into a genre that integrated jazz and pop songwriting structures, a feature which continued to set it apart from acid house and techno.
The early 1990s additionally saw the rise in mainstream US popularity for house music. Pop recording artist Madonna's 1990 single "Vogue" became an international hit single and topped the US charts. The single is credited as helping to bring house music to the US mainstream.
The influential, gospel- and R&B-inspired group Aly-us released "Time Passes On" in 1993 (Strictly Rhythm), and later "Follow Me", which received radio airplay as well as club play. Another U.S. hit which received radio play was Cajmere's single "Time for the Perculator", which became the prototype of the ghetto house subgenre. Cajmere started the Cajual and Relief labels (amongst others). By the early 1990s, artists such as Cajmere himself (under that name, as Green Velvet, and as producer for Dajae), DJ Sneak, Glenn Underground and others released many recordings. The 1990s saw new Chicago house artists emerge, such as DJ Funk, who operates the Chicago house record label Dance Mania. Ghetto house and acid house were other house music styles that also started in Chicago.
Late 1980s–1990s
In Britain, further experiments in the genre boosted its appeal. House and rave clubs such as Lakota and Cream emerged across Britain, hosting house and dance scene events. The 'chilling out' concept developed in Britain with ambient house albums such as The KLF's Chill Out and Aphex Twin's Analogue Bubblebath. The Godskitchen superclub brand also began in the midst of the early 1990s rave scene. After initially hosting small nights in Cambridge and Northampton, the associated events scaled up in Milton Keynes, Birmingham and Leeds. A new indie dance scene also emerged in the 1990s. In New York, bands such as Deee-Lite furthered house's international influence. Two distinctive tracks from this era were the Orb's "Little Fluffy Clouds" (with a distinctive vocal sample from Rickie Lee Jones) and the Happy Mondays' "Wrote for Luck" ("WFL"), which was transformed into a dance hit by Vince Clarke.
In England, one of the few licensed venues, The Eclipse, attracted people from up and down the country, as it was open until the early hours. The Criminal Justice and Public Order Act 1994 was a government attempt to ban large rave dance events featuring music with "repetitive beats". There were a number of abortive "Kill the Bill" demonstrations. The Spiral Tribe free party at Castlemorton Common was probably the final nail in the coffin for illegal raves, helping to force through the bill, which became law in November 1994. The music continued to grow and change, as typified by Leftfield's "Release the Pressure", which introduced dub and reggae into the house sound, although Leftfield had prior releases, such as "Not Forgotten", released in 1990 on Sheffield's Outer Rhythm label.
A new generation of clubs such as Liverpool's Cream and the Ministry of Sound were opened to provide a venue for more commercial sounds. Major record companies began to open "superclubs" promoting their own acts. These superclubs entered into sponsorship deals, initially with fast food, soft drink and clothing companies. Flyers in clubs in Ibiza often sported many corporate logos. A new subgenre, Chicago hard house, was developed by DJs such as Bad Boy Bill, DJ Lynnwood, DJ Irene, Richard "Humpty" Vission and DJ Enrie, mixing elements of Chicago house, funky house and hard house. Producers such as George Centeno, Darren Ramirez and Martin O. Cairo developed the Los Angeles hard house sound. Similar to gabber or hardcore techno from the Netherlands, this sound was often associated with the "rebel" culture of the time. These three producers are often considered "ahead of their time", since many of the sounds they engineered during the late 20th century became more prominent during the 21st century.
Towards the end of the 1990s and into the 2000s, producers such as Daft Punk, Stardust, Cassius, St. Germain and DJ Falcon began producing a new sound out of Paris's house scene. Together, they laid the groundwork for what would be known as the French house movement. By combining the harder-edged-yet-soulful philosophy of Chicago house with the melodies of obscure funk, state-of-the-art production techniques and the sound of analog synthesizers, they began to create the standards that would shape all house music.
21st century
2000s
Chicago Mayor Richard M. Daley proclaimed August 10, 2005 to be "House Unity Day" in Chicago, in celebration of the "21st anniversary of house music" (actually the 21st anniversary of the founding of Trax Records, an independent Chicago-based house label). The proclamation recognized Chicago as the original home of house music and that the music's original creators "were inspired by the love of their city, with the dream that someday their music would spread a message of peace and unity throughout the world". DJs such as Frankie Knuckles, Marshall Jefferson, Paul Johnson and Mickey Oliver celebrated the proclamation at the Summer Dance Series, an event organized by Chicago's Department of Cultural Affairs.
It was during this decade that vocal house became firmly established, both in the underground and as part of the pop market, and labels such as Defected Records, Roule and Om were at the forefront of championing the emerging sound. In the mid-2000s, fusion genres such as electro house and fidget house emerged. This fusion is apparent in the crossover of musical styles by artists such as Dennis Ferrer and Booka Shade, with the former's production style having evolved from the New York soulful house scene and the latter's roots in techno.
Numerous live performance events dedicated to house music were founded during the course of the decade, including Shambhala Music Festival and major industry sponsored events like Miami's Winter Music Conference. The genre even gained popularity through events like Creamfields.
In the late 2000s, house witnessed renewed chart success thanks to acts such as Daft Punk, Deadmau5, Fedde Le Grand, David Guetta and Calvin Harris.
2010s
thumb|Swedish House Mafia performing in 2011.
The 2010s saw multiple new sounds in house music developed by numerous DJs. Sweden produced a prominent, snare-less "Swedish progressive house" sound with the emergence of Sebastian Ingrosso, Axwell and Steve Angello (who together formed the trio Swedish House Mafia), as well as Avicii, Alesso and others. The Netherlands brought the concept of "Dirty Dutch", an electro house subgenre characterized by very abrasive leads and darker arpeggios, with prominent DJs including Chuckie, Hardwell, Laidback Luke, Afrojack, R3hab, Bingo Players, Quintino, Alvaro, Cedric Gervais and 2G, among others. Elsewhere, fusion genres derived from 2000s progressive house returned to prominence, especially with the help of DJs such as Calvin Harris, Eric Prydz, Mat Zo, Above & Beyond and Fonzerelli in Europe, and Deadmau5, Kaskade, Steve Aoki, Porter Robinson and Wolfgang Gartner in the US and Canada. The growing popularity of such artists led to electro house and progressive house blended sounds appearing in popular music, such as Lady Gaga's "Marry the Night", The Black Eyed Peas' "The Best One Yet (The Boy)" and the will.i.am and Britney Spears single "Scream & Shout". Big room house has found increasing popularity since 2010, particularly through international dance music festivals such as Tomorrowland, Ultra Music Festival, and Electric Daisy Carnival.
In addition to these popular examples of house, there has also been a reunification of contemporary house and its roots. Many hip hop and R&B artists have also turned to house music to add mass appeal to the music they produce.
Tropical house surged onto the top 40 on the UK Singles Chart in 2015 with artists such as Kygo and Jonas Blue.
The mid-2010s saw house integrate with K-pop, with artists such as f(x) incorporating the genre on their single "4 Walls".
Events
Chosen Few is an annual event in Chicago that celebrates house music in its birthplace. Started in 1990 as a gathering of house music artists and their friends and families, by the 2010s it had grown into an annual event with live performances by DJs and artists from around the world.
See also
House dance
List of electronic music genres
List of house music artists
Styles of house music
Notes
Further reading
Bidder, Sean (2002). Pump Up the Volume: A History of House Music, MacMillan. ISBN 0-7522-1986-3
Bidder, Sean (1999). The Rough Guide to House Music, Rough Guides. ISBN 1-85828-432-5
Brewster, Bill & Broughton, Frank (2000). Last Night a DJ Saved My Life: The History of the Disc Jockey, Grove Press. ISBN 0-8021-3688-5; UK editions: Headline, 1999 / 2006.
Fikentscher, Kai (2000). "You Better Work!" Underground Dance Music in New York City. Middletown, Connecticut: Wesleyan University Press. ISBN 0-8195-6404-4
Hewitt, Michael. Music Theory for Computer Musicians. 1st Ed. U.S. Cengage Learning, 2008. ISBN 978-1-59863-503-4
Kempster, Chris (Ed) (1996). History of House, Castle Communications. ISBN 1-86074-134-7 (A reprinting of magazine articles from the 1980s and 90s)
Silcott, Mireille (1999). Rave America: New School Dancescapes, ECW Press. ISBN 1-55022-383-6
Reynolds, Simon (1998). Energy Flash: a Journey Through Rave Music and Dance Culture, (UK title, Pan Macmillan. ISBN 0-330-35056-0), also released in U.S. as Generation Ecstasy : Into the World of Techno and Rave Culture (U.S. title, Routledge, 1999, ISBN 0-415-92373-5)
Rizza, Corrado & Trani, Marco (2010). I Love the Nightlife, Wax Production (Roma).
Shapiro, P., (2000), Modulations: A History of Electronic Music: Throbbing Words on Sound, ISBN 1-891024-06-X.
Snoman, Rick (2009). The Dance Music Manual: Tools, Toys, and Techniques — Second Edition: Chapter 11: House. Oxford, UK: Elsevier Press. p. 231–249.
Rietveld, Hillegonda C. (1998). This is our House: House Music, Cultural Spaces and Technologies, Ashgate. ISBN 1-85742-242-2
External links
The History of House (2004) HouseKeeping: Funky House DJs from the UK
Excerpt taken From the book, What Kind Of House Party Is This?
History of House History of House music and legal MP3 DJ mixes.
Category:African-American music
Category:Electronic dance music genres
Category:African-American culture
Category:LGBT African-American culture
Category:Music of Chicago
Category:1980s in music
Czech language

Czech (čeština), historically also Bohemian (lingua Bohemica in Latin), is a West Slavic language of the Czech–Slovak group, strongly influenced by Latin (http://babel.mml.ox.ac.uk/naughton/lit_to_1918.html, University of Oxford) and German (http://slavic.ucla.edu/czech/czech-republic/, University of California, Los Angeles).
It is spoken by over 10 million people and is the official language of the Czech Republic. Czech is closely related to Slovak, to the point of being mutually intelligible to a very high degree (http://link.springer.com/article/10.1007/s11185-015-9150-9).
The Czech-Slovak group developed within West Slavic in the high medieval period, and the standardisation of Czech and Slovak within the Czech–Slovak dialect continuum emerges in the early modern period. In the later 18th to mid-19th century, the modern written standard was codified in the context of the Czech National Revival. The main vernacular, known as Common Czech, is based on the vernacular of Prague, but is now spoken throughout most of the Czech Republic. The Moravian dialects spoken in the eastern part of the country are mostly also counted as Czech, although some of their eastern variants are closer to Slovak.
The Czech phoneme inventory is moderate in size, comprising five vowels (each short or long) and twenty-five consonants (divided into "hard", "neutral" and "soft" categories). Words may contain uncommon (or complicated) consonant clusters, including one consonant represented by the grapheme ř, or lack vowels altogether. Czech orthography is simple, and has been used as a model by phonologists.
Classification
thumb|right|350px|alt=Language-tree graph|Classification of Czech within the Balto-Slavic branch of the Indo-European language family. Czech and Slovak make up a "Czech–Slovak" subgroup.
Czech is classified as a member of the West Slavic sub-branch of the Slavic branch of the Indo-European language family. This branch includes Polish, Kashubian, Upper and Lower Sorbian and Slovak. Slovak is by far the closest genetic neighbor of Czech, and the languages are closer than any other pair of West Slavic languages (including Upper and Lower Sorbian, which share a name by association with an ethnic group).
The West Slavic languages are spoken in an area classified as part of Central Europe. Except for Polish they differ from East and South Slavic languages by their initial-syllable stress, and Czech is distinguished from other West Slavic languages by a more-restricted distinction between "hard" and "soft" consonants (see Phonology below).
History
thumb|left|150px|The Bible of Kralice was the first complete translation of the Bible into the Czech language. Its six volumes were first published between 1579 and 1593.|alt=A Gothic-style book with ornate, flowery designs on the cover
Old Czech
Around the 7th century, the Slavic expansion reached Central Europe, settling on the eastern fringes of the Frankish Empire. The West Slavic polity of Great Moravia formed by the 9th century. The Christianization of Bohemia took place during the 9th and 10th centuries. The diversification of the Czech-Slovak group within West Slavic began around that time, marked among other things by its ephemeral use of the voiced velar fricative consonant (/ɣ/) and consistent stress on the first syllable.
The Bohemian (Czech) language is first recorded in writing in glosses and short notes during the 12th to 13th centuries. Administrative documents written in Czech first appear towards the late 14th century. The first Bible translation also dates to this period. Old Czech texts, including poetry and cookbooks, were produced outside the university as well.
Literary activity becomes widespread in the early 15th century in the context of the Bohemian Reformation. The term "Old Czech" is applied to the period predating the 16th century, with the earliest records of the high medieval period also classified as "early Old Czech".
Jan Hus contributed significantly to the standardization of Czech orthography, advocated for widespread literacy among Czech commoners (particularly in religion) and made early efforts to model written Czech after the spoken language.
There was no standardisation distinguishing between Czech and Slovak prior to the 15th century. In the 16th century, the division between Czech and Slovak became apparent, marking the confessional division between Lutheran Protestants in Slovakia using Czech orthography and Catholics, especially Slovak Jesuits, who began to use a separate Slovak orthography based on the language of the Trnava region.
The publication of the Kralice Bible, between 1579 and 1593, spawned widespread nationalism, and in 1615 the government of Bohemia ruled that only Czech-speaking residents would be allowed to become full citizens or inherit goods or land.
This, and the conversion of the Czech upper classes from the Habsburg Empire's Catholicism to Protestantism, angered the Habsburgs and helped trigger the Thirty Years' War (where the Czechs were defeated at the Battle of White Mountain).
The Czechs became serfs; Bohemia's printing industry (and its linguistic and political rights) were dismembered, removing official regulation and support from its language. German quickly became the dominant language in Bohemia.
Modern Czech
thumb|right|Josef Dobrovský, whose writing played a key role in reviving Czech as a written language|alt=In a detailed pencil sketch, a middle-aged man in a suit looks idly into the distance.
The modern standard Czech language originates in standardisation efforts of the 18th century. By then the language had developed a literary tradition, and since then it has changed little; journals from that period have no substantial differences from modern standard Czech, and contemporary Czechs can understand them with little difficulty. Changes include the morphological shift of í to ej and é to í (although é survives for some uses) and the merging of í and the former ejí. Sometime before the 18th century, the Czech language abandoned a distinction between phonemic /l/ and /ʎ/ which survives in Slovak.
thumb|Prohibition signs written in Czech, at entrance No. 3 of the National Technical Library in Prague.
With the beginning national revival of the mid-18th century, Czech historians began to emphasize their people's accomplishments from the 15th through the 17th centuries, rebelling against the Counter-Reformation (the Habsburg re-catholization efforts which had denigrated Czech and other non-Latin languages). Czech philologists studied sixteenth-century texts, advocating the return of the language to high culture. This period is known as the Czech National Revival (or Renaissance).
During the national revival, in 1809 linguist and historian Josef Dobrovský released a German-language grammar of Old Czech entitled Ausführliches Lehrgebäude der böhmischen Sprache (Comprehensive Doctrine of the Bohemian Language). Dobrovský had intended his book to be descriptive, and did not think Czech had a realistic chance of returning as a major language. However, Josef Jungmann and other revivalists used Dobrovský's book to advocate for a Czech linguistic revival. Changes during this time included spelling reform (notably, í in place of the former j and j in place of g), the use of t (rather than ti) to end infinitive verbs and the non-capitalization of nouns (which had been a late borrowing from German). These changes differentiated Czech from Slovak. Modern scholars disagree about whether the conservative revivalists were motivated by nationalism or considered contemporary spoken Czech unsuitable for formal, widespread use.
Adherence to historical patterns was later relaxed and standard Czech adopted a number of features from Common Czech (a widespread, informal register), such as leaving some proper nouns undeclined. This has resulted in a relatively high level of homogeneity among all varieties of the language.
Geographic distribution
thumb|right|A map of the languages of Central and Eastern Europe. Within the Czech Republic, Standard Czech is represented by dark yellow (C1) and Moravian dialects by medium yellow (C2) and light green (C3).|alt=Eastern European countries are shown on a map. The Czech Republic, the westernmost of these, is shaped a bit like a jagged horizontal oval, and it is covered by the color representing the Czech language and, at its borders, a little by languages from Poland and Slovakia.
thumb|right|alt=Map of Vojvodina, a province of Serbia, with Czech in official use in one southeastern municipality|Official use of Czech in Vojvodina, Serbia
In 2005 and 2007, Czech was spoken by about 10 million residents of the Czech Republic. A Eurobarometer survey conducted from January to March 2012 found that the first language of 98 percent of Czech citizens was Czech, the third-highest in the European Union (behind Greece and Hungary).
Czech, the official language of the Czech Republic (a member of the European Union since 2004), is one of the EU's official languages and the 2012 Eurobarometer survey found that Czech was the foreign language most often used in Slovakia. Economist Jonathan van Parys collected data on language knowledge in Europe for the 2012 European Day of Languages. The five countries with the greatest use of Czech were the Czech Republic (98.77 percent), Slovakia (24.86 percent), Portugal (1.93 percent), Poland (0.98 percent) and Germany (0.47 percent).
Czech speakers in Slovakia primarily live in cities. Since it is a recognised minority language in Slovakia, Slovak citizens who speak only Czech may communicate with the government in their language to the extent that Slovak speakers in the Czech Republic may do so.
United States
Immigration of Czechs from Europe to the United States occurred primarily from 1848 to 1914. Czech is a Less Commonly Taught Language in U.S. schools, and is taught at Czech heritage centers. Large communities of Czech Americans live in the states of Texas, Nebraska and Wisconsin. In the 2000 United States Census, Czech was reported as the most-common language spoken at home (besides English) in Valley, Butler and Saunders Counties, Nebraska and Republic County, Kansas. With the exception of Spanish (the non-English language most commonly spoken at home nationwide), Czech was the most-common home language in over a dozen additional counties in Nebraska, Kansas, Texas, North Dakota and Minnesota. As of 2009, 70,500 Americans spoke Czech as their first language (49th place nationwide, behind Turkish and ahead of Swedish).
Varieties
The main vernacular is "Common Czech", based on the dialect of the Prague region.
Other Bohemian dialects have become marginalized, while Moravian dialects remain more widespread, with a political movement for Moravian linguistic revival active since the 1990s.
Common Czech
The main Czech vernacular, spoken primarily near Prague but also throughout the country, is known as Common Czech (obecná čeština). This is an academic distinction; most Czechs are unaware of the term or associate it with vernacular (or incorrect) Czech. Compared to standard Czech, Common Czech is characterized by simpler inflection patterns and differences in sound distribution.
Common Czech has become ubiquitous in most parts of the Czech Republic since the later 20th century. It is usually defined as an interdialect used in common speech in Bohemia and western parts of Moravia (by about two thirds of all inhabitants of the Czech Republic). Common Czech is not codified, but some of its elements have become adopted in the written standard.
Since the second half of the 20th century, Common Czech elements have also been spreading to regions previously unaffected, as a consequence of media influence.
Standard Czech is still the norm for politicians, businesspeople and other Czechs in formal situations, but Common Czech is gaining ground in journalism and the mass media.
Common Czech is characterized by quite regular differences from the standard morphology and phonology. These variations are more or less common to all Common Czech dialects:
é usually replaced by ý/í: malý město (small town), plamínek (little flame), lítat (to fly);
ý (sometimes also í) replaced by ej: malej dům (small house), mlejn (mill), plejtvat (to waste), bejt (to be) – as a consequence of the loss of the difference in the pronunciation of y/ý and i/í in the 15th century;
unified plural endings of adjectives: malý lidi (small people), malý ženy (small women), malý města (small towns) – stand.: malí lidé, malé ženy, malá města;
unified instrumental ending -ma in plural: s těma dobrejma lidma, ženama, chlapama, městama (with the good people, women, guys, towns) – stand.: s těmi dobrými lidmi, ženami, chlapy, městy (in essence, this form resembles the form of the dual, which was once a productive form, but now is almost extinct, except a few examples; in Common Czech it can often be used indiscriminately, i.e. it can substitute a regular plural form, not just as it was once used);
prothetic v- added to most words beginning with o-: votevřít vokno (to open the window) – stand.: otevřít okno; but ovoce, not *vovoce (fruit)
omission of the syllabic -l in the masculine ending of past tense verbs: řek (he said), moh (he could), pích (he pricked) – stand.: řekl, mohl, píchl.
Example of declension, with each cell giving the Common Czech form followed by the standard Czech form:
Case | Masculine animate | Masculine inanimate | Feminine | Neuter
Sg. Nominative | mladej člověk / mladý člověk | mladej stát / mladý stát | mladá žena / mladá žena | mladý zvíře / mladé zvíře
Sg. Genitive | mladýho člověka / mladého člověka | mladýho státu / mladého státu | mladý ženy / mladé ženy | mladýho zvířete / mladého zvířete
Sg. Dative | mladýmu člověkovi / mladému člověku | mladýmu státu / mladému státu | mladý ženě / mladé ženě | mladýmu zvířeti / mladému zvířeti
Sg. Accusative | mladýho člověka / mladého člověka | mladej stát / mladý stát | mladou ženu / mladou ženu | mladý zvíře / mladé zvíře
Sg. Vocative | mladej člověče! / mladý člověče! | mladej státe! / mladý státe! | mladá ženo! / mladá ženo! | mladý zvíře! / mladé zvíře!
Sg. Locative | mladym člověkovi / mladém člověkovi | mladym státě / mladém státě | mladý ženě / mladé ženě | mladym zvířeti / mladém zvířeti
Sg. Instrumental | mladym člověkem / mladým člověkem | mladym státem / mladým státem | mladou ženou / mladou ženou | mladym zvířetem / mladým zvířetem
Pl. Nominative | mladý lidi / mladí lidé | mladý státy / mladé státy | mladý ženy / mladé ženy | mladý zvířata / mladá zvířata
Pl. Genitive | mladejch lidí / mladých lidí | mladejch států / mladých států | mladejch žen / mladých žen | mladejch zvířat / mladých zvířat
Pl. Dative | mladejm lidem / mladým lidem | mladejm státům / mladým státům | mladejm ženám / mladým ženám | mladejm zvířatům / mladým zvířatům
Pl. Accusative | mladý lidi / mladé lidi | mladý státy / mladé státy | mladý ženy / mladé ženy | mladý zvířata / mladá zvířata
Pl. Vocative | mladý lidi! / mladí lidé! | mladý státy! / mladé státy! | mladý ženy! / mladé ženy! | mladý zvířata! / mladá zvířata!
Pl. Locative | mladejch lidech / mladých lidech | mladejch státech / mladých státech | mladejch ženách / mladých ženách | mladejch zvířatech / mladých zvířatech
Pl. Instrumental | mladejma lidma / mladými lidmi | mladejma státama / mladými státy | mladejma ženama / mladými ženami | mladejma zvířatama / mladými zvířaty
mladý člověk – young man/person, mladí lidé – young people, mladý stát – young state, mladá žena – young woman, mladé zvíře – young animal
Bohemian dialects
Apart from the Common Czech vernacular, there remain a variety of other Bohemian dialects, mostly in marginal rural areas. Dialect use began to weaken in the second half of the 20th century, and by the early 1990s it was stigmatized, associated with the shrinking lower class and used in literature or other media for comedic effect.
Increased travel and media availability to dialect-speaking populations has encouraged them to shift to (or add to their own dialect) standard Czech.
Although Czech has received considerable scholarly interest for a Slavic language, this interest has focused primarily on modern standard Czech and historical texts rather than dialects.
The Czech Statistical Office in 2003 recognized the following Bohemian dialects:
Nářečí středočeská (Central Bohemian dialects)
Nářečí jihozápadočeská (Southwestern Bohemian dialects)
Podskupina chodská (Chod subgroup)
Podskupina doudlebská (Doudleby subgroup)
Nářečí severovýchodočeská (Northeastern Bohemian dialects)
Podskupina podkrknošská (Krkonoše subgroup)
Moravian dialects
The Czech dialects spoken in Moravia and Silesia are known as Moravian (moravština). In the Austro-Hungarian Empire, "Bohemian-Moravian-Slovak" was a language citizens could register as speaking (with German, Polish and several others). Of the Czech dialects, only Moravian is distinguished in nationwide surveys by the Czech Statistical Office. As of 2011, 62,908 Czech citizens spoke Moravian as their first language and 45,561 were diglossal (speaking Moravian and standard Czech as first languages).
Beginning in the sixteenth century, some varieties of Czech resembled Slovak; the southeastern Moravian dialects, in particular, are sometimes considered dialects of Slovak rather than Czech. These dialects form a continuum between the Czech and Slovak languages, using the same declension patterns for nouns and pronouns and the same verb conjugations as Slovak.
The Czech Statistical Office in 2003 recognized the following Moravian dialects:
Nářečí českomoravská (Bohemian–Moravian dialects)
Nářečí středomoravská (Central Moravian dialects)
Podskupina tišnovská (Tišnov subgroup)
Nářečí východomoravská (Eastern Moravian dialects)
Podskupina slovácká (Moravian Slovak subgroup)
Podskupina valašská (Moravian Wallachian subgroup)
Nářečí slezská (Silesian dialects)
Sample
In a 1964 textbook on Czech dialectology, Břetislav Koudela used the following sentence to highlight phonetic differences between dialects:
Standard Czech: Dej mouku ze mlýna na vozík.
Common Czech: Dej mouku ze mlejna na vozejk.
Central Moravian: Dé móku ze mléna na vozék.
Eastern Moravian: Daj múku ze młýna na vozík.
Silesian: Daj muku ze młyna na vozik.
Slovak: Daj múku z mlyna na vozík.
English: Put the flour from the mill into the cart.
Mutual intelligibility
Czech and Slovak have been considered mutually intelligible; speakers of either language can communicate with greater ease than those of any other pair of West Slavic languages. Since the 1993 dissolution of Czechoslovakia mutual intelligibility has declined for younger speakers, probably because Czech speakers now experience less exposure to Slovak and vice versa.
In terms of phonetic differences, Czech is characterized by a glottal stop before initial vowels, and Slovak by its less-frequent use of long vowels than Czech; however, Slovak has long forms of the consonants r and l when they function as vowels. Phonemic differences between the two languages are generally consistent, typical of two dialects of a language. Grammatically, although Czech (unlike Slovak) has a vocative case, both languages share a common syntax.
One study showed that Czech and Slovak lexicons differed by 80 percent, but this high percentage was found to stem primarily from differing orthographies and slight inconsistencies in morphological formation; Slovak morphology is more regular (when changing from the nominative to the locative case, Praha becomes Praze in Czech and Prahe in Slovak). The two lexicons are generally considered similar, with most differences found in colloquial vocabulary and some scientific terminology. Slovak has slightly more borrowed words than Czech.
The similarities between Czech and Slovak led to the languages being considered a single language by a group of 19th-century scholars who called themselves "Czechoslavs" (Čechoslované), believing that the peoples were connected in a way which excluded German Bohemians and (to a lesser extent) Hungarians and other Slavs. During the First Czechoslovak Republic (1918–1938), although "Czechoslovak" was designated as the republic's official language both Czech and Slovak written standards were used. Standard written Slovak was partially modeled on literary Czech, and Czech was preferred for some official functions in the Slovak half of the republic. Czech influence on Slovak was protested by Slovak scholars, and when Slovakia broke off from Czechoslovakia in 1938 as the Slovak State (which then aligned with Nazi Germany in World War II) literary Slovak was deliberately distanced from Czech. When the Axis powers lost the war and Czechoslovakia reformed, Slovak developed somewhat on its own (with Czech influence); during the Prague Spring of 1968, Slovak gained independence from (and equality with) Czech. Since then, "Czechoslovak" refers to improvised pidgins of the languages which have arisen from the decrease in mutual intelligibility.
Vocabulary
Czech vocabulary derives primarily from Slavic, Baltic and other Indo-European roots. Although most verbs have Balto-Slavic origins, pronouns, prepositions and some verbs have wider, Indo-European roots. Some loanwords have been restructured by folk etymology to resemble native Czech words (hřbitov, "graveyard" and listina, "list").
Most Czech loanwords originated in one of two time periods. Earlier loanwords, primarily from German, Greek and Latin, arrived before the Czech National Revival. More recent loanwords derive primarily from English and French, and also from Hebrew, Arabic and Persian. Many Russian loanwords, principally animal names and naval terms, also exist in Czech.
Although older German loanwords were colloquial, recent borrowings from other languages are associated with high culture. During the nineteenth century, words with Greek and Latin roots were rejected in favor of those based on older Czech words and common Slavic roots; "music" is muzyka in Polish and музыка (muzyka) in Russian, but in Czech it is hudba. Some Czech words have been borrowed as loanwords into English and other languages—for example, robot (from robota, "labor") and polka (from polka, "Polish woman" or from "půlka" "half").
Standard Czech
The modern written standard is directly based on the standardisation during the Czech National Revival in the 1830s, significantly influenced by Josef Jungmann's Czech-German dictionary published during 1834–1839. Jungmann used vocabulary of the Bible of Kralice (1579–1613) period and of the language used by his contemporaries. He borrowed words not present in Czech from other Slavic languages or created neologisms.
Phonology
Czech contains ten basic vowel phonemes, and three more found only in loanwords. They are /a/, /ɛ/, /ɪ/, /o/ and /u/, their long counterparts /aː/, /ɛː/, /iː/, /oː/ and /uː/, and three diphthongs, /ou̯/, /au̯/ and /eu̯/. The latter two diphthongs and the long /oː/ are exclusive to loanwords. Vowels are never reduced to schwa sounds when unstressed. Each word usually has primary stress on its first syllable, except for enclitics (minor, monosyllabic, unstressed words). In all words of more than two syllables, every odd-numbered syllable receives secondary stress. Stress is unrelated to vowel length, and the possibility of stressed short vowels and unstressed long vowels can be confusing to students whose native language combines the features (such as English).
Voiced consonants with unvoiced counterparts are unvoiced at the end of a word, or when they are followed by unvoiced consonants. Czech consonants are categorized as "hard", "neutral" or "soft":
Hard: d, g, h, ch, k, n, r, t
Neutral: b, f, l, m, p, s, v, z
Soft: c, č, ď, j, ň, ř, š, ť, ž
This distinction describes the declension patterns of nouns, which is based on the category of a noun's ending consonant. Hard consonants may not be followed by i or í in writing, or soft ones by y or ý (except in loanwords such as kilogram). Neutral consonants may take either character. Hard consonants are sometimes known as "strong", and soft ones as "weak".
The phoneme represented by the letter ř (capital Ř) is considered unique to Czech. It represents the raised alveolar non-sonorant trill (IPA: [r̝]), a sound somewhere between Czech's r and ž, and is present in Dvořák.
The consonants /r/ and /l/ can be syllabic, acting as syllable nuclei in place of a vowel. This can be difficult for non-native speakers to pronounce, and Strč prst skrz krk ("Stick [your] finger down [your] throat"), a sentence without a single vowel, is a well-known Czech tongue twister.
Consonants
Manner | Labial | Alveolar | Post-alveolar | Palatal | Velar | Glottal
Nasal | m | n | | ɲ | |
Plosive | p, b | t, d | | c, ɟ | k, (ɡ) |
Affricate | | t͡s, (d͡z) | t͡ʃ, (d͡ʒ) | | |
Fricative | (f), v | s, z | ʃ, ʒ | | x | ɦ
Trill | | r, r̝ | | | |
Approximant | | l | | j | |
Vowels
left|thumb|A Czech vowel chart
Grammar
Czech grammar, like that of other Slavic languages, is fusional; its nouns, verbs, and adjectives are inflected by phonological processes to modify their meanings and grammatical functions, and the easily separable affixes characteristic of agglutinative languages are limited.
Czech inflection is complex and pervasive, inflecting for case, gender and number in nouns and for tense, aspect, mood, person and subject number and gender in verbs.
Parts of speech include adjectives, adverbs, numbers, interrogative words, prepositions, conjunctions and interjections. Adverbs are primarily formed by taking the final ý or í of an adjective and replacing it with e, ě, or o. Negative statements are formed by adding the affix ne- to the verb of a clause, with one exception: je (he, she or it is) becomes není.
Sentence and clause structure
Czech pronouns, nominative case
Person | Singular | Plural
1. | já | my
2. | ty, vy (formal) | vy
3. | on (masculine), ona (feminine), ono (neuter) | oni (masculine), ony (feminine), ona (neuter)
Because Czech uses grammatical case to convey word function in a sentence (instead of relying on word order, as English does), its word order is flexible. As a pro-drop language, in Czech an intransitive sentence can consist of only a verb; information about its subject is encoded in the verb. Enclitics (primarily auxiliary verbs and pronouns) must appear in the second syntactic slot of a sentence, after the first stressed unit. The first slot must contain a subject or object, a main form of a verb, an adverb or a conjunction (except for the light conjunctions a, "and", i, "and even" or ale, "but").
Czech syntax has a subject–verb–object sentence structure. In practice, however, word order is flexible and used for topicalization and focus. Although Czech has a periphrastic passive construction (like English), in colloquial style word-order changes are frequently used in place of the passive voice. For example, to change "Peter killed Paul" to "Paul was killed by Peter", the order of subject and object is inverted: Petr zabil Pavla ("Peter killed Paul") becomes "Paul, Peter killed" (Pavla zabil Petr). Pavla is in the accusative case, the grammatical object (in this case, the victim) of the verb.
A word at the end of a clause is typically emphasized, unless an upward intonation indicates that the sentence is a question:
Pes jí bagetu. – The dog eats the baguette (rather than eating something else).
Bagetu jí pes. – The dog eats the baguette (rather than someone else doing so).
Pes bagetu jí. – The dog eats the baguette (rather than doing something else to it).
Jí pes bagetu? – Does the dog eat the baguette? (emphasis ambiguous)
In portions of Bohemia (including Prague), questions such as Jí pes bagetu? without an interrogative word (such as co, "what" or kdo, "who") are intoned in a slow rise from low to high, quickly dropping to low on the last word or phrase.
In Czech syntax, adjectives precede nouns. Relative clauses are introduced by relativizers such as the adjective který, analogous to the English relative pronouns "which", "that", "who" and "whom". As with other adjectives, it is declined into the appropriate case (see Declension below) to match its associated noun, person and number. Relative clauses follow the noun they modify, and the following is a glossed example:
Czech: Chc-i navšt-ívit univerzit-u, na kter-ou chod-í Jan.
Gloss: want-1.SG visit-INF university-SG.ACC, on which-SG.F.ACC attend-3.SG John.SG.NOM
English: I want to visit the university that John attends.
Declension
In Czech, nouns and adjectives are declined into one of seven grammatical cases. Nouns are inflected to indicate their use in a sentence. A nominative–accusative language, Czech marks subject nouns with nominative case and object nouns with accusative case. The genitive case marks possessive nouns and some types of movement. The remaining cases (instrumental, locative, vocative and dative) indicate semantic relationships, such as secondary objects, movement or position (dative case) and accompaniment (instrumental case). An adjective's case agrees with that of the noun it describes. When Czech children learn their language's declension patterns, the cases are referred to by number:
No. | Ordinal name (Czech) | Full name (Czech) | Case | Main usage
1. | první pád | nominativ | nominative | Subjects
2. | druhý pád | genitiv | genitive | Belonging, movement away from something (or someone)
3. | třetí pád | dativ | dative | Indirect objects, movement toward something (or someone)
4. | čtvrtý pád | akuzativ | accusative | Direct objects
5. | pátý pád | vokativ | vocative | Addressing someone
6. | šestý pád | lokál | locative | Location
7. | sedmý pád | instrumentál | instrumental | Being used for a task; acting with someone (or something)
Some Czech grammatical texts order the cases differently, grouping the nominative and accusative (and the dative and locative) together because those declension patterns are often identical; this order accommodates learners with experience in other inflected languages, such as Latin or Russian. This order is nominative, accusative, genitive, dative, locative, instrumental and vocative.
Some prepositions require the nouns they modify to take a particular case. The cases assigned by each preposition are based on the physical (or metaphorical) direction, or location, conveyed by it. For example, od (from, away from) and z (out of, off) assign the genitive case. Other prepositions take one of several cases, with their meaning dependent on the case; na means "onto" or "for" with the accusative case, but "on" with the locative.
Examples of declension patterns (using prepositions) for a few nouns with adjectives follow. Only one plural example is given, since plural declension patterns are similar across genders.
Case | Big dog (m.) | Small cat (f.) | Hard wood (n.) | Young dragons (pl.)
Nom. | velký pes (big dog) | malá kočka (small cat) | tvrdé dřevo (hard wood) | mladí draci (young dragons)
Gen. | z velkého psa (from the big dog) | z malé kočky (from the small cat) | z tvrdého dřeva (from the hard wood) | z mladých draků (from the young dragons)
Dat. | k velkému psovi (to the big dog) | k malé kočce (to the small cat) | ke tvrdému dřevu (to the hard wood) | ke mladým drakům (to the young dragons)
Acc. | na velkého psa (for the big dog) | na malou kočku (for the small cat) | na tvrdé dřevo (for the hard wood) | na mladé draky (for the young dragons)
Voc. | velký pse! (big dog!) | malá kočko! (small cat!) | tvrdé dřevo! (hard wood!) | mladí draci! (young dragons!)
Loc. | o velkém psovi (about the big dog) | o malé kočce (about the small cat) | o tvrdém dřevě (about the hard wood) | o mladých dracích (about the young dragons)
Ins. | s velkým psem (with the big dog) | s malou kočkou (with the small cat) | s tvrdým dřevem (with the hard wood) | s mladými draky (with the young dragons)
This is a glossed example of a sentence using several cases:
Czech: Nes-l js-em krabic-i do dom-u se sv-ým přítel-em.
Gloss: carry-SG.M.PST be-1.SG box-SG.ACC into house-SG.GEN with own-SG.INS friend-SG.INS
English: I carried the box into the house with my friend.
Czech distinguishes three genders—masculine, feminine, and neuter—and the masculine gender is subdivided into animate and inanimate. With few exceptions, feminine nouns in the nominative case end in -a, -e, or -ost; neuter nouns in -o, -e, or -í, and masculine nouns in a consonant. Adjectives agree in gender and animacy (for masculine nouns in the accusative or genitive singular and the nominative plural) with the nouns they modify. The main effect of gender in Czech is the difference in noun and adjective declension, but other effects include past-tense verb endings: for example, dělal (he did, or made); dělala (she did, or made) and dělalo (it did, or made).
Nouns are also inflected for number, distinguishing between singular and plural. Typical of a Slavic language, Czech cardinal numbers one through four allow the nouns and adjectives they modify to take any case, but numbers five and above place these nouns and adjectives in the genitive case when the entire expression is in the nominative or accusative case. The Czech koruna is an example of this feature; it is shown here as the subject of a hypothetical sentence, and declined as genitive for numbers five and up.
English | Czech
one crown | jedna koruna
two crowns | dvě koruny
three crowns | tři koruny
four crowns | čtyři koruny
five crowns | pět korun
Numerical words decline for case and, for numbers one and two, for gender. Numbers one through five are shown below as examples; they include some of the most irregular forms among Czech numbers. The number one has declension patterns identical to those of the demonstrative pronoun to.
Case | 1 | 2 | 3 | 4 | 5
Nominative | jeden (male), jedna (female), jedno (neuter) | dva (male), dvě (female, neuter) | tři | čtyři | pět
Genitive | jednoho (male), jedné (female), jednoho (neuter) | dvou | tří | čtyř | pěti
Dative | jednomu (male), jedné (female), jednomu (neuter) | dvěma | třem | čtyřem | pěti
Accusative | jednoho (male an.), jeden (male in.), jednu (female), jedno (neuter) | dva (male), dvě (female, neuter) | tři | čtyři | pět
Locative | jednom (male), jedné (female), jednom (neuter) | dvou | třech | čtyřech | pěti
Instrumental | jedním (male), jednou (female), jedním (neuter) | dvěma | třemi | čtyřmi | pěti
Although Czech's main grammatical numbers are singular and plural, a vestigial dual number remains. Some nouns for paired body parts have a dual form: ruka (hand)—ruce; noha (leg)—nohy; oko (eye)—oči, and ucho (ear)—uši. While two of these nouns are neuter in their singular forms, all dual nouns are considered feminine. Czech has no standard declension pattern for dual nouns, and their gender is relevant to their associated adjectives and verbs.
Verb conjugation
Czech verb conjugation is less complex than noun and adjective declension because it codes for fewer categories. Verbs agree with their subjects in person (first, second or third) and number (singular or plural), and are conjugated for tense (past, present or future). For example, the conjugated verb mluvíme (we speak) is in the present tense and first-person plural; it is distinguished from other conjugations of the infinitive mluvit by its ending, me.
Typical of Slavic languages, Czech marks its verbs for one of two grammatical aspects: perfective and imperfective. Most verbs are part of inflected aspect pairs—for example, koupit (perfective) and kupovat (imperfective). Although the verbs' meaning is similar, in perfective verbs the action is completed and in imperfective verbs it is ongoing. This is distinct from past and present tense, and any Czech verb of either aspect can be conjugated into any of its three tenses. Aspect describes the state of the action at the time specified by the tense.
The verbs of most aspect pairs differ in one of two ways: by prefix or by suffix. In prefix pairs, the perfective verb has an added prefix—for example, the imperfective psát (to write, to be writing) compared with the perfective napsat (to write down, to finish writing). The most common prefixes are na-, o-, po-, s-, u-, vy-, z- and za-. In suffix pairs, a different infinitive ending is added to the perfective stem; for example, the perfective verbs koupit (to buy) and prodat (to sell) have the imperfective forms kupovat and prodávat. Imperfective verbs may undergo further morphology to make other imperfective verbs (iterative and frequentative forms), denoting repeated or regular action. The verb jít (to go) has the iterative form chodit (to go repeatedly) and the frequentative form chodívat (to go regularly).
Many verbs have only one aspect, and verbs describing continual states of being—být (to be), chtít (to want), moct (to be able to), ležet (to lie down, to be lying down)—have no perfective form. Conversely, verbs describing immediate states of change—for example, otěhotnět (to become pregnant) and nadchnout se (to become enthusiastic)—have no imperfective aspect.
Although Czech's use of present and future tense is largely similar to that of English, the language uses past tense to represent the English present perfect and past perfect; ona běžela could mean she ran, she has run or she had run.
+Conjugation of být in future tense
Person   Singular   Plural
1.       budu       budeme
2.       budeš      budete
3.       bude       budou
In some contexts, Czech's perfective present (which differs from the English present perfect) implies future action; in others, it connotes habitual action. As a result, the language has a proper future tense to minimize ambiguity. The future tense does not involve conjugating the verb describing an action to be undertaken in the future; instead, the future form of být (as shown in the table at left) is placed before the infinitive (for example, budu jíst—"I will eat").
The future conjugation of být is not combined with the infinitive být itself, so future-oriented expressions built on nouns, adjectives, or prepositions (rather than verbs) use the future form of být alone. "I will be happy" is translated as Budu šťastný (not Budu být šťastný).
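As an illustration of the periphrastic future just described, the sketch below (hypothetical code, assuming an imperfective verb, since perfective verbs express the future with their present forms) combines the future forms of být from the table above with an infinitive, and leaves být itself without a second infinitive.

```python
# Future forms of "být" taken from the table above, keyed by (person, plural).
FUTURE_BYT = {
    (1, False): "budu",   (2, False): "budeš",  (3, False): "bude",
    (1, True):  "budeme", (2, True):  "budete", (3, True):  "budou",
}

def czech_future(infinitive, person, plural=False):
    aux = FUTURE_BYT[(person, plural)]
    if infinitive == "být":
        return aux            # "Budu šťastný", not "Budu být šťastný"
    return f"{aux} {infinitive}"

print(czech_future("jíst", 1))                # budu jíst  ("I will eat")
print(czech_future("dělat", 3, plural=True))  # budou dělat
print(czech_future("být", 1))                 # budu
```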
+Conditional form of koupit (to buy)
Person   Singular          Plural
1.       koupil/a bych     koupili/y bychom
2.       koupil/a bys      koupili/y byste
3.       koupil/a/o by     koupili/y/a by
The infinitive form ends in t (archaically, ti). It is the form found in dictionaries and the form that follows auxiliary verbs (for example, můžu tě slyšet—"I can hear you"). Czech verbs have three grammatical moods: indicative, imperative and conditional. The imperative mood adds specific endings for each of three person–number categories: -Ø/-i/-ej for the second-person singular, -te/-ete/-ejte for the second-person plural and -me/-eme/-ejme for the first-person plural. The conditional mood is formed with a conditional particle (bych, bys, by, etc., as shown in the table above) placed after the past-tense form of the verb. This mood indicates possible events, expressed in English as "I would" or "I wish".
Most Czech verbs fall into one of five classes, which determine their conjugation patterns. The future tense of být would be classified as a Class I verb because of its endings. Examples of the present tense of each class and some common irregular verbs follow in the tables below:
             Class I    Class II    Class III    Class IV    Class V
Definition   to carry   to print    to wander    to suffer   to do, to make
Infinitive   nést       tisknout    putovat      trpět       dělat
1st p. sg.   nesu       tisknu      putuji       trpím       dělám
2nd p. sg.   neseš      tiskneš     putuješ      trpíš       děláš
3rd p. sg.   nese       tiskne      putuje       trpí        dělá
1st p. pl.   neseme     tiskneme    putujeme     trpíme      děláme
2nd p. pl.   nesete     tisknete    putujete     trpíte      děláte
3rd p. pl.   nesou      tisknou     putují       trpí        dělají
+Irregular verbs
Definition   to be   to want   to eat   to say
Infinitive   být     chtít     jíst     říct
1st p. sg.   jsem    chci      jím      řeknu
2nd p. sg.   jsi     chceš     jíš      řekneš
3rd p. sg.   je      chce      jí       řekne
1st p. pl.   jsme    chceme    jíme     řekneme
2nd p. pl.   jste    chcete    jíte     řeknete
3rd p. pl.   jsou    chtějí    jedí     řeknou
Orthography
thumb|right|300px|The handwritten Czech alphabet
Czech has one of the most phonemic orthographies of all European languages. Its thirty-one graphemes represent thirty sounds (in most dialects, i and y have the same sound), and it contains only one digraph: ch, which follows h in the alphabet. As a result, some of its characters have been used by phonologists to denote corresponding sounds in other languages. The characters q, w and x appear only in foreign words. The háček (ˇ) is used with certain letters to form new characters: š, ž, and č, as well as ň, ě, ř, ť, and ď (the latter five uncommon outside Czech). The last two letters are sometimes written with a comma above (ʼ, an abbreviated háček) because of their height. The character ó exists only in loanwords and onomatopoeia.
Unlike most European languages, Czech distinguishes vowel length; long vowels are indicated by an acute accent or, occasionally with ů, a ring. Long u is usually written ú at the beginning of a word or morpheme (úroda, neúrodný) and ů elsewhere, except for loanwords (skútr) or onomatopoeia (bú). Long vowels and ě are not considered separate letters.
Czech typographical features not associated with phonetics generally resemble those of most Latin European languages, including English. Proper nouns, honorifics, and the first letters of quotations are capitalized, and punctuation is typical of other Latin European languages. Writing of ordinal numerals is similar to most European languages. The Czech language uses a decimal comma instead of a decimal point. When writing a long number, spaces between every three digits (e.g. between hundreds and thousands) may be used for better orientation in handwritten texts, but, as in English, not within the decimal places. The number 1,234,567.8910 may be written as 1234567,8910 or 1 234 567,8910. Ordinal numbers (1st) use a point as in German (1.). In proper noun phrases (except personal names), only the first word is capitalized (Pražský hrad, Prague Castle).
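The number-writing conventions above can be illustrated with a small sketch. The helper below is hypothetical: it takes the number as a string so that trailing decimal digits are preserved, inserts optional spaces between groups of three digits in the integer part only, replaces the decimal point with a comma, and writes ordinals with a trailing point.

```python
def format_czech(number_string, group=True):
    """Format a number written with a decimal point into Czech convention."""
    integer, _, decimals = number_string.partition(".")
    if group:
        # optional spaces between every three digits of the integer part only
        rev = integer[::-1]
        integer = " ".join(rev[i:i + 3] for i in range(0, len(rev), 3))[::-1]
    return integer + ("," + decimals if decimals else "")

def format_czech_ordinal(n):
    return f"{n}."   # ordinal "1st" is written "1."

print(format_czech("1234567.8910"))               # 1 234 567,8910
print(format_czech("1234567.8910", group=False))  # 1234567,8910
print(format_czech_ordinal(1))                    # 1.
```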
Sample text
thumb|right|1846 sample of printed Czech
According to Article 1 of the United Nations Universal Declaration of Human Rights:
Czech: Všichni lidé se rodí svobodní a sobě rovní co do důstojnosti a práv. Jsou nadáni rozumem a svědomím a mají spolu jednat v duchu bratrství.
English: "All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood."
See also
Czech Centers
Czech name
Czech Sign Language
Swadesh list of Slavic words
Notes
References
External links
Ústav pro jazyk český – Czech Language Institute, the regulatory body for the Czech language
A GRAMMAR OF CZECH AS A FOREIGN LANGUAGE, written by Karel Tahal
Czech National Corpus
Czech Monolingual Online Dictionary
Czech Translation Dictionaries (Lexilogos)
Czech Swadesh list of basic vocabulary words (from Wiktionary's Swadesh-list appendix)
Basic Czech Phrasebook with Audio
Pimsleur Czech Comprehensive Course
Category:Languages of the Czech Republic
Category:Languages of Slovakia
Category:West Slavic languages
Vacuum
thumb|Pump to demonstrate vacuum
Vacuum is space void of matter. The word stems from the Latin adjective vacuus for "vacant" or "void". An approximation to such vacuum is a region with a gaseous pressure much less than atmospheric pressure. Physicists often discuss ideal test results that would occur in a perfect vacuum, which they sometimes simply call "vacuum" or free space, and use the term partial vacuum to refer to an actual imperfect vacuum as one might have in a laboratory or in space. In engineering and applied physics on the other hand, vacuum refers to any space in which the pressure is lower than atmospheric pressure. The Latin term in vacuo is used to describe an object that is surrounded by a vacuum.
The quality of a partial vacuum refers to how closely it approaches a perfect vacuum. Other things equal, lower gas pressure means higher-quality vacuum. For example, a typical vacuum cleaner produces enough suction to reduce air pressure by around 20%. Note that 1 inch of water is ≈0.0025 atm. Much higher-quality vacuums are possible. Ultra-high vacuum chambers, common in chemistry, physics, and engineering, operate below one trillionth (10⁻¹²) of atmospheric pressure (100 nPa), and can reach around 100 particles/cm3. Outer space is an even higher-quality vacuum, with the equivalent of just a few hydrogen atoms per cubic meter on average. This source estimates a density of for the Local Group. An atomic mass unit is , for roughly 40 atoms per cubic meter. According to modern understanding, even if all matter could be removed from a volume, it would still not be "empty" due to vacuum fluctuations, dark energy, transiting gamma rays, cosmic rays, neutrinos, and other phenomena in quantum physics. In 19th-century electromagnetism, vacuum was thought to be filled with a medium called aether. In modern particle physics, the vacuum state is considered the ground state of matter.
Vacuum has been a frequent topic of philosophical debate since ancient Greek times, but was not studied empirically until the 17th century. Evangelista Torricelli produced the first laboratory vacuum in 1643, and other experimental techniques were developed as a result of his theories of atmospheric pressure. A torricellian vacuum is created by filling a tall glass container closed at one end with mercury, and then inverting the container into a bowl to contain the mercury.How to Make an Experimental Geissler Tube, Popular Science monthly, February 1919, Unnumbered page. Bonnier Corporation
Vacuum became a valuable industrial tool in the 20th century with the introduction of incandescent light bulbs and vacuum tubes, and a wide array of vacuum technology has since become available. The recent development of human spaceflight has raised interest in the impact of vacuum on human health, and on life forms in general.
thumb|300px|right|A large vacuum chamber
Etymology
The word vacuum comes from Latin vacuum, noun use of the neuter of vacuus, meaning "empty", related to vacare, meaning "to be empty".
Vacuum is one of the few English words that contain two consecutive instances of the letter u.
Historical interpretation
Historically, there has been much dispute over whether such a thing as a vacuum can exist. Ancient Greek philosophers debated the existence of a vacuum, or void, in the context of atomism, which posited void and atom as the fundamental explanatory elements of physics. Following Plato, even the abstract concept of a featureless void faced considerable skepticism: it could not be apprehended by the senses, it could not, itself, provide additional explanatory power beyond the physical volume with which it was commensurate and, by definition, it was quite literally nothing at all, which cannot rightly be said to exist. Aristotle believed that no void could occur naturally, because the denser surrounding material continuum would immediately fill any incipient rarity that might give rise to a void.
In his Physics, book IV, Aristotle offered numerous arguments against the void: for example, that motion through a medium which offered no impediment could continue ad infinitum, there being no reason that something would come to rest anywhere in particular. Although Lucretius argued for the existence of vacuum in the first century BC and Hero of Alexandria tried unsuccessfully to create an artificial vacuum in the first century AD, it was European scholars such as Roger Bacon, Blasius of Parma and Walter Burley in the 13th and 14th century who focused considerable attention on these issues. Eventually following Stoic physics in this instance, scholars from the 14th century onward increasingly departed from the Aristotelian perspective in favor of a supernatural void beyond the confines of the cosmos itself, a conclusion widely acknowledged by the 17th century, which helped to segregate natural and theological concerns.
Almost two thousand years after Plato, René Descartes also proposed a geometrically based alternative theory of atomism, without the problematic nothing–everything dichotomy of void and atom. Although Descartes agreed with the contemporary position, that a vacuum does not occur in nature, the success of his namesake coordinate system and more implicitly, the spatial–corporeal component of his metaphysics would come to define the philosophically modern notion of empty space as a quantified extension of volume. By the ancient definition however, directional information and magnitude were conceptually distinct. With the acquiescence of Cartesian mechanical philosophy to the "brute fact" of action at a distance, and at length, its successful reification by force fields and ever more sophisticated geometric structure, the anachronism of empty space widened until "a seething ferment" of quantum activity in the 20th century filled the vacuum with a virtual pleroma.
The explanation of a clepsydra or water clock was a popular topic in the Middle Ages. Although a simple wine skin sufficed to demonstrate a partial vacuum, in principle, more advanced suction pumps had been developed in Roman Pompeii.Institute and Museum of the History of Science. Pompeii: Nature, Science, and Technology in a Roman Town
thumb|100px|left|Torricelli's mercury barometer produced one of the first sustained vacuums in a laboratory.
In the medieval Middle Eastern world, the physicist and Islamic scholar, Al-Farabi (Alpharabius, 872–950), conducted a small experiment concerning the existence of vacuum, in which he investigated handheld plungers in water. He concluded that air's volume can expand to fill available space, and he suggested that the concept of perfect vacuum was incoherent.Arabic and Islamic Natural Philosophy and Natural Science, Stanford Encyclopedia of Philosophy However, according to Nader El-Bizri, the physicist Ibn al-Haytham (Alhazen, 965–1039) and the Mu'tazili theologians disagreed with Aristotle and Al-Farabi, and they supported the existence of a void. Using geometry, Ibn al-Haytham mathematically demonstrated that place (al-makan) is the imagined three-dimensional void between the inner surfaces of a containing body. According to Ahmad Dallal, Abū Rayhān al-Bīrūnī also states that "there is no observable evidence that rules out the possibility of vacuum". The suction pump later appeared in Europe from the 15th century.Donald Routledge Hill, "Mechanical Engineering in the Medieval Near East", Scientific American, May 1991, pp. 64–69 (cf. Donald Routledge Hill, Mechanical Engineering)Donald Routledge Hill (1996), A History of Engineering in Classical and Medieval Times, Routledge, pp. 143 & 150–2.
Medieval thought experiments into the idea of a vacuum considered whether a vacuum was present, if only for an instant, between two flat plates when they were rapidly separated. There was much discussion of whether the air moved in quickly enough as the plates were separated, or, as Walter Burley postulated, whether a 'celestial agent' prevented the vacuum arising. The commonly held view that nature abhorred a vacuum was called horror vacui. Speculation that even God could not create a vacuum if he wanted to was shut down by the 1277 Paris condemnations of Bishop Etienne Tempier, which required there to be no restrictions on the powers of God, which led to the conclusion that God could create a vacuum if he so wished.
Jean Buridan reported in the 14th century that teams of ten horses could not pull open bellows when the port was sealed.
right|thumb|The Crookes tube, used to discover and study cathode rays, was an evolution of the Geissler tube.
The 17th century saw the first attempts to quantify measurements of partial vacuum. Evangelista Torricelli's mercury barometer of 1643 and Blaise Pascal's experiments both demonstrated a partial vacuum.
In 1654, Otto von Guericke invented the first vacuum pumpEncyclopædia Britannica:Otto von Guericke and conducted his famous Magdeburg hemispheres experiment, showing that teams of horses could not separate two hemispheres from which the air had been partially evacuated. Robert Boyle improved Guericke's design and with the help of Robert Hooke further developed vacuum pump technology. Thereafter, research into the partial vacuum lapsed until 1850 when August Toepler invented the Toepler Pump and Heinrich Geissler invented the mercury displacement pump in 1855, achieving a partial vacuum of about 10 Pa (0.1 Torr). A number of electrical properties become observable at this vacuum level, which renewed interest in further research.
While outer space provides the most rarefied example of a naturally occurring partial vacuum, the heavens were originally thought to be seamlessly filled by a rigid indestructible material called aether. Borrowing somewhat from the pneuma of Stoic physics, aether came to be regarded as the rarefied air from which it took its name, (see Aether (mythology)). Early theories of light posited a ubiquitous terrestrial and celestial medium through which light propagated. Additionally, the concept informed Isaac Newton's explanations of both refraction and of radiant heat.Robert Hogarth Patterson, Essays in History and Art 10, 1862 19th century experiments into this luminiferous aether attempted to detect a minute drag on the Earth's orbit. While the Earth does, in fact, move through a relatively dense medium in comparison to that of interstellar space, the drag is so minuscule that it could not be detected. In 1912, astronomer Henry Pickering commented: "While the interstellar absorbing medium may be simply the ether, [it] is characteristic of a gas, and free gaseous molecules are certainly there".
In 1930, Paul Dirac proposed a model of the vacuum as an infinite sea of particles possessing negative energy, called the Dirac sea. This theory helped refine the predictions of his earlier formulated Dirac equation, and successfully predicted the existence of the positron, confirmed two years later. Werner Heisenberg's uncertainty principle, formulated in 1927, predicts a fundamental limit within which instantaneous position and momentum, or energy and time, can be measured. This has far-reaching consequences for the "emptiness" of space between particles. In the late 20th century, so-called virtual particles that arise spontaneously from empty space were confirmed.
Classical field theories
The strictest criterion to define a vacuum is a region of space and time where all the components of the stress–energy tensor are zero. It means that this region is empty of energy and momentum, and by consequence, it must be empty of particles and other physical fields (such as electromagnetism) that contain energy and momentum.
Gravity
In general relativity, a vanishing stress-energy tensor implies, through Einstein field equations, the vanishing of all the components of the Ricci tensor. Vacuum does not mean that the curvature of space-time is necessarily flat: the gravitational field can still produce curvature in a vacuum in the form of tidal forces and gravitational waves (technically, these phenomena are the components of the Weyl tensor). The black hole (with zero electric charge) is an elegant example of a region completely "filled" with vacuum, but still showing a strong curvature.
Electromagnetism
In classical electromagnetism, the vacuum of free space, or sometimes just free space or perfect vacuum, is a standard reference medium for electromagnetic effects. Some authors refer to this reference medium as classical vacuum, a terminology intended to separate this concept from QED vacuum or QCD vacuum, where vacuum fluctuations can produce transient virtual particle densities and a relative permittivity and relative permeability that are not identically unity. (The QCD vacuum is paramagnetic, while the QED vacuum is diamagnetic.)
In the theory of classical electromagnetism, free space has the following properties:
Electromagnetic radiation travels, when unobstructed, at the speed of light, the defined value 299,792,458 m/s in SI units.
The superposition principle is always exactly true. For example, the electric potential generated by two charges is the simple addition of the potentials generated by each charge in isolation. The value of the electric field at any point around these two charges is found by calculating the vector sum of the two electric fields from each of the charges acting alone.
The permittivity and permeability are exactly the electric constant ε0 and magnetic constant μ0, respectively (in SI units), or exactly 1 (in Gaussian units).
The characteristic impedance (η) equals the impedance of free space Z0 ≈ 376.73 Ω.
The vacuum of classical electromagnetism can be viewed as an idealized electromagnetic medium with the constitutive relations in SI units:

D(r, t) = ε0 E(r, t)
H(r, t) = B(r, t) / μ0

relating the electric displacement field D to the electric field E and the magnetic field or H-field H to the magnetic induction or B-field B. Here r is a spatial location and t is time.
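As a quick numerical check of the constants quoted in the list above, the snippet below evaluates the standard classical relations c = 1/√(μ0 ε0) and Z0 = √(μ0/ε0) from the SI values of the electric and magnetic constants (the pre-2019 defined value μ0 = 4π × 10⁻⁷ H/m is assumed).

```python
import math

eps0 = 8.8541878128e-12    # electric constant, F/m
mu0 = 4 * math.pi * 1e-7   # magnetic constant, H/m (assumed pre-2019 defined value)

c = 1 / math.sqrt(mu0 * eps0)   # speed of light in vacuum
Z0 = math.sqrt(mu0 / eps0)      # characteristic impedance of free space

print(f"c  = {c:,.0f} m/s")   # 299,792,458 m/s
print(f"Z0 = {Z0:.2f} ohm")   # 376.73 ohm
```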
Quantum mechanics
thumb|350px|A video of an experiment showing vacuum fluctuations (in the red ring) amplified by spontaneous parametric down-conversion.
In quantum mechanics and quantum field theory, the vacuum is defined as the state (that is, the solution to the equations of the theory) with the lowest possible energy (the ground state of the Hilbert space). In quantum electrodynamics this vacuum is referred to as 'QED vacuum' to distinguish it from the vacuum of quantum chromodynamics, denoted as QCD vacuum. QED vacuum is a state with no matter particles (hence the name), and also no photons. As described above, this state is impossible to achieve experimentally. (Even if every matter particle could somehow be removed from a volume, it would be impossible to eliminate all the blackbody photons.) Nonetheless, it provides a good model for realizable vacuum, and agrees with a number of experimental observations as described next.
QED vacuum has interesting and complex properties. In QED vacuum, the electric and magnetic fields have zero average values, but their variances are not zero. As a result, QED vacuum contains vacuum fluctuations (virtual particles that hop into and out of existence), and a finite energy called vacuum energy. Vacuum fluctuations are an essential and ubiquitous part of quantum field theory. Some experimentally verified effects of vacuum fluctuations include spontaneous emission and the Lamb shift. Coulomb's law and the electric potential in vacuum near an electric charge are modified; in effect, the dielectric permittivity of the vacuum of classical electromagnetism is changed.
Theoretically, in QCD multiple vacuum states can coexist. The starting and ending of cosmological inflation is thought to have arisen from transitions between different vacuum states. For theories obtained by quantization of a classical theory, each stationary point of the energy in the configuration space gives rise to a single vacuum. String theory is believed to have a huge number of vacua — the so-called string theory landscape.
Outer space
left|thumb|350px|Outer space is not a perfect vacuum, but a tenuous plasma awash with charged particles, electromagnetic fields, and the occasional star.
Outer space has very low density and pressure, and is the closest physical approximation of a perfect vacuum. But no vacuum is truly perfect, not even in interstellar space, where there are still a few hydrogen atoms per cubic meter.
Stars, planets, and moons keep their atmospheres by gravitational attraction, and as such, atmospheres have no clearly delineated boundary: the density of atmospheric gas simply decreases with distance from the object. The Earth's atmospheric pressure falls to a tiny fraction of its sea-level value by the Kármán line, at an altitude of about 100 km, which is a common definition of the boundary with outer space. Beyond this line, isotropic gas pressure rapidly becomes insignificant when compared to radiation pressure from the Sun and the dynamic pressure of the solar winds, so the definition of pressure becomes difficult to interpret. The thermosphere in this range has large gradients of pressure, temperature and composition, and varies greatly due to space weather. Astrophysicists prefer to use number density to describe these environments, in units of particles per cubic centimetre.
But although it meets the definition of outer space, the atmospheric density within the first few hundred kilometers above the Kármán line is still sufficient to produce significant drag on satellites. Most artificial satellites operate in this region called low Earth orbit and must fire their engines every few days to maintain orbit. The drag here is low enough that it could theoretically be overcome by radiation pressure on solar sails, a proposed propulsion system for interplanetary travel. Planets are too massive for their trajectories to be significantly affected by these forces, although their atmospheres are eroded by the solar winds.
All of the observable universe is filled with large numbers of photons, the so-called cosmic background radiation, and quite likely a correspondingly large number of neutrinos. The current temperature of this radiation is about 3 K, or −270 degrees Celsius or −454 degrees Fahrenheit.
Measurement
The quality of a vacuum is indicated by the amount of matter remaining in the system, so that a high quality vacuum is one with very little matter left in it. Vacuum is primarily measured by its absolute pressure, but a complete characterization requires further parameters, such as temperature and chemical composition. One of the most important parameters is the mean free path (MFP) of residual gases, which indicates the average distance that molecules will travel between collisions with each other. As the gas density decreases, the MFP increases, and when the MFP is longer than the chamber, pump, spacecraft, or other objects present, the continuum assumptions of fluid mechanics do not apply. This vacuum state is called high vacuum, and the study of fluid flows in this regime is called particle gas dynamics. The MFP of air at atmospheric pressure is very short, 70 nm, but at 100 mPa the MFP of room temperature air is roughly 100 mm, which is on the order of everyday objects such as vacuum tubes. The Crookes radiometer turns when the MFP is larger than the size of the vanes.
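A rough version of this estimate can be reproduced with the standard kinetic-theory formula λ = k_B·T / (√2·π·d²·p). The sketch below assumes room temperature (300 K) and an effective molecular diameter for air of about 3.7 × 10⁻¹⁰ m; both values are assumptions for illustration, not figures from the source.

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
D_AIR = 3.7e-10       # assumed effective molecular diameter of air, m

def mean_free_path(pressure_pa, temperature_k=300.0, diameter_m=D_AIR):
    # lambda = k_B * T / (sqrt(2) * pi * d^2 * p)
    return K_B * temperature_k / (math.sqrt(2) * math.pi * diameter_m**2 * pressure_pa)

print(f"{mean_free_path(101325):.1e} m")   # ~6.7e-08 m, i.e. about 70 nm at 1 atm
print(f"{mean_free_path(0.1):.1e} m")      # ~6.8e-02 m, i.e. tens of millimetres at 100 mPa
```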
Vacuum quality is subdivided into ranges according to the technology required to achieve it or measure it. These ranges do not have universally agreed definitions, but a typical distribution is shown in the following table. As we travel into orbit, outer space and ultimately intergalactic space, the pressure varies by several orders of magnitude.
+Pressure ranges of each quality of vacuum in different units
Atmospheric pressure: 760 Torr (101.325 kPa; 1 atm)
Low vacuum: 760 to 25 Torr
Medium vacuum: from 25 Torr down to the high vacuum range
High vacuum
Ultra high vacuum
Extremely high vacuum
Outer space
Perfect vacuum: 0 Torr (0 Pa; 0 atm)
Atmospheric pressure is variable but standardized at 101.325 kPa (760 Torr).
Low vacuum, also called rough vacuum or coarse vacuum, is vacuum that can be achieved or measured with rudimentary equipment such as a vacuum cleaner and a liquid column manometer.
Medium vacuum is vacuum that can be achieved with a single pump, but the pressure is too low to measure with a liquid or mechanical manometer. It can be measured with a McLeod gauge, thermal gauge or a capacitive gauge.
High vacuum is vacuum where the MFP of residual gases is longer than the size of the chamber or of the object under test. High vacuum usually requires multi-stage pumping and ion gauge measurement. Some texts differentiate between high vacuum and very high vacuum.
Ultra high vacuum requires baking the chamber to remove trace gases, and other special procedures. British and German standards define ultra high vacuum as pressures below 10⁻⁶ Pa (10⁻⁸ Torr).BS 2951: Glossary of Terms Used in Vacuum Technology. Part I. Terms of General Application. British Standards Institution, London, 1969.DIN 28400: Vakuumtechnik Bennenungen und Definitionen, 1972.
Deep space is generally much more empty than any artificial vacuum. It may or may not meet the definition of high vacuum above, depending on what region of space and astronomical bodies are being considered. For example, the MFP of interplanetary space is smaller than the size of the Solar System, but larger than small planets and moons. As a result, solar winds exhibit continuum flow on the scale of the Solar System, but must be considered a bombardment of particles with respect to the Earth and Moon.
Perfect vacuum is an ideal state of no particles at all. It cannot be achieved in a laboratory, although there may be small volumes which, for a brief moment, happen to have no particles of matter in them. Even if all particles of matter were removed, there would still be photons and gravitons, as well as dark energy, virtual particles, and other aspects of the quantum vacuum.
Hard vacuum and soft vacuum are terms that are defined with a dividing line defined differently by different sources, such as 1 Torr, or 0.1 Torr, the common denominator being that a hard vacuum is a higher vacuum than a soft one.
Relative versus absolute measurement
Vacuum is measured in units of pressure, typically as a subtraction relative to ambient atmospheric pressure on Earth. But the amount of relative measurable vacuum varies with local conditions. On the surface of Jupiter, where ground level atmospheric pressure is much higher than on Earth, much higher relative vacuum readings would be possible. On the surface of the moon with almost no atmosphere, it would be extremely difficult to create a measurable vacuum relative to the local environment.
Similarly, much higher than normal relative vacuum readings are possible deep in the Earth's ocean. A submarine maintaining an internal pressure of 1 atmosphere submerged to a depth of 10 atmospheres (98 metres; a 9.8 metre column of seawater has the equivalent weight of 1 atm) is effectively a vacuum chamber keeping out the crushing exterior water pressures, though the 1 atm inside the submarine would not normally be considered a vacuum.
Therefore, to properly understand the following discussions of vacuum measurement, it is important that the reader assumes the relative measurements are being done on Earth at sea level, at exactly 1 atmosphere of ambient atmospheric pressure.
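The parenthetical figure above (a 9.8 metre column of seawater weighing about one atmosphere) can be checked with the hydrostatic relation p = ρ·g·h; the seawater density of 1025 kg/m³ used below is an assumed typical value.

```python
# A rough check of the 9.8 m seawater column figure using p = rho * g * h.
rho = 1025.0   # kg/m^3, assumed typical seawater density
g = 9.81       # m/s^2
h = 9.8        # m, depth quoted above

p = rho * g * h
print(f"{p:.0f} Pa = {p / 101325:.2f} atm")   # about 98,500 Pa, roughly 1 atm
```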
Measurements relative to 1 atm
right|thumb|A glass McLeod gauge, drained of mercury
The SI unit of pressure is the pascal (symbol Pa), but vacuum is often measured in torrs, named for Torricelli, an early Italian physicist (1608–1647). A torr is equal to the displacement of a millimeter of mercury (mmHg) in a manometer with 1 torr equaling 133.3223684 pascals above absolute zero pressure. Vacuum is often also measured on the barometric scale or as a percentage of atmospheric pressure in bars or atmospheres. Low vacuum is often measured in millimeters of mercury (mmHg) or pascals (Pa) below standard atmospheric pressure. "Below atmospheric" means that the absolute pressure is equal to the current atmospheric pressure minus the vacuum reading on the gauge.
In other words, a low vacuum gauge that reads, for example, 50.79 Torr is reporting a pressure 50.79 Torr below the current atmospheric pressure, not an absolute pressure of 50.79 Torr. Many inexpensive low vacuum gauges have a margin of error and may report a vacuum of 0 Torr, but in practice reaching pressures much below 1 torr generally requires a two-stage rotary vane or other medium-range vacuum pump.
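A minimal sketch of this bookkeeping, assuming the ambient reference is exactly 760 Torr and using the Torr-to-pascal factor quoted above:

```python
# A minimal sketch of gauge (relative) versus absolute pressure, assuming the
# ambient reference is exactly 1 atm = 760 Torr.
TORR_IN_PA = 133.3223684   # conversion factor quoted above
ATM_TORR = 760.0

def absolute_from_gauge(vacuum_reading_torr, ambient_torr=ATM_TORR):
    """A reading of 'X Torr below atmospheric' corresponds to this absolute pressure."""
    return ambient_torr - vacuum_reading_torr

def torr_to_pa(torr):
    return torr * TORR_IN_PA

reading = 50.79                        # example gauge reading, Torr below atmospheric
p_abs = absolute_from_gauge(reading)   # 709.21 Torr absolute
print(f"{p_abs:.2f} Torr = {torr_to_pa(p_abs):.0f} Pa")   # 709.21 Torr ≈ 94554 Pa
```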
Measuring instruments
Many devices are used to measure the pressure in a vacuum, depending on what range of vacuum is needed.

Hydrostatic gauges (such as the mercury column manometer) consist of a vertical column of liquid in a tube whose ends are exposed to different pressures. The column will rise or fall until its weight is in equilibrium with the pressure differential between the two ends of the tube. The simplest design is a closed-end U-shaped tube, one side of which is connected to the region of interest. Any fluid can be used, but mercury is preferred for its high density and low vapour pressure. Simple hydrostatic gauges can measure pressures ranging from 1 torr (100 Pa) to above atmospheric. An important variation is the McLeod gauge which isolates a known volume of vacuum and compresses it to multiply the height variation of the liquid column. The McLeod gauge can measure vacuums as high as 10⁻⁶ torr (0.1 mPa), which is the lowest direct measurement of pressure that is possible with current technology. Other vacuum gauges can measure lower pressures, but only indirectly by measurement of other pressure-controlled properties. These indirect measurements must be calibrated via a direct measurement, most commonly a McLeod gauge.
The kenotometer is a particular type of hydrostatic gauge, typically used in power plants using steam turbines. The kenotometer measures the vacuum in the steam space of the condenser, that is, the exhaust of the last stage of the turbine.

Mechanical or elastic gauges depend on a Bourdon tube, diaphragm, or capsule, usually made of metal, which will change shape in response to the pressure of the region in question. A variation on this idea is the capacitance manometer, in which the diaphragm makes up a part of a capacitor. A change in pressure leads to the flexure of the diaphragm, which results in a change in capacitance. These gauges are effective from 10³ torr to 10⁻⁴ torr, and beyond.

Thermal conductivity gauges rely on the fact that the ability of a gas to conduct heat decreases with pressure. In this type of gauge, a wire filament is heated by running current through it. A thermocouple or Resistance Temperature Detector (RTD) can then be used to measure the temperature of the filament. This temperature is dependent on the rate at which the filament loses heat to the surrounding gas, and therefore on the thermal conductivity. A common variant is the Pirani gauge which uses a single platinum filament as both the heated element and RTD. These gauges are accurate from 10 torr to 10⁻³ torr, but they are sensitive to the chemical composition of the gases being measured.

Ion gauges are used in ultrahigh vacuum. They come in two types: hot cathode and cold cathode. In the hot cathode version an electrically heated filament produces an electron beam. The electrons travel through the gauge and ionize gas molecules around them. The resulting ions are collected at a negative electrode. The current depends on the number of ions, which depends on the pressure in the gauge. Hot cathode gauges are accurate from 10⁻³ torr to 10⁻¹⁰ torr. The principle behind the cold cathode version is the same, except that electrons are produced in a discharge created by a high voltage electrical discharge. Cold cathode gauges are accurate from 10⁻² torr to 10⁻⁹ torr. Ionization gauge calibration is very sensitive to construction geometry, chemical composition of gases being measured, corrosion and surface deposits. Their calibration can be invalidated by activation at atmospheric pressure or low vacuum. The composition of gases at high vacuums will usually be unpredictable, so a mass spectrometer must be used in conjunction with the ionization gauge for accurate measurement.
Uses
thumb|right|Light bulbs contain a partial vacuum, usually backfilled with argon, which protects the tungsten filament
Vacuum is useful in a variety of processes and devices. Its first widespread use was in the incandescent light bulb to protect the filament from chemical degradation. The chemical inertness produced by a vacuum is also useful for electron beam welding, cold welding, vacuum packing and vacuum frying. Ultra-high vacuum is used in the study of atomically clean substrates, as only a very good vacuum preserves atomic-scale clean surfaces for a reasonably long time (on the order of minutes to days). High to ultra-high vacuum removes the obstruction of air, allowing particle beams to deposit or remove materials without contamination. This is the principle behind chemical vapor deposition, physical vapor deposition, and dry etching which are essential to the fabrication of semiconductors and optical coatings, and to surface science. The reduction of convection provides the thermal insulation of thermos bottles. Deep vacuum lowers the boiling point of liquids and promotes low temperature outgassing which is used in freeze drying, adhesive preparation, distillation, metallurgy, and process purging. The electrical properties of vacuum make electron microscopes and vacuum tubes possible, including cathode ray tubes. The elimination of air friction is useful for flywheel energy storage and ultracentrifuges.
thumb|left|This shallow water well pump reduces atmospheric air pressure inside the pump chamber. Atmospheric pressure extends down into the well, and forces water up the pipe into the pump to balance the reduced pressure. Above-ground pump chambers are only effective to a depth of approximately 9 meters due to the water column weight balancing the atmospheric pressure.
Vacuum-driven machines
Vacuums are commonly used to produce suction, which has an even wider variety of applications. The Newcomen steam engine used vacuum instead of pressure to drive a piston. In the 19th century, vacuum was used for traction on Isambard Kingdom Brunel's experimental atmospheric railway. Vacuum brakes were once widely used on trains in the UK but, except on heritage railways, they have been replaced by air brakes.
Manifold vacuum can be used to drive accessories on automobiles. The best-known application is the vacuum servo, used to provide power assistance for the brakes. Obsolete applications include vacuum-driven windscreen wipers and Autovac fuel pumps. Some aircraft instruments (Attitude Indicator (AI) and the Heading Indicator (HI)) are typically vacuum-powered, as protection against loss of all (electrically powered) instruments, since early aircraft often did not have electrical systems, and since there are two readily available sources of vacuum on a moving aircraft—the engine and an external venturi.
Vacuum induction melting uses electromagnetic induction within a vacuum.
Maintaining a vacuum in the Condenser is an important aspect of the efficient operation of steam turbines. A steam jet ejector or liquid ring vacuum pump is used for this purpose. The typical vacuum maintained in the Condenser steam space at the exhaust of the turbine (also called Condenser Backpressure) is in the range 5 to 15 kPa (absolute), depending on the type of condenser and the ambient conditions.
Outgassing
Evaporation and sublimation into a vacuum is called outgassing. All materials, solid or liquid, have a small vapour pressure, and their outgassing becomes important when the vacuum pressure falls below this vapour pressure. In man-made systems, outgassing has the same effect as a leak and can limit the achievable vacuum. Outgassing products may condense on nearby colder surfaces, which can be troublesome if they obscure optical instruments or react with other materials. This is of great concern to space missions, where an obscured telescope or solar cell can ruin an expensive mission.
The most prevalent outgassing product in man-made vacuum systems is water absorbed by chamber materials. It can be reduced by desiccating or baking the chamber, and removing absorbent materials. Outgassed water can condense in the oil of rotary vane pumps and reduce their net speed drastically if gas ballasting is not used. High vacuum systems must be clean and free of organic matter to minimize outgassing.
Ultra-high vacuum systems are usually baked, preferably under vacuum, to temporarily raise the vapour pressure of all outgassing materials and boil them off. Once the bulk of the outgassing materials are boiled off and evacuated, the system may be cooled to lower vapour pressures and minimize residual outgassing during actual operation. Some systems are cooled well below room temperature by liquid nitrogen to shut down residual outgassing and simultaneously cryopump the system.
Pumping and ambient air pressure
thumb|left|Deep wells have the pump chamber down in the well close to the water surface, or in the water. A "sucker rod" extends from the handle down the center of the pipe deep into the well to operate the plunger. The pump handle acts as a heavy counterweight against both the sucker rod weight and the weight of the water column standing on the upper plunger up to ground level.
Fluids cannot generally be pulled, so a vacuum cannot be created by suction. Suction can spread and dilute a vacuum by letting a higher pressure push fluids into it, but the vacuum has to be created first before suction can occur. The easiest way to create an artificial vacuum is to expand the volume of a container. For example, the diaphragm muscle expands the chest cavity, which causes the volume of the lungs to increase. This expansion reduces the pressure and creates a partial vacuum, which is soon filled by air pushed in by atmospheric pressure.
To continue evacuating a chamber indefinitely without requiring infinite growth, a compartment of the vacuum can be repeatedly closed off, exhausted, and expanded again. This is the principle behind positive displacement pumps, like the manual water pump for example. Inside the pump, a mechanism expands a small sealed cavity to create a vacuum. Because of the pressure differential, some fluid from the chamber (or the well, in our example) is pushed into the pump's small cavity. The pump's cavity is then sealed from the chamber, opened to the atmosphere, and squeezed back to a minute size.
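An idealized model of this cycle makes the behaviour concrete: if each stroke removes the gas occupying the pump cavity, an isothermal ideal gas in a leak-free, dead-volume-free system loses a constant fraction of its pressure per stroke. The numbers below (a 10 L chamber and a 1 L cavity) are arbitrary illustrative choices.

```python
def pressure_after_strokes(p0_pa, chamber_volume, cavity_volume, strokes):
    """Chamber pressure after a number of ideal expand-seal-exhaust strokes."""
    ratio = chamber_volume / (chamber_volume + cavity_volume)
    return p0_pa * ratio**strokes

# a 10 L chamber pumped with a 1 L cavity, starting from atmospheric pressure
for n in (0, 10, 50, 100):
    print(n, f"{pressure_after_strokes(101325, 10.0, 1.0, n):.3g} Pa")
```

In practice the decay levels off long before such a simple model suggests, for the reasons given below: outgassing, seal leakage, and the pump's own dead volume.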
thumb|A cutaway view of a turbomolecular pump, a momentum transfer pump used to achieve high vacuum
The above explanation is merely a simple introduction to vacuum pumping, and is not representative of the entire range of pumps in use. Many variations of the positive displacement pump have been developed, and many other pump designs rely on fundamentally different principles. Momentum transfer pumps, which bear some similarities to dynamic pumps used at higher pressures, can achieve much higher quality vacuums than positive displacement pumps. Entrapment pumps can capture gases in a solid or absorbed state, often with no moving parts, no seals and no vibration. None of these pumps are universal; each type has important performance limitations. They all share a difficulty in pumping low molecular weight gases, especially hydrogen, helium, and neon.
The lowest pressure that can be attained in a system is also dependent on many things other than the nature of the pumps. Multiple pumps may be connected in series, called stages, to achieve higher vacuums. The choice of seals, chamber geometry, materials, and pump-down procedures will all have an impact. Collectively, these are called vacuum technique. And sometimes, the final pressure is not the only relevant characteristic. Pumping systems differ in oil contamination, vibration, preferential pumping of certain gases, pump-down speeds, intermittent duty cycle, reliability, or tolerance to high leakage rates.
In ultra high vacuum systems, some very "odd" leakage paths and outgassing sources must be considered. The water absorption of aluminium and palladium becomes an unacceptable source of outgassing, and even the adsorptivity of hard metals such as stainless steel or titanium must be considered. Some oils and greases will boil off in extreme vacuums. The permeability of the metallic chamber walls may have to be considered, and the grain direction of the metallic flanges should be parallel to the flange face.
The lowest pressures currently achievable in a laboratory are about 10⁻¹³ torr (13 pPa). However, pressures as low as about 5 × 10⁻¹⁷ torr (6.7 fPa) have been indirectly measured in a 4 K cryogenic vacuum system. This corresponds to ≈100 particles/cm3.
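That last figure can be checked with the ideal-gas relation n = p/(k_B·T); at 4 K, 6.7 fPa does correspond to roughly a hundred particles per cubic centimetre. The second line of the sketch applies the same relation to the 13 pPa laboratory figure at an assumed 300 K.

```python
# A worked check using n = p / (k_B * T) to convert pressure to number density.
K_B = 1.380649e-23    # Boltzmann constant, J/K

def particles_per_cm3(pressure_pa, temperature_k):
    return pressure_pa / (K_B * temperature_k) / 1e6   # per m^3 -> per cm^3

print(f"{particles_per_cm3(6.7e-15, 4.0):.0f} per cm3")     # ~120 per cm3 at 4 K
print(f"{particles_per_cm3(1.3e-11, 300.0):.2e} per cm3")   # ~3e3 per cm3 at 13 pPa, 300 K
```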
Effects on humans and animals
thumb|This painting, An Experiment on a Bird in the Air Pump by Joseph Wright of Derby, 1768, depicts an experiment performed by Robert Boyle in 1660.
Humans and animals exposed to vacuum will lose consciousness after a few seconds and die of hypoxia within minutes, but the symptoms are not nearly as graphic as commonly depicted in media and popular culture. The reduction in pressure lowers the temperature at which blood and other body fluids boil, but the elastic pressure of blood vessels ensures that this boiling point remains above the internal body temperature of 37 °C. Although the blood will not boil, the formation of gas bubbles in bodily fluids at reduced pressures, known as ebullism, is still a concern. The gas may bloat the body to twice its normal size and slow circulation, but tissues are elastic and porous enough to prevent rupture. Swelling and ebullism can be restrained by containment in a flight suit. Shuttle astronauts wore a fitted elastic garment called the Crew Altitude Protection Suit (CAPS) which prevents ebullism at pressures as low as 2 kPa (15 Torr). Rapid boiling will cool the skin and create frost, particularly in the mouth, but this is not a significant hazard.
Animal experiments show that rapid and complete recovery is normal for exposures shorter than 90 seconds, while longer full-body exposures are fatal and resuscitation has never been successful. A study by NASA on eight chimpanzees found all of them survived two and a half minute exposures to vacuum.http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19650027167.pdf There is only a limited amount of data available from human accidents, but it is consistent with animal data. Limbs may be exposed for much longer if breathing is not impaired. Robert Boyle was the first to show in 1660 that vacuum is lethal to small animals.
An experiment indicates that plants are able to survive in a low pressure environment (1.5 kPa) for about 30 minutes.
During 1942, in one of a series of experiments on human subjects for the Luftwaffe, the Nazi regime experimented on prisoners in Dachau concentration camp by exposing them to low pressure.
Cold or oxygen-rich atmospheres can sustain life at pressures much lower than atmospheric, as long as the density of oxygen is similar to that of standard sea-level atmosphere. The colder air temperatures found at altitudes of up to 3 km generally compensate for the lower pressures there. Above this altitude, oxygen enrichment is necessary to prevent altitude sickness in humans that did not undergo prior acclimatization, and spacesuits are necessary to prevent ebullism above 19 km. Most spacesuits use only 20 kPa (150 Torr) of pure oxygen. This pressure is high enough to prevent ebullism, but decompression sickness and gas embolisms can still occur if decompression rates are not managed.
Rapid decompression can be much more dangerous than vacuum exposure itself. Even if the victim does not hold his or her breath, venting through the windpipe may be too slow to prevent the fatal rupture of the delicate alveoli of the lungs. Eardrums and sinuses may be ruptured by rapid decompression, soft tissues may bruise and seep blood, and the stress of shock will accelerate oxygen consumption leading to hypoxia. Injuries caused by rapid decompression are called barotrauma. A pressure drop of 13 kPa (100 Torr), which produces no symptoms if it is gradual, may be fatal if it occurs suddenly.
Some extremophile microorganisms, such as tardigrades, can survive vacuum for a period of days or weeks.
Examples
Pressure (Pa or kPa) | Pressure (Torr) | Mean free path | Molecules per cm3
Standard atmosphere, for comparison: 101.325 kPa | 760 | 66 nm (computed using "1976 Standard Atmosphere Properties" calculator, retrieved 2012-01-28) | –
Intense hurricane: approx. 87 to 95 kPa | 650 to 710 | – | –
Vacuum cleaner: approximately 80 kPa | 600 | 70 nm | 10¹⁹
Steam turbine exhaust (condenser backpressure): 9 kPa | – | – | –
Liquid ring vacuum pump: approximately 3.2 kPa | 24 | 1.75 μm | 10¹⁸
Mars atmosphere: 1.155 kPa to 0.03 kPa (mean 0.6 kPa) | 8.66 to 0.23 | – | –
Freeze drying: 100 to 10 Pa | 1 to 0.1 | 100 μm to 1 mm | 10¹⁶ to 10¹⁵
Incandescent light bulb: 10 to 1 Pa | 0.1 to 0.01 | 1 mm to 1 cm | 10¹⁵ to 10¹⁴
Thermos bottle: 1 to 0.01 Pa | 10⁻² to 10⁻⁴ | 1 cm to 1 m | 10¹⁴ to 10¹²
Earth thermosphere: from 1 Pa | 10⁻² to 10⁻⁹ | 1 cm to 100 km | 10¹⁴ to 10⁷
Vacuum tube: – | 10⁻⁷ to 10⁻¹⁰ | 1 to 1,000 km | 10⁹ to 10⁶
Cryopumped MBE chamber: – | 10⁻⁹ to 10⁻¹¹ | 100 to 10,000 km | 10⁷ to 10⁵
Pressure on the Moon: – | approximately 10⁻¹¹ | 10,000 km | –
Interplanetary space: – | – | – | 11
Interstellar space: – | – | – | 1
Intergalactic space: – | – | – | 10⁻⁶
See also
Notes
External links
VIDEO on the nature of vacuum by Canadian astrophysicist Doctor P
The Foundations of Vacuum Coating Technology
American Vacuum Society
Journal of Vacuum Science and Technology A
Journal of Vacuum Science and Technology B
FAQ on explosive decompression and vacuum exposure.
Discussion of the effects on humans of exposure to hard vacuum.
Vacuum, Production of Space
"Much Ado About Nothing" by Professor John D. Barrow, Gresham College
Free pdf copy of The Structured Vacuum – thinking about nothing by Johann Rafelski and Berndt Muller (1985) ISBN 3-87144-889-3.
Category:Concepts in physics
Category:Industrial processes
Category:Nothing
Category:Gases
Category:Articles containing video clips
Category:Latin words and phrases
Central Intelligence Agency
The Central Intelligence Agency (CIA) is a civilian foreign intelligence service of the United States federal government, tasked with gathering, processing and analyzing national security information from around the world, primarily through the use of human intelligence (HUMINT). As one of the principal members of the U.S. Intelligence Community (IC), the CIA reports to the Director of National Intelligence and is primarily focused on providing intelligence for the President and Cabinet.
Unlike the Federal Bureau of Investigation (FBI), which is a domestic security service, the CIA has no law enforcement function and is mainly focused on overseas intelligence gathering, with only limited domestic intelligence collection. Though it is not the only U.S. government agency specializing in HUMINT, the CIA serves as the national manager for coordination of HUMINT activities across the entire intelligence community. Moreover, the CIA is the only agency authorized by law to carry out and oversee covert action at the behest of the President, unless the President determines that another agency is better suited for carrying out such action. It can, for example, exert foreign political influence through its tactical divisions, such as the Special Activities Division.
Before the Intelligence Reform and Terrorism Prevention Act, the CIA Director concurrently served as the head of the Intelligence Community; today the CIA is organized under the Director of National Intelligence (DNI). Despite transferring some of its powers to the DNI, the CIA has grown in size as a result of the September 11 terrorist attacks. In 2013, The Washington Post reported that in fiscal year 2010, the CIA had the largest budget of all IC agencies, exceeding previous estimates.
The CIA has increasingly expanded its roles, including covert paramilitary operations. One of its largest divisions, the Information Operations Center (IOC), has shifted focus from counter-terrorism to offensive cyber-operations. While the CIA has had some recent accomplishments, such as locating Osama bin Laden and taking part in the successful Operation Neptune Spear, it has also been involved in controversial programs such as extraordinary rendition and enhanced interrogation techniques.
Purpose
When the CIA was created, its purpose was to create a clearinghouse for foreign policy intelligence and analysis. Today its primary purpose is to collect, analyze, evaluate, and disseminate foreign intelligence, and to perform covert actions.
According to its fiscal 2013 budget, the CIA has five priorities:
Counterterrorism, the top priority, given the ongoing Global War on Terror.
Nonproliferation of nuclear and other weapons of mass destruction, with North Korea described as perhaps the most difficult target.
Warning/informing American leaders of important overseas events, with Pakistan described as an "intractable target".
Counterintelligence, with China, Russia, Iran, Cuba, and Israel described as "priority" targets.
Cyber intelligence.
Organizational structure
thumb|Mike Pompeo, the current director of the Central Intelligence Agency
thumb|280px|Chart showing the organization of the Central Intelligence Agency.
The CIA has an executive office and five major directorates:
The Directorate of Digital Innovation
The Directorate of Analysis
The Directorate of Operations
The Directorate of Support
The Directorate of Science and Technology
Executive Office
The Director of the Central Intelligence Agency (D/CIA) reports directly to the Director of National Intelligence (DNI); in practice, the CIA director interfaces with the DNI, Congress, and the White House, while the Deputy Director is the internal executive of the CIA.
The Executive Office also supports the U.S. military by providing it with information it gathers, receiving information from military intelligence organizations, and cooperating on field activities. The Executive Director is in charge of the day-to-day operation of the CIA. Each branch of the military service has its own Director. The Associate Director of military affairs, a senior military officer, manages the relationship between the CIA and the Unified Combatant Commands, who produce and deliver to the CIA regional/operational intelligence and consume national intelligence produced by the CIA.https://www.cia.gov/about-cia/leadership/ciaorgchart.jpg/image.jpg
Directorate of Analysis
thumb|240px|Aerial view of the Central Intelligence Agency headquarters, Langley, Virginia
The Directorate of Analysis produces all-source intelligence analysis on key foreign and transnational issues. It has four regional analytic groups, six groups for transnational issues, and three that focus on policy, collection, and staff support. There is an office dedicated to Iraq, and regional analytical offices covering Near Eastern and South Asian analysis; Russian and European analysis; and Asian Pacific, Latin American, and African analysis.
Directorate of Operations
The Directorate of Operations is responsible for collecting foreign intelligence, mainly from clandestine HUMINT sources, and covert action. The name reflects its role as the coordinator of human intelligence activities between other elements of the wider U.S. intelligence community with their own HUMINT operations. This Directorate was created in an attempt to end years of rivalry over influence, philosophy and budget between the United States Department of Defense (DOD) and the CIA. In spite of this, the Department of Defense recently organized its own global clandestine intelligence service, the Defense Clandestine Service (DCS), under the Defense Intelligence Agency (DIA).
This Directorate is organized by geographic regions and issues, but at present the precise organization of this Directorate is classified.
Directorate of Science and Technology
The Directorate of Science & Technology was established to research, create, and manage technical collection disciplines and equipment. Many of its innovations were transferred to other intelligence organizations, or, as they became more overt, to the military services.
For example, the development of the U-2 high-altitude reconnaissance aircraft was done in cooperation with the United States Air Force. The U-2's original mission was clandestine imagery intelligence over denied areas such as the Soviet Union.Pocock, Chris, "50 Years of the U-2: The Complete Illustrated History of the 'Dragon Lady' ", Schiffer Publishing, Ltd., Atglen, Pennsylvania, Library of Congress card number 2005927577, ISBN 0-7643-2346-6, page 404. It was subsequently provided with signals intelligence and measurement and signature intelligence capabilities, and is now operated by the Air Force.
Imagery intelligence collected by the U-2 and reconnaissance satellites was analyzed by a DS&T organization called the National Photointerpretation Center (NPIC), which had analysts from both the CIA and the military services. Subsequently, NPIC was transferred to the National Geospatial-Intelligence Agency (NGA).
Directorate of Support
The Directorate of Support has organizational and administrative functions to significant units including:
The Office of Security
The Office of Communications
The Office of Information Technology
Training
The CIA established its first training facility, the Office of Training and Education, in 1950. Following the end of the Cold War, the CIA's training budget was slashed, which had a negative effect on employee retention. In response, Director of Central Intelligence George Tenet established CIA University in 2002. CIA University holds between 200 and 300 courses each year, training both new hires and experienced intelligence officers, as well as CIA support staff. The facility works in partnership with the National Intelligence University, and includes the Sherman Kent School for Intelligence Analysis, the Directorate of Analysis' component of the university.
For later-stage training of student operations officers, there is at least one classified training area at Camp Peary, near Williamsburg, Virginia. Students are selected, and their progress evaluated, in ways derived from the OSS, published as the book Assessment of Men: Selection of Personnel for the Office of Strategic Services. Additional mission training is conducted at Harvey Point, North Carolina.
The primary training facility for the Office of Communications is Warrenton Training Center, located near Warrenton, Virginia. The facility was established in 1951 and has been used by the CIA since at least 1955.
Budget
Details of the overall United States intelligence budget are classified. Under the Central Intelligence Agency Act of 1949, the Director of Central Intelligence is the only federal government employee who can spend "un-vouchered" government money. The government has disclosed a total figure for all non-military intelligence spending since 2007; the fiscal 2013 figure is $52.6 billion. According to the 2013 mass surveillance disclosures, the CIA's fiscal 2013 budget is $14.7 billion, 28% of the total and almost 50% more than the budget of the National Security Agency. The CIA's HUMINT budget is $2.3 billion, the SIGINT budget is $1.7 billion, and spending for security and logistics of CIA missions is $2.5 billion. "Covert action programs", including a variety of activities such as the CIA's drone fleet and anti-Iranian nuclear program activities, account for $2.6 billion.
There were numerous previous attempts to obtain general information about the budget. As a result, it was revealed that the CIA's annual budget in fiscal year 1963 was US$550 million, and the overall intelligence budget in FY 1997 was US$26.6 billion. There have been accidental disclosures; for instance, Mary Margaret Graham, a former CIA official and deputy director of national intelligence for collection in 2005, said that the annual intelligence budget was $44 billion, and in 1994 Congress accidentally published a budget of $43.4 billion (in 2012 dollars) for the non-military National Intelligence Program, including $4.8 billion for the CIA. After the Marshall Plan was approved, appropriating $13.7 billion over five years, 5% of those funds, or $685 million, were made available to the CIA.Legacy of Ashes, p.28
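The figures quoted above are internally consistent. A minimal sketch in Python, using only the dollar amounts given in this section (the script and its variable names are illustrative, not drawn from any cited source), reproduces the stated 28% budget share and the $685 million Marshall Plan allocation:

 # Minimal sketch verifying the budget arithmetic quoted above.
 # All dollar figures are taken directly from the text; nothing else is assumed.
 
 def share(part: float, whole: float) -> float:
     """Return `part` as a percentage of `whole`."""
     return 100.0 * part / whole
 
 cia_fy2013 = 14.7      # CIA budget, FY 2013, in billions of dollars
 total_fy2013 = 52.6    # total non-military intelligence budget, FY 2013, in billions
 
 marshall_plan = 13.7   # Marshall Plan appropriation, in billions
 cia_fraction = 0.05    # share reportedly made available to the CIA
 
 print(f"CIA share of FY 2013 budget: {share(cia_fy2013, total_fy2013):.0f}%")            # ~28%
 print(f"CIA share of Marshall Plan funds: ${marshall_plan * cia_fraction * 1000:.0f} million")  # $685 million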
Employees
Polygraphing
Robert Baer, a CNN analyst and former CIA operative, stated that normally a CIA employee undergoes a polygraph examination every three to four years."Exclusive: Dozens of CIA operatives on the ground during Benghazi attack." CNN. August 1, 2013. Retrieved on August 2, 2013.
Relationship with other intelligence agencies
The CIA acts as the primary US HUMINT and general analytic agency, under the Director of National Intelligence, who directs or coordinates the 16 member organizations of the United States Intelligence Community. In addition, it obtains information from other U.S. government intelligence agencies, commercial information sources, and foreign intelligence services.
U.S. agencies
CIA employees form part of the National Reconnaissance Office (NRO) workforce, originally created as a joint office of the CIA and US Air Force to operate the spy satellites of the US military.
The Special Collection Service is a joint CIA and National Security Agency (NSA) office that conducts clandestine electronic surveillance in embassies and hostile territory throughout the world.
Foreign intelligence services
The role and functions of the CIA are roughly equivalent to those of the United Kingdom's Secret Intelligence Service (the SIS or MI6), the Australian Secret Intelligence Service (ASIS), the French foreign intelligence service Direction Générale de la Sécurité Extérieure (DGSE), the Russian Foreign Intelligence Service (Sluzhba Vneshney Razvedki) (SVR), the Chinese Ministry of State Security (MSS), the Indian Research and Analysis Wing (RAW), the Pakistani Inter-Services Intelligence (ISI), the Egyptian General Intelligence Service, and Israel's Mossad. While the preceding agencies both collect and analyze information, some agencies, such as the U.S. State Department's Bureau of Intelligence and Research, are purely analytical.
The closest links of the U.S. IC to other foreign intelligence agencies are to Anglophone countries: Australia, Canada, New Zealand, and the United Kingdom. There is a special communications marking that signals that intelligence-related messages can be shared with these four countries. An indication of the United States' close operational cooperation is the creation of a new message distribution label within the main U.S. military communications network. Previously, the marking of NOFORN (i.e., No Foreign Nationals) required the originator to specify which, if any, non-U.S. countries could receive the information. A new handling caveat, USA/AUS/CAN/GBR/NZL Five Eyes, used primarily on intelligence messages, gives an easier way to indicate that the material can be shared with Australia, Canada, United Kingdom, and New Zealand.
The division of the German Bundesnachrichtendienst known as "Verbindungsstelle 61" is tasked with maintaining liaison with the CIA office in Wiesbaden. Ireland's Directorate of Military Intelligence liaises with the CIA, although it is not a member of the Five Eyes.
History
thumb|The 113 stars on the CIA Memorial Wall in the original CIA headquarters, each representing a CIA officer killed in action
The Central Intelligence Agency was created on July 26, 1947, when Harry S. Truman signed the National Security Act into law. A major impetus for the creation of the CIA was the unforeseen attack on Pearl Harbor. In addition, towards the end of World War II the U.S. government felt the need for a group to coordinate intelligence efforts.
Immediate predecessors
The success of the British Commandos during World War II prompted U.S. President Franklin D. Roosevelt to authorize the creation of an intelligence service modeled after the British Secret Intelligence Service (MI6) and Special Operations Executive. This led to the creation of the Office of Strategic Services (OSS). On September 20, 1945, shortly after the end of World War II, Harry S. Truman signed an executive order dissolving the OSS, and by October 1945 its functions had been divided between the Departments of State and War. The division lasted only a few months. The first public mention of the "Central Intelligence Agency" appeared in a command-restructuring proposal presented by James Forrestal and Arthur Radford to the U.S. Senate Military Affairs Committee at the end of 1945. Despite opposition from the military establishment, the United States Department of State and the Federal Bureau of Investigation (FBI), Truman established the National Intelligence Authority"The Role of Intelligence" (1965). Congress and the Nation 1945-1964: a review of government and politics in the postwar years. Washington, DC: Congressional Quarterly Service. p.306. in January 1946, which was the direct predecessor of the CIA. Its operational extension was known as the Central Intelligence Group (CIG).
National Security Act
Lawrence Houston, head counsel of the SSU, the CIG, and, later, the CIA, was principal draftsman of the National Security Act of 1947,https://www.cia.gov/mobile/offices-of-cia/general-counsel/history-of-the-office.html which dissolved the NIA and the CIG and established both the National Security Council and the Central Intelligence Agency. In 1949 Houston helped to draft the Central Intelligence Agency Act (Public Law 81-110), which authorized the agency to use confidential fiscal and administrative procedures and exempted it from most limitations on the use of federal funds. It also exempted the CIA from having to disclose its "organization, functions, officials, titles, salaries, or numbers of personnel employed." It created the program "PL-110" to handle defectors and other "essential aliens" who fell outside normal immigration procedures.
Intelligence vs. action
At the outset of the Korean War the CIA still had only a few thousand employees, a thousand of whom worked in analysis. Intelligence primarily came from the Office of Reports and Estimates, which drew its reports from a daily take of State Department telegrams, military dispatches, and other public documents. The CIA still lacked its own intelligence-gathering abilities.http://www.foia.cia.gov/sites/default/files/document_conversions/44/2010-05-01.pdf On August 21, 1950, shortly after the invasion of South Korea, Truman announced Walter Bedell Smith as the new Director of the CIA to correct what was seen as a grave failure of intelligence.
The CIA had different demands placed on it by the different bodies overseeing it. Truman wanted a centralized group to organize the information that reached him,https://www.cia.gov/library/center-for-the-study-of-intelligence/kent-csi/vol38no1/pdf/v38i1a06p.pdf the Department of Defense wanted military intelligence and covert action, and the State Department wanted to create global political change favorable to the US. Thus the two areas of responsibility for the CIA were covert action and covert intelligence. One of the main targets for intelligence gathering was the Soviet Union, which had also been a priority of the CIA's predecessors.
US Army general Hoyt Vandenberg, the CIG's second director, created the Office of Special Operations (OSO), as well as the Office of Reports and Estimates (ORE). Initially the OSO was tasked with spying and subversion overseas with a budget of $15 million, the largesse of a small number of patrons in Congress. Vandenberg's goals were much like the ones set out by his predecessor: finding out "everything about the Soviet forces in Eastern and Central Europe - their movements, their capabilities, and their intentions."
On June 18, 1948, the National Security Council issued Directive 10/2, calling for covert action against the USSR and granting the authority to carry out covert operations against "hostile foreign states or groups" that could, if needed, be denied by the U.S. government. To this end, the Office of Policy Coordination was created inside the new CIA. The OPC was unusual: Frank Wisner, the head of the OPC, answered not to the CIA Director but to the secretaries of defense and state and to the NSC, and the OPC's actions were a secret even from the head of the CIA. Most CIA stations had two station chiefs, one working for the OSO and one working for the OPC.
The early track record of the CIA was poor, with the agency unable to provide sufficient intelligence about the Soviet takeovers of Romania and Czechoslovakia, the Soviet blockade of Berlin, and the Soviet atomic bomb project. In particular, the agency failed to predict the Chinese entry into the Korean War with 300,000 troops."The Role of Intelligence" (1965) Congress and the Nation 1945-1964. p.306 The famous double agent Kim Philby was the British liaison to American Central Intelligence. Through him the CIA coordinated hundreds of airdrops inside the Iron Curtain, all compromised by Philby. Arlington Hall, the nerve center of American cryptanalysis, was compromised by Bill Weisband, a Russian translator and Soviet spy.
However, the CIA was successful in influencing the 1948 Italian election in favor of the Christian Democrats.American visions of the Netherlands East Indies/Indonesia: US foreign policy and Indonesian nationalism, 1920–1949, Frances Gouda, Thijs Brocades Zaalberg. Amsterdam University Press, 2002. ISBN 90-5356-479-9, ISBN 978-90-5356-479-0. p. 365 The $200 million Exchange Stabilization Fund, earmarked for the reconstruction of Europe, was used to pay wealthy Americans of Italian heritage. Cash was then distributed to Catholic Action, the Vatican's political arm, and directly to Italian politicians. This tactic of using covert funds to influence elections was frequently repeated in subsequent years.
Korean War
At the beginning of the Korean War, CIA officer Hans Tofte claimed to have turned a thousand North Korean expatriates into a guerrilla force tasked with infiltration, guerrilla warfare, and pilot rescue. In 1952 the CIA sent 1,500 more expatriate agents north. Seoul station chief Albert Haney would openly celebrate the capabilities of those agents and the information they sent. In September 1952 Haney was replaced by John Limond Hart, a veteran of operations in Europe with bitter memories of misinformation. Hart was suspicious of the parade of successes reported by Tofte and Haney and launched an investigation which determined that the entirety of the information supplied by the Korean sources was false or misleading. After the war, internal reviews by the CIA would corroborate Hart's findings. The CIA's Seoul station had 200 officers, but not a single speaker of Korean. Hart reported to Washington that Seoul station was hopeless and could not be salvaged. Loftus Becker, Deputy Director of Intelligence, was sent personally to tell Hart that the CIA had to keep the station open to save face. Becker returned to Washington and pronounced the situation "hopeless", concluding after touring the CIA's Far East operations that the CIA's ability to gather intelligence in the Far East was "almost negligible". He then resigned. Air Force Colonel James Kallis stated that CIA director Allen Dulles continued to praise the CIA's Korean force, despite knowing that they were under enemy control. After China entered the war, the CIA attempted a number of subversive operations in the country, all of which failed due to the presence of double agents. Millions of dollars were spent in these efforts. These included a team of young CIA officers airdropped into China who were ambushed, and CIA funds being used to set up a global heroin empire in Burma's Golden Triangle following a betrayal by another double agent.
1953 Iranian coup d'état
In 1951, Mohammad Mosaddegh, a member of the National Front, was elected Iranian prime minister. As prime minister, he nationalized the Anglo-Iranian Oil Company, which his predecessor had supported. The nationalization of the British-funded Iranian oil industry, including the largest oil refinery in the world, was disastrous for Mosaddegh. A British naval embargo shuttered the British oil facilities, which Iran had no skilled workers to operate. In 1952 Mosaddegh resisted the royal refusal to approve his Minister of War and resigned in protest. The National Front took to the streets in protest. Fearing a loss of control, the military pulled its troops back five days later, and the Shah gave in to Mosaddegh's demands. Mosaddegh quickly replaced military leaders loyal to the Shah with those loyal to him, giving him personal control over the military. Given six months of emergency powers, Mosaddegh unilaterally passed legislation. When that six months expired, his powers were extended for another year. In 1953 Mosaddegh dismissed parliament and assumed dictatorial powers. This power grab triggered the Shah to exercise his constitutional right to dismiss Mosaddegh. Mosaddegh refused to step down, and the Shah fled the country. As was typical of CIA operations, the intervention was preceded by operational leaks: on July 7, 1953, the CIA's intended victim himself announced the plot over the radio. On August 19, a CIA-paid mob sparked what a US embassy officer called "an almost spontaneous revolution", but Mosaddegh was protected by his new inner military circle, and the CIA had been unable to gain influence within the Iranian military. Their chosen man, former general Fazlollah Zahedi, had no troops to call on. General McClure, commander of the American military assistance advisory group, would earn his second star by buying the loyalty of the Iranian officers he was training. An attack on his house would force Mosaddegh to flee. He surrendered the next day, and his government came to an end. The end result would be a 60/40 oil profit split in favor of Iran (possibly similar to agreements with Saudi Arabia and Venezuela).
1954 Guatemalan coup d'état
The return of the Shah to power, and the impression, cultivated by Allen Dulles, that an effective CIA had been able to guide that nation to friendly and stable relations with the West, triggered planning for Operation PBSUCCESS, a plan to overthrow Guatemalan President Jacobo Arbenz. The plan was exposed in major newspapers before it was carried out, after a CIA agent left plans for the coup in his Guatemala City hotel room.
The Guatemalan Revolution of 1944–54 overthrew the U.S.-backed dictator Jorge Ubico and brought a democratically elected government to power. The government began an ambitious agrarian reform program attempting to grant land to millions of landless peasants. This program threatened the land holdings of the United Fruit Company, which lobbied for a coup by portraying these reforms as communist.The Secrets in Guatemala's Bones. The New York Times. June 30, 2016.
On June 18, 1954, Carlos Castillo Armas led 480 CIA-trained men across the border from Honduras into Guatemala. The weapons had also come from the CIA. The CIA also mounted a psychological campaign to convince the Guatemalan people and government that Armas' victory was a fait accompli, the largest part of which was a radio broadcast entitled "The Voice of Liberation" which announced that Guatemalan exiles led by Castillo Armas were about to liberate the country. On June 25, a CIA plane bombed Guatemala City, destroying the government's main oil reserves. Árbenz ordered the army to distribute weapons to local peasants and workers. The army refused, forcing Jacobo Árbenz's resignation on June 27, 1954. Árbenz handed over power to Colonel Carlos Enrique Diaz. The CIA then orchestrated a series of power transfers that ended with the confirmation of Castillo Armas as president in July 1954. Armas was the first in a series of military dictators that would rule the country, triggering the brutal Guatemalan Civil War in which some 200,000 people were killed, mostly by the U.S.-backed military.Stephen Schlesinger (June 3, 2011). Ghosts of Guatemala's Past. The New York Times. Retrieved 5 July 2014.Nick Cullather, with an afterword by Piero Gleijeses "Secret History: The CIA's Classified Account of Its Operations in Guatemala, 1952–1954". Stanford University Press, 2006.Piero Gleijeses. "Shattered Hope: The Guatemalan Revolution and the United States, 1944–1954". Princeton University Press, 1992.Stephen M. Streeter. "Managing the Counterrevolution: The United States and Guatemala, 1954–1961". Ohio University Press, 2000.Gordon L. Bowen. "U.S. Foreign Policy toward Radical Change: Covert Operations in Guatemala, 1950–1954". Latin American Perspectives, 1983, Vol. 10, No. 1, p. 88-102.Guatemalan Army Waged 'Genocide,' New Report Finds. The New York Times. February 26, 1999.
Syria
In 1949, Colonel Adib Shishakli rose to power in Syria in a CIA-backed coup. Four years later, he would be overthrown by the military, Ba'athists, and communists. The CIA and MI6 started funding right-wing members of the military, but suffered a large setback in the aftermath of the Suez Crisis. CIA agent Rocky Stone, who had played a minor role in the 1953 Iranian coup, was working at the Damascus embassy as a diplomat, but was actually the station chief. Syrian officers on the CIA dole quickly appeared on television stating that they had received money from "corrupt and sinister Americans" "in an attempt to overthrow the legitimate government of Syria." Syrian forces surrounded the embassy and rousted Stone, who confessed and subsequently made history as the first American diplomat expelled from an Arab nation. This strengthened ties between Syria and Egypt, helping establish the United Arab Republic and poisoning the well for the US for the foreseeable future.
Indonesia
The charismatic leader of Indonesia was President Sukarno. His declaration of neutrality in the Cold War drew the CIA's suspicion. After Sukarno hosted the Bandung Conference, promoting the Non-Aligned Movement, the Eisenhower White House responded with NSC 5518, authorizing "all feasible covert means" to move Indonesia into the Western sphere.
The US had no clear policy on Indonesia. Eisenhower sent his special assistant for security operations, F. M. Dearborn Jr., to Jakarta. His report that there was great instability, and that the US lacked stable allies, reinforced the domino theory. Indonesia suffered from what he described as "subversion by democracy". The CIA decided to attempt another military coup in Indonesia, even though the Indonesian military was trained by the US, had a strong professional relationship with it, and had a pro-American officer corps that strongly supported its government and believed in civilian control of the military, a conviction instilled partly by its close association with the US military.
On September 25, 1957, Eisenhower ordered the CIA to start a revolution in Indonesia with the goal of regime change. Three days later, Blitz, a Soviet-controlled weekly in India, reported that the US was plotting to overthrow Sukarno. The story was picked up by the media in Indonesia. One of the first parts of the operation was an 11,500-ton US Navy ship landing at Sumatra, delivering weapons for as many as 8,000 potential revolutionaries.
The CIA described to the President the bombing and strafing runs flown over Indonesia by CIA agent Al Pope in a CIA B-26 as attacks by "dissident planes". Pope's B-26 was shot down over Ambon, Indonesia, on May 18, 1958, and he bailed out. When he was captured, the Indonesian military found his personnel records, after-action reports, and his membership card for the officers' club at Clark Field. On March 9, Foster Dulles, the Secretary of State and brother of DCI Allen Dulles, made a public statement calling for a revolt against communist despotism under Sukarno. Three days later, the CIA reported to the White House that the Indonesian Army's actions against the CIA-instigated revolution were suppressing communism.
After Indonesia, Eisenhower displayed mistrust of both the CIA and its Director, Allen Dulles, and Dulles too displayed mistrust of the CIA itself. Abbot Smith, a CIA analyst who later became chief of the Office of National Estimates, said, "We had constructed for ourselves a picture of the USSR, and whatever happened had to be made to fit into this picture. Intelligence estimators can hardly commit a more abominable sin." That flaw was reflected in the intelligence failure in Indonesia. On December 16, Eisenhower received a report from his intelligence board of consultants that said the agency was "incapable of making objective appraisals of its own intelligence information as well as its own operations."
Congo
In the election of Patrice Lumumba and his acceptance of Soviet support, the CIA saw another possible Cuba. This view swayed the White House, and Eisenhower ordered that Lumumba be "eliminated". The CIA delivered a quarter of a million dollars to Joseph Mobutu, their favored Congolese political figure. Mobutu delivered Lumumba to the Belgians, the former colonial masters of Congo, who executed him in short order.
Gary Powers U-2 shootdown
thumb|Suspended from the ceiling of the glass enclosed atrium: three models of the U-2, Lockheed A-12, and D-21 drone. These models are exact replicas at one-sixth scale of the real planes. All three had photographic capabilities. The U-2 was one of the first espionage planes developed by the CIA. The A-12 set unheralded flight records. The D-21 drone was one of the first unmanned aircraft ever built. Lockheed Martin Corporation donated all three models to the CIA.
After the Bomber Gap came the Missile Gap. Eisenhower wanted to use the U-2 to disprove the Missile Gap, but he had banned U-2 overflights of the USSR after meeting Soviet Premier Nikita Khrushchev at Camp David. Another reason the President objected to the use of the U-2 was that, in the nuclear age, the intelligence he needed most concerned Soviet intentions, without which the US would face a paralysis of intelligence. He was particularly worried that U-2 flights could be seen as preparations for a first strike. He had high hopes for an upcoming meeting with Khrushchev in Paris. Eisenhower finally gave in to CIA pressure to authorize a 16-day window for flights, which was extended an additional six days because of poor weather. On May 1, 1960, the USSR shot down a U-2 flying over Soviet territory. To Eisenhower, the ensuing cover-up destroyed his perceived honesty and his hope of leaving a legacy of thawing relations with Khrushchev. It would also mark the beginning of a long downward slide in the credibility of the Office of the President of the United States. Eisenhower later said that the U-2 cover-up was the greatest regret of his Presidency.
Dominican Republic
The human rights abuses of Generalissimo Rafael Trujillo had a history of more than three decades, but in August 1960 the United States severed diplomatic relations. The CIA's Special Group had decided to arm Dominican dissidents in hopes of an assassination. The CIA had delivered three rifles and three .38 revolvers, but things paused as Kennedy assumed office. An order approved by Kennedy resulted in the delivery of four machine guns. Trujillo died from gunshot wounds two weeks later. In the aftermath, Robert Kennedy wrote that the CIA had succeeded where it had failed many times in the past, but in the face of that success, it was caught flat-footed, having failed to plan what to do next.
Bay of Pigs
The CIA welcomed Fidel Castro on his visit to DC and gave him a face-to-face briefing. The CIA hoped that Castro would bring about a friendly democratic government, and planned to curry his favor with money and guns. On December 11, 1959, a memo reached the DCI's desk recommending Castro's "elimination". Dulles replaced the word "elimination" with "removal" and set the wheels in motion. By mid-August 1960, Dick Bissell would seek, with the blessing of the CIA, to hire the Mafia to assassinate Castro.
The Bay of Pigs Invasion was a failed military invasion of Cuba undertaken by the CIA-sponsored paramilitary group Brigade 2506 on April 17, 1961. A counter-revolutionary military force, trained and funded by the CIA, Brigade 2506 fronted the armed wing of the Democratic Revolutionary Front (DRF) and intended to overthrow the increasingly communist government of Fidel Castro. Launched from Guatemala, the invading force was defeated within three days by the Cuban Revolutionary Armed Forces, under the direct command of Prime Minister Fidel Castro. US President Dwight D. Eisenhower was concerned at the direction Castro's government was taking, and in March 1960 he allocated $13.1 million to the CIA to plan Castro's overthrow. The CIA proceeded to organize the operation with the aid of various Cuban counter-revolutionary forces, training Brigade 2506 in Guatemala. Over 1,400 paramilitaries set out for Cuba by boat on April 13. Two days later on April 15, eight CIA-supplied B-26 bombers attacked Cuban air fields. On the night of April 16, the main invasion landed in the Bay of Pigs, but by April 20 the invaders had surrendered. The failed invasion strengthened the position of Castro's leadership as well as his ties with the USSR, and led eventually to the events of the Cuban Missile Crisis of 1962. The invasion was a major embarrassment for US foreign policy. US President John F. Kennedy ordered a number of internal investigations across Latin America.
The Taylor Board was commissioned to determine what went wrong in Cuba. The Board reached the same conclusion as the January 1961 President's Board of Consultants on Foreign Intelligence Activities, and as many other reviews before and since: covert action had to be completely isolated from intelligence and analysis. The Inspector General of the CIA investigated the Bay of Pigs and concluded that there was a need to drastically improve the organization and management of the CIA. The Special Group (later renamed the 303 Committee) was convened in an oversight role.
Early Cold War, 1953–1966
thumb|Lockheed U-2 "Dragon Lady", the first generation of near-space reconnaissance aircraft
thumb|Early CORONA/KH-4B imagery IMINT satellite
thumb|The USAF's SR-71 Blackbird was developed from the CIA's A-12 OXCART.
The CIA was involved in anti-Communist activities in Burma, Guatemala, and Laos."The Role of Intelligence" (1965). Congress and the Nation. p. 306 There have been suggestions that the Soviet attempt to put missiles into Cuba came, indirectly, when they realized how badly they had been compromised by a U.S.-UK defector in place, Oleg Penkovsky. One of the biggest operations ever undertaken by the CIA was directed at Zaïre in support of general-turned-dictator Mobutu Sese Seko.
Indochina, Tibet and the Vietnam War (1954–1975)
The OSS Patti mission arrived in Vietnam near the end of World War II, and had significant interaction with the leaders of many Vietnamese factions, including Ho Chi Minh.
The CIA Tibetan program consists of political plots, propaganda distribution, as well as paramilitary and intelligence gathering based on U.S. commitments made to the Dalai Lama in 1951 and 1956.
During the period of U.S. combat involvement in the Vietnam War, there was considerable argument about progress among the Department of Defense under Robert McNamara, the CIA, and, to some extent, the intelligence staff of Military Assistance Command Vietnam.
Sometime between 1959 and 1961 the CIA started Project Tiger, a program of dropping South Vietnamese agents into North Vietnam to gather intelligence. These were failures; the Deputy Chief for Project Tiger, Captain Do Van Tien, admitted that he was an agent for Hanoi.
Johnson
In the face of the failure of Project Tiger, the Pentagon wanted CIA paramilitary forces to participate in its Op Plan 64A. This resulted in the CIA's foreign paramilitaries being put under the command of the DOD, a move seen inside the CIA as a slippery slope from covert action towards militarization.
A CIA analyst's assessment of Vietnam was that the US was "becoming progressively divorced from reality... [and] proceeding with far more courage than wisdom".
Nixon
In 1971, the NSA and CIA were engaged in domestic spying. The DOD was eavesdropping on Kissinger. The White House and Camp David were wired for sound. Nixon and Kissinger were eavesdropping on their aides, as well as reporters. Famously, Nixon's Plumbers included many former CIA agents, among them Howard Hunt, Jim McCord, and Eugenio Martinez. On July 7, 1971, John Ehrlichman, Nixon's domestic policy chief, told Deputy DCI Cushman, Nixon's hatchet-man in the CIA, that Hunt "was in fact doing some things for the President... you should consider he has pretty much carte blanche". Importantly, this included a camera, disguises, a voice-altering device, and ID papers furnished by the CIA, as well as the CIA's participation in developing film from the burglary Hunt staged on the office of Pentagon Papers leaker Daniel Ellsberg's psychiatrist.
On June 17, Nixon's Plumbers were caught burglarizing the DNC offices in the Watergate. On June 23, DCI Helms was ordered by the White House to wave the FBI off, using national security as a pretext. Deputy Director Walters, another Nixon loyalist, called the acting director of the FBI and told him to drop the investigation as ordered. On June 26, Nixon's counsel John Dean ordered Walters to pay the Plumbers untraceable hush money. The CIA was the only part of the government that had the power to make off-the-books payments, but it could only be done on the orders of the DCI or, if he was out of the country, his deputy. The Acting Director of the FBI started breaking ranks, demanding that the CIA produce a signed document attesting to the national security threat of the investigation. Jim McCord's lawyer contacted the CIA, informing them that McCord had been offered a Presidential pardon if he fingered the CIA by testifying that the break-in had been a CIA operation. Nixon had long been frustrated by what he saw as a liberal infection inside the CIA, and had been trying for years to tear the CIA out by its roots. McCord wrote, "If [DCI] Helms goes (takes the fall) and the Watergate operation is laid at the CIA's feet, where it does not belong, every tree in the forest will fall. It will be a scorched desert."
On November 13, after Nixon's landslide re-election, Nixon told Kissinger "[I intend] to ruin the Foreign Service. I mean ruin it - the old Foreign Service - and to build a new one." He had similar designs for the CIA, and intended to replace Helms with James Schlesinger. Nixon had told Helms that he was on the way out, and promised that Helms could stay on until his 60th birthday, the mandatory retirement age. On February 2, Nixon broke that promise, carrying through with his intention to "remove the deadwood" from the CIA. "Get rid of the clowns" was his order to the incoming DCI. Kissinger had been running the CIA since the beginning of Nixon's presidency, but Nixon impressed on Schlesinger that he must appear to Congress to be in charge, averting congressional suspicion of Kissinger's involvement. Nixon also hoped that Schlesinger could push through broader changes in the intelligence community that Nixon had been working towards for years: the creation of a Director of National Intelligence, and the spinning off of the covert action part of the CIA into a separate organ. Before Helms left office, he destroyed every tape he had secretly made of meetings in his office, and many of the papers on Project MKUltra. In Schlesinger's 17-week tenure, he fired more than 1,500 employees. As Watergate threw the spotlight on the CIA, Schlesinger, who had been kept in the dark about the CIA's involvement, decided he needed to know what skeletons were in the closet. He issued a memo to every CIA employee directing them to disclose to him any CIA activity they knew of, past or present, that could fall outside the scope of the CIA's charter.
This became the Family Jewels. It included information linking the CIA to the assassination of foreign leaders and the illegal surveillance of some 7,000 U.S. citizens involved in the antiwar movement (Operation CHAOS). The CIA had also experimented on U.S. and Canadian citizens without their knowledge, secretly giving them LSD (among other things) and observing the results. These revelations prompted Congress to create the Church Committee in the Senate and the Pike Committee in the House. President Gerald Ford created the Rockefeller Commission and issued an executive order prohibiting the assassination of foreign leaders. DCI Colby leaked the papers to the press; he later stated that he believed that providing Congress with this information was the correct thing to do, and ultimately in the CIA's own interests.
Congressional Investigations
When Acting Attorney General Laurence Silberman learned of the existence of the Family Jewels, he issued a subpoena for them, prompting eight congressional investigations into the domestic spying activities of the CIA. Bill Colby's short tenure as DCI would end with the Halloween Massacre; his replacement was George H. W. Bush. At the time, the DOD had control of 80% of the intelligence budget. Communication and coordination between the CIA and the DOD would suffer greatly under Defense Secretary Donald Rumsfeld. The CIA's budget for hiring clandestine officers had been squeezed out by the paramilitary operations in Southeast Asia, and hiring was further strained by the government's poor popularity. This left the Agency bloated with middle management and anemic in younger officers. With employee training taking five years, the Agency's only hope was the trickle of new officers who would come to maturity years in the future. The CIA would see another setback as communists took Angola. William J. Casey, a member of Ford's Intelligence Advisory Board, pressed Bush to allow a team from outside the CIA to produce Soviet military estimates as a "Team B". Bush agreed. The "B" team was composed of hawks. Their estimates were the highest that could be justified, and they painted a picture of a growing Soviet military when the Soviet military was actually shrinking. Many of their reports found their way to the press. As a result of the investigations, Congressional oversight of the CIA eventually evolved into select intelligence committees in the House and Senate supervising covert actions authorized by the President.
Chad
Chad's neighbor Libya was a major source of weaponry to communist rebel forces. The CIA seized the opportunity to arm and finance Chad's Prime Minister, Hissène Habré, after he created a breakaway government in western Sudan, even giving him Stinger missiles.
Afghanistan
In Afghanistan, the CIA funneled $40 billion worth of weapons, including over two thousand FIM-92 Stinger surface-to-air missiles, to Pakistan's Inter-Services Intelligence (ISI), which passed them on to almost 100,000 Afghan resistance fighters, notably the mujahideen, and to foreign "Afghan Arabs" from forty Muslim countries.
Iran/Contra
Under President Carter, the CIA was covertly funding pro-American opposition to the Sandinistas. In March 1981, Reagan told Congress that the CIA would protect El Salvador by preventing the shipment of Nicaraguan arms into the country to arm Communist rebels. This was a ruse. The CIA was actually arming and training Nicaraguan Contras in Honduras in hopes that they could depose the Sandinistas in Nicaragua. Throughout William J. Casey's tenure as DCI, little of what he said in the National Security Planning Group or to President Reagan was supported by the intelligence branch of the CIA, so Casey formed the Central American Task Force, staffed with yes-men from covert action. On December 21, 1982, Congress passed a law restricting the CIA to its stated mission, restricting the flow of arms from Nicaragua to El Salvador, and prohibiting the use of funds to oust the Sandinistas. Reagan assured Congress that the CIA was not trying to topple the Nicaraguan government.
Lebanon
The CIA's prime source in Lebanon was Bashir Gemayel, a member of the Christian Maronite sect. The CIA was blindsided by the uprising against the Maronite minority. Israel invaded Lebanon and, along with the CIA, propped up Gemayel, obtaining his assurance that Americans would be protected in Lebanon. Thirteen days later he was assassinated. Imad Mughniyah, a Hezbollah assassin, would target Americans in retaliation for the Israeli invasion, the Sabra and Shatila massacre, and the role of the US Marines of the Multi-National Force in opposing the PLO in Lebanon. On April 18, 1983, a 2,000 lb car bomb exploded in the lobby of the American embassy in Beirut, killing 63 people, including 17 Americans and 7 CIA officers, among them Robert Ames, one of the CIA's best Middle East experts. America's fortunes in Lebanon would only suffer more as America's poorly directed retaliation for the bombing was interpreted by many as support for the Christian Maronite minority. On October 23, 1983, two bombs were detonated in Beirut, including a 10-ton bomb at a US military barracks that killed 242 people. Both attacks are believed to have been planned by Iran by way of Mughniyah.
The embassy bombing had taken the life of the CIA's Beirut station chief, Ken Haas. Bill Buckley was sent in to replace him. Eighteen days after the US Marines left Lebanon, Buckley was kidnapped. On March 7, 1984, Jeremy Levin, CNN Bureau Chief in Beirut, was kidnapped. Twelve more Americans would be kidnapped in Beirut during the Reagan Administration. Manucher Ghorbanifar, a former SAVAK agent, was an information peddler and the subject of a rare CIA burn notice for his track record of misinformation. He reached out to the Agency offering a back channel to Iran, suggesting a trade of missiles that would be lucrative to the intermediaries.
Poland 1980–89
Unlike the Carter Administration, the Reagan Administration supported the Solidarity movement in Poland, and—based on CIA intelligence—waged a public relations campaign to deter what the Carter administration felt was "an imminent move by large Soviet military forces into Poland." Colonel Ryszard Kukliński, a senior officer on the Polish General Staff, was secretly sending reports to the CIA.Richard T. Davies, "The CIA and the Polish Crisis of 1980–1981." Journal of Cold War Studies (2004) 6#3 pp: 120–123. online The CIA transferred around $2 million yearly in cash to Solidarity, which suggests that $10 million is a reasonable estimate for the five-year total. There were no direct links between the CIA and Solidarność, and all money was channeled through third parties.Domber 2014, p. 110. CIA officers were barred from meeting Solidarity leaders, and the CIA's contacts with Solidarność activists were weaker than those of the AFL-CIO, which raised $300,000 from its members and used it to provide material and cash directly to Solidarity, with no control over Solidarity's use of it. The U.S. Congress authorized the National Endowment for Democracy to promote democracy, and the NED allocated $10 million to Solidarity. When the Polish government launched a crackdown of its own in December 1981, however, Solidarity was not alerted. Potential explanations for this vary; some believe that the CIA was caught off guard, while others suggest that American policy-makers viewed an internal crackdown as preferable to an "inevitable Soviet intervention."MacEachin, Douglas J. "US Intelligence and the Polish Crisis 1980–1981." CIA. June 28, 2008. CIA support for Solidarity included money, equipment, and training, coordinated by the CIA's Special Operations division.Carl Bernstein, "Cover Story: The Holy Alliance", June 24, 2001. Henry Hyde, a member of the U.S. House intelligence committee, stated that the US provided "supplies and technical assistance in terms of clandestine newspapers, broadcasting, propaganda, money, organizational help and advice".Gerald Sussman, Branding Democracy: U.S. Regime Change in Post-Soviet Eastern Europe, p. 128 Michael Reisman of Yale Law School named operations in Poland as one of the CIA's covert actions during the Cold War.Looking to the Future: Essays on International Law in Honor of W. Michael Reisman Initial funds for CIA covert action were $2 million, but they were increased soon after authorization, and by 1985 the CIA had successfully infiltrated Poland.William J. Daugherty, Executive Secrets: Covert Action and the Presidency, pp. 201–203 Rainer Thiel, in Nested Games of External Democracy Promotion: The United States and the Polish Liberalization 1980–1989, describes how CIA covert operations and spy games, among other factors, allowed the US to proceed with successful regime change.Rainer Thiel, Nested Games of External Democracy Promotion: The United States and the Polish Liberalization 1980–1989, p. 273
Operation Desert Storm
During the Iran–Iraq war, the CIA had backed both sides. The CIA had maintained a network of spies in Iran, but in 1989 a CIA mistake compromised every agent it had there, and the CIA had no agents in Iraq. In the weeks before the invasion of Kuwait, the CIA downplayed the military buildup. During the war, CIA estimates of Iraqi abilities and intentions flip-flopped and were rarely accurate. In one particular case, the DOD had asked the CIA to identify military targets to bomb. One target the CIA identified was an underground shelter; the CIA did not know that it was a civilian bomb shelter. In a rare instance, the CIA correctly determined that the coalition forces' efforts were coming up short in their attempts to destroy SCUD missiles. Congress took away the CIA's role in interpreting spy-satellite photos, putting the CIA's satellite intelligence operations under the auspices of the military. The CIA created its Office of Military Affairs, which operated as "second-echelon support for the pentagon... answering... questions from military men [like] 'how wide is this road?'"
Fall of the USSR
Gorbachev's announcement of the unilateral reduction of 500,000 Soviet troops took the CIA by surprise. Moreover, Doug MacEachin, the CIA's Chief of Soviet analysis, said that even if the CIA had told the President, the NSC, and Congress about the cuts beforehand, it would have been ignored: "We never would have been able to publish it." All the CIA numbers on the USSR's economy were wrong. Too often the CIA relied on people with no experience in the subjects on which they were supposed to be the experts. Bob Gates had preceded Doug MacEachin as Chief of Soviet analysis, and he had never visited Russia. Few officers, even those stationed in country, spoke the language of the people they were spying on. And the CIA had no capacity to send agents to respond to developing situations. The CIA's analysis of Russia during the entire Cold War was driven either by ideology or by politics. William J. Crowe, the Chairman of the Joint Chiefs of Staff, noted that the CIA "talked about the Soviet Union as if they weren't reading the newspapers, much less developed clandestine intelligence."
President Clinton
On January 25, 1993, Mir Qazi opened fire at the CIA headquarters in Langley, Virginia, killing two agents and wounding three others. On February 26, Al-Qaeda terrorists led by Ramzi Yousef bombed the parking garage below the North Tower of the World Trade Center in New York City, killing six people and injuring 1,402 others.
During the Bosnian War, the CIA ignored signs, both from within the agency and from outside it, of the Srebrenica massacre. Two weeks after news reports of the slaughter, the CIA sent a U-2 to photograph it; a week later the CIA completed its report on the matter. During Operation Allied Force, the CIA incorrectly provided the coordinates of the Chinese Embassy as a Yugoslav military target, resulting in its bombing.
In France, where the CIA had orders to collect economic intelligence, a female CIA agent revealed her connections to the CIA to the French, and Dick Holm, the Paris station chief, was expelled. In Guatemala, the CIA produced the Murphy Memo, based on audio recordings made by bugs that Guatemalan intelligence had planted in the bedroom of Ambassador Marilyn McAfee. In the recording, Ambassador McAfee verbally entreated "Murphy". The CIA circulated a memo in the highest Washington circles accusing Ambassador McAfee of having an extramarital lesbian affair with her secretary, Carol Murphy. There was no affair: Ambassador McAfee was calling to Murphy, her poodle.
Harold James Nicholson would burn several serving officers and three years of trainees before he was caught spying for Russia. In 1997 the House would pen another report, which said that CIA officers know little about the language or politics of the people they spy on; its conclusion was that the CIA lacked the "depth, breadth, and expertise to monitor political, military, and economic developments worldwide." Russ Travers said in the CIA in-house journal that within five years "intelligence failure is inevitable". In 1997 the CIA's new director, George Tenet, promised a new working agency by 2002. The CIA's surprise at India's detonation of an atom bomb was a failure at almost every level. After the 1998 embassy bombings by Al Qaeda, the CIA offered two targets to be hit in retaliation. One of them was the Al-Shifa pharmaceutical factory, where traces of chemical weapon precursors had been detected. In the aftermath it was concluded that "the decision to target al Shifa continues a tradition of operating on inadequate intelligence about Sudan." It triggered the CIA to make "substantial and sweeping changes" to prevent "a catastrophic systemic intelligence failure." Between 1991 and 1998 the CIA had lost 3,000 employees.
Aldrich Ames
Between 1985 and 1986 the CIA lost every spy it had in Eastern Europe. The details of the investigation into the cause were kept from the new Director; the investigation had little success and has been widely criticized. In June 1987, Major Florentino Aspillaga Lombard, the chief of Cuban intelligence in Czechoslovakia, drove into Vienna and walked into the American Embassy to defect. He revealed that every single Cuban spy on the CIA payroll was a double agent, pretending to work for the CIA but secretly still loyal to Castro. On February 21, 1994, FBI agents pulled Aldrich Ames out of his Jaguar. In the investigation that ensued, the CIA discovered that many of the sources for its most important analyses of the USSR were based on Soviet disinformation fed to the CIA by controlled agents. On top of that, it was discovered that, in some cases, the CIA suspected at the time that the sources were compromised, but the information was sent up the chain as genuine.
Osama Bin Laden
Agency files show that Osama Bin Laden is believed to have funded the Afghan rebels against the USSR in the 1980s. In 1991, Bin Laden returned to his native Saudi Arabia protesting the presence of foreign troops and Operation Desert Storm. He was expelled from the country. In 1996 the CIA created a team to hunt Bin Laden. It traded information with the Sudanese until, on the word of a source that would later be found to be a fabricator, the CIA closed its Sudan station later that year. In 1998 Bin Laden would declare war on America and, on August 7, strike in Dar es Salaam and Nairobi. On October 12, 2000, Al Qaeda bombed the USS Cole. In 1947, when the CIA was founded, there were 200 agents in the Clandestine Service. In 2001, of the 17,000 employees in the CIA, there were 1,000 in the Clandestine Service, and of that 1,000 few would accept hardship postings. In the first days of George W. Bush's Presidency, Al Qaeda threats were ubiquitous in daily Presidential CIA briefings, but it may have become a case of the boy who cried wolf. The Agency's predictions were dire but carried little weight, and the attention of the President and his defense staff was elsewhere. The CIA arranged the arrests of suspected Al Qaeda members through cooperation with foreign agencies, but the CIA could not definitively say what effect these arrests had, and it could not gain hard intelligence from those captured. The President had asked the CIA whether Al Qaeda could plan attacks in the US. On August 6, Bush received a daily briefing with the headline, not based on current, solid intelligence, "Bin Ladin Determined To Strike in US." The US had been hunting Bin Laden since 1996 and had had several opportunities, but neither Clinton nor Bush had wanted to risk taking an active role in a murky assassination plot, and the perfect opportunity that would have given a trigger-shy DCI the reassurance he needed to take the plunge never materialized. That day, Richard A. Clarke sent National Security Advisor Condoleezza Rice a warning of the risks, decrying the inaction of the CIA.
Al-Qaeda and the "Global War on Terrorism"
thumb|The CIA prepared a series of leaflets announcing bounties for those who turned in or denounced individuals suspected of association with the Taliban or al Qaeda.
The CIA had long been dealing with terrorism originating from abroad, and in 1986 had set up a Counterterrorist Center to deal specifically with the problem. At first confronted with secular terrorism, the Agency found Islamist terrorism looming increasingly large on its scope.
In January 1996, the CIA created an experimental "virtual station," the Bin Laden Issue Station, under the Counterterrorist Center, to track Bin Laden's developing activities. Al-Fadl, who defected to the CIA in spring 1996, began to provide the Station with a new image of the Al Qaeda leader: he was not only a terrorist financier, but a terrorist organizer, too. FBI Special Agent Dan Coleman (who together with his partner Jack Cloonan had been "seconded" to the Bin Laden Station) called him al-Qaeda's "Rosetta Stone".
In 1999, CIA chief George Tenet launched a grand "Plan" to deal with al-Qaeda. The Counterterrorist Center, its new chief Cofer Black and the center's Bin Laden unit were the Plan's developers and executors. Once it was prepared, Tenet assigned CIA intelligence chief Charles E. Allen to set up a "Qaeda cell" to oversee its tactical execution. In 2000, the CIA and USAF jointly ran a series of flights over Afghanistan with a small remote-controlled reconnaissance drone, the Predator; they obtained probable photos of Bin Laden. Cofer Black and others became advocates of arming the Predator with missiles to try to assassinate Bin Laden and other al-Qaeda leaders. After the Cabinet-level Principals Committee meeting on terrorism of September 4, 2001, the CIA resumed reconnaissance flights, the drones now being weapons-capable.
September 11 attacks and its aftermath
thumb|right|US Special Forces help Northern Alliance troops away from a CIA-operated MI-17 Hip helicopter at Bagram Airbase, 2002
On September 11, 2001, 19 Al-Qaeda members hijacked four passenger jets within the Northeastern United States in a series of coordinated terrorist attacks. Two planes crashed into the Twin Towers of the World Trade Center in New York City, the third into the Pentagon in Arlington County, Virginia, and the fourth into a field near Shanksville, Pennsylvania, after passengers fought the hijackers. The attacks cost the lives of 2,996 people (including the 19 hijackers), caused the destruction of the Twin Towers, and damaged the western side of the Pentagon. Soon after 9/11, the New York Times released a story stating that the CIA's New York field office was destroyed in the attacks. According to unnamed CIA sources, while first responders, military personnel and volunteers were conducting rescue efforts at the World Trade Center site, a special CIA team was searching the rubble for both digital and paper copies of classified documents. This was done according to well-rehearsed document recovery procedures put in place after the Iranian takeover of the United States Embassy in Tehran in 1979. While it was not confirmed whether the agency was able to retrieve the classified information, it is known that all agents present that day fled the building safely.
While the CIA insists that those who conducted the attacks on 9/11 were not aware that the agency was operating at 7 World Trade Center under the guise of another (unidentified) federal agency, this center was the headquarters for many notable criminal terrorism investigations. Though the New York field office's main responsibilities were to monitor and recruit foreign officials stationed at the United Nations, the field office also handled the investigations of the August 1998 bombings of United States embassies in East Africa and the October 2000 bombing of the USS Cole. Although the CIA's New York branch may have been damaged by the 9/11 attacks and the agency had to borrow office space from the US Mission to the United Nations and other federal agencies, there was an upside for the CIA: in the months immediately following 9/11, there was a huge increase in the number of applications for CIA positions. According to CIA representatives who spoke with the New York Times, before 9/11 the agency received approximately 500 to 600 applications a week; in the months following 9/11, it received that number daily.
The intelligence community as a whole, and especially the CIA, were involved in presidential planning immediately after the 9/11 attacks. In his address to the nation at 8:30 pm on September 11, 2001, George W. Bush mentioned the intelligence community: "The search is underway for those who are behind these evil acts. I've directed the full resources of our intelligence and law enforcement communities to find those responsible and bring them to justice."
The involvement of the CIA in the newly coined "War on Terror" was further increased on September 15, 2001. During a meeting at Camp David, George W. Bush agreed to adopt a plan proposed by CIA director George Tenet. This plan consisted of conducting a covert war in which CIA paramilitary officers would cooperate with anti-Taliban guerillas inside Afghanistan. They would later be joined by small special operations forces teams which would call in precision airstrikes on Taliban and Al Qaeda fighters. This plan was codified on September 16, 2001, with Bush's signing of an official Memorandum of Notification that allowed the plan to proceed.
thumb|Former CIA director Robert Gates meets with Russian Minister of Defense and ex-KGB officer Sergei Ivanov, 2007
On November 25–27, 2001, Taliban prisoners revolted at the Qala Jangi prison west of Mazar-e-Sharif. Though several days of struggle occurred between the Taliban prisoners and the Northern Alliance members present, the prisoners gained the upper hand and obtained Northern Alliance weapons. At some point during this period Johnny "Mike" Spann, a CIA officer sent to question the prisoners, was beaten to death. He became the first American to die in combat in the war in Afghanistan.
After 9/11, the CIA came under criticism for not having done enough to prevent the attacks. Tenet rejected the criticism, citing the Agency's planning efforts especially over the preceding two years. He also considered that the CIA's efforts had put the Agency in a position to respond rapidly and effectively to the attacks, both in the "Afghan sanctuary" and in "ninety-two countries around the world". The new strategy was called the "Worldwide Attack Matrix".
Anwar al-Awlaki, a Yemeni-American U.S. citizen and al-Qaeda member, was killed on September 30, 2011, by an air attack carried out by the Joint Special Operations Command. After several days of surveillance of Awlaki by the Central Intelligence Agency, armed drones took off from a new, secret American base in the Arabian Peninsula, crossed into northern Yemen, and fired a number of Hellfire missiles at al-Awlaki's vehicle. Samir Khan, a Pakistani-American al-Qaeda member and editor of the jihadist Inspire magazine, also reportedly died in the attack. The combined CIA/JSOC drone strike was the first in Yemen since 2002 – there have been others by the military's Special Operations forces – and was part of an effort by the spy agency to duplicate in Yemen the covert war which has been running in Afghanistan and Pakistan.
Use of vaccination programs
The agency attracted widespread criticism after it used a doctor in Pakistan to set up a vaccination program in Abbottabad in 2011 to obtain DNA samples from the occupants of a compound where it was suspected bin Laden was living.
Failures in intelligence analysis
A major criticism is failure to forestall the September 11 attacks. The 9/11 Commission Report identifies failures in the IC as a whole. One problem, for example, was the FBI failing to "connect the dots" by sharing information among its decentralized field offices.
The report concluded that former DCI George Tenet failed to adequately prepare the agency to deal with the danger posed by al-Qaeda prior to the attacks of September 11, 2001. The report was finished in June 2005 and was partially released to the public in an agreement with Congress, over the objections of CIA Director General Michael Hayden. Hayden said its publication would "consume time and attention revisiting ground that is already well plowed." Tenet disagreed with the report's conclusions, citing his planning efforts vis-à-vis al-Qaeda, particularly from 1999.
Abuses of CIA authority, 1970s–1990s
Conditions worsened in the mid-1970s, around the time of Watergate. A dominant feature of political life during that period was the attempt by Congress to assert oversight of the U.S. presidency and the executive branch of the U.S. government. Revelations about past CIA activities, such as assassinations and attempted assassinations of foreign leaders (most notably Fidel Castro and Rafael Trujillo) and illegal domestic spying on U.S. citizens, provided the opportunity to increase Congressional oversight of U.S. intelligence operations.
thumb|Nixon Oval Office meeting with H.R. Haldeman "Smoking Gun" Conversation June 23, 1972, Full Transcript
Hastening the CIA's fall from grace were the burglary of the Watergate headquarters of the Democratic Party by former CIA officers, and President Richard Nixon's subsequent attempt to use the CIA to impede the FBI's investigation of the burglary. In the famous "smoking gun" recording that led to President Nixon's resignation, Nixon ordered his chief of staff, H. R. Haldeman, to tell the CIA that further investigation of Watergate would "open the whole can of worms" about the Bay of Pigs Invasion of Cuba. In this way Nixon and Haldeman ensured that the CIA's No. 1 and No. 2 ranking officials, Richard Helms and Vernon Walters, communicated to FBI Director L. Patrick Gray that the FBI should not follow the money trail from the burglars to the Committee to Re-elect the President, as it would uncover CIA informants in Mexico. The FBI initially agreed to this due to a long-standing agreement between the FBI and CIA not to uncover each other's sources of information. Within a couple of weeks, however, the FBI demanded the request in writing, and when no such formal request came, it resumed its investigation into the money trail. Nonetheless, when the smoking gun tapes were made public, damage to the public's perception of the CIA's top officials, and thus to the CIA as a whole, could not be avoided.
thumb|left|President Gerald Ford meets with CIA Director-designate George H. W. Bush, December 17, 1975
Repercussions from the Iran-Contra affair arms smuggling scandal included the creation of the Intelligence Authorization Act in 1991. It defined covert operations as secret missions in geopolitical areas where the U.S. is neither openly nor apparently engaged. It also required an authorizing chain of command, including an official presidential finding report and the informing of the House and Senate Intelligence Committees, which, in emergencies, requires only "timely notification."
Iraq War
Seventy-two days after the 9/11 attacks, President Bush told his Secretary of Defense to update the US plan for an invasion of Iraq, but not to tell anyone. Secretary of Defense Donald Rumsfeld asked Bush if he could bring DCI Tenet into the loop, and Bush agreed.
The feelers the CIA had put out to Iraq, in the form of eight of its best officers operating in Kurdish territory in northern Iraq, hit a goldmine, gaining access unprecedented in the famously closed, almost fascist Hussein government. By December 2002 the CIA had close to a dozen good networks in Iraq and would advance so far as to penetrate Iraq's SSO and even tap the encrypted communications of the Deputy Prime Minister; even the bodyguard of Hussein's son became an agent. As time passed, the CIA became more and more frantic about the possibility of its networks being compromised, or "rolled up". To the CIA, the invasion had to occur before the end of February 2003 if its sources inside Hussein's government were to survive. The roll-up happened as predicted: 37 CIA sources were identified by the Thuraya satellite telephones the CIA had provided them.
thumb|Former CIA deputy director Michael Morell (left) apologized to Colin Powell for the CIA’s erroneous assessments of Iraq’s WMD programs."Morell "wanted to apologize" to Powell about WMD evidence". CBS News. May 11, 2015.
The case Colin Powell presented before the United Nations (purportedly proving an Iraqi WMD program) rested on wishful thinking. DDCI John E. McLaughlin was part of a long discussion within the CIA about equivocation. McLaughlin, who would make, among others, the "slam dunk" presentation to the President, "felt that they had to dare to be wrong to be clearer in their judgements". The Al Qaeda connection, for instance, came from a single source, was extracted through torture, and was later recanted. Curveball, the sole source for the claimed mobile biological weapons factories, was a known liar. A postmortem of the intelligence failures in the lead-up to Iraq, led by former DDCI Richard Kerr, concluded that the CIA had been a casualty of the Cold War, wiped out in a way "analogous to the effect of the meteor strikes on the dinosaurs."
The opening days of the invasion of Iraq saw successes and defeats for the CIA. With its Iraq networks compromised and its strategic and tactical information shallow and often wrong, the intelligence side of the invasion itself was a black eye for the Agency. The CIA saw some success with its "Scorpion" paramilitary teams, composed of CIA Special Activities Division agents along with friendly Iraqi partisans. CIA SAD officers also helped the US 10th Special Forces Group. The occupation of Iraq was a low point in the history of the CIA. At the largest CIA station in the world, agents rotated through one-to-three-month tours. In Iraq almost 500 transient agents were confined to the Green Zone, while Iraq station chiefs rotated only slightly less frequently.
2004, DNI takes over CIA top-level functions
The Intelligence Reform and Terrorism Prevention Act of 2004 created the office of the Director of National Intelligence (DNI), who took over some of the government and intelligence community (IC)-wide functions that had previously been the CIA's. The DNI manages the United States Intelligence Community and in so doing it manages the intelligence cycle. Among the functions that moved to the DNI were the preparation of estimates reflecting the consolidated opinion of the 16 IC agencies, and preparation of briefings for the president. On July 30, 2008, President Bush issued Executive Order 13470 amending Executive Order 12333 to strengthen the role of the DNI.
Previously, the Director of Central Intelligence (DCI) oversaw the Intelligence Community, serving as the president's principal intelligence advisor, additionally serving as head of the CIA. The DCI's title now is "Director of the Central Intelligence Agency" (D/CIA), serving as head of the CIA.
Currently, the CIA reports to the Director of National Intelligence. Prior to the establishment of the DNI, the CIA reported to the President, with informational briefings to congressional committees. The National Security Advisor is a permanent member of the National Security Council, responsible for briefing the President with pertinent information collected by all U.S. intelligence agencies, including the National Security Agency, the Drug Enforcement Administration, etc. All 16 Intelligence Community agencies are under the authority of the Director of National Intelligence.
Operation Neptune Spear
On May 1, 2011, President Barack Obama announced that Osama bin Laden was killed earlier that day by "a small team of Americans" operating in Abbottabad, Pakistan, during a CIA operation. The raid was executed from a CIA forward base in Afghanistan by elements of the U.S. Navy's Naval Special Warfare Development Group and CIA paramilitary operatives.
It resulted in the acquisition of extensive intelligence on the future attack plans of al-Qaeda.
The operation was the result of years of intelligence work that included the CIA's capture and interrogation of Khalid Sheikh Mohammed (KSM), which led to the identification of one of bin Laden's couriers; the tracking of the courier to the compound by Special Activities Division paramilitary operatives; and the establishment of a CIA safe house that provided critical tactical intelligence for the operation.
Syrian Civil War
Under the aegis of operation Timber Sycamore and other clandestine activities, CIA operatives and U.S. special operations troops have trained and armed nearly 10,000 rebel fighters at a cost of $1 billion a year. The CIA has been sending weapons to anti-government rebels in Syria since at least 2012. These weapons have been reportedly falling into hands of extremists, such as al-Nusra Front and ISIL.
Reorganization
On March 6, 2015, the office of the D/CIA issued an unclassified edition of a statement by the Director, titled 'Our Agency's Blueprint for the Future', as a press release for public consumption. The press release announced sweeping plans for the reorganization and reform of the CIA, which the Director believed would bring the CIA more in line with the Agency doctrine called the 'Strategic Direction'. The principal changes disclosed include the establishment of a new directorate, the Directorate of Digital Innovation, which is responsible for designing and crafting the digital technology used by the Agency and for keeping the CIA ahead of its adversaries. The Directorate of Digital Innovation will also train CIA staff in the use of this technology, to prepare the CIA for the future, and will use the technological revolution to deal with cyber-terrorism and other perceived threats. The new directorate will be the chief cyber-espionage arm of the Agency going forward.
Other announced changes include the formation of a Talent Development Center of Excellence, the enhancement and expansion of the CIA University, and the creation of the office of the Chancellor to head the CIA University in order to consolidate and unify recruitment and training efforts. The office of the Executive Director will be empowered and expanded, and the secretarial offices serving the Executive Director will be streamlined. The entire Agency is to be restructured according to a new model in which governance is modelled after the structure and hierarchy of corporations, which is said to increase the efficiency of workflow and to enable the Executive Director to manage day-to-day activity more effectively. Another stated intention was to establish 'Mission Centers', each one dealing with a specific geographic region of the world, which would bring the full collaboration and joint efforts of the five Directorates together under one roof. While the Directorate heads will retain ultimate authority over their respective Directorates, the Mission Centers will be led by an Assistant Director who will draw on the capabilities and talents of all five Directorates for mission-specific goals in the parts of the world for which they are given responsibility.
The unclassified version of the document ends with the announcement that the National Clandestine Service (NCS) will revert to its original name, the Directorate of Operations. The Directorate of Intelligence is also being renamed; it will now be the Directorate of Analysis.
Open Source Intelligence
Until the 2004 reorganization of the intelligence community, one of the "services of common concern" that the CIA provided was Open Source Intelligence from the Foreign Broadcast Information Service (FBIS). FBIS, which had absorbed the Joint Publication Research Service, a military organization that translated documents, moved into the National Open Source Enterprise under the Director of National Intelligence.
During the Reagan administration, Michael Sekora (assigned to the DIA), worked with agencies across the intelligence community, including the CIA, to develop and deploy a technology-based competitive strategy system called Project Socrates. Project Socrates was designed to utilize open source intelligence gathering almost exclusively. The technology-focused Socrates system supported such programs as the Strategic Defense Initiative in addition to private sector projects.
As part of its mandate to gather intelligence, the CIA is looking increasingly online for information, and has become a major consumer of social media. "We're looking at YouTube, which carries some unique and honest-to-goodness intelligence," said Doug Naquin, director of the DNI Open Source Center (OSC) at CIA headquarters. "We're looking at chat rooms and things that didn't exist five years ago, and trying to stay ahead." CIA launched a Twitter account in June 2014.Pfeiffer, Eric. "CIA outwits impersonators by embracing Twitter, Facebook" Yahoo News, June 6, 2014.
Outsourcing and privatization
Many of the duties and functions of Intelligence Community activities, not the CIA alone, are being outsourced and privatized. Mike McConnell, former Director of National Intelligence, was about to publicize an investigation report of outsourcing by U.S. intelligence agencies, as required by Congress. However, this report was then classified. Hillhouse speculates that this report includes requirements for the CIA to report:
different standards for government employees and contractors;
contractors providing similar services to government workers;
analysis of costs of contractors vs. employees;
an assessment of the appropriateness of outsourced activities;
an estimate of the number of contracts and contractors;
comparison of compensation for contractors and government employees;
attrition analysis of government employees;
descriptions of positions to be converted back to the employee model;
an evaluation of accountability mechanisms;
an evaluation of procedures for "conducting oversight of contractors to ensure identification and prosecution of criminal violations, financial waste, fraud, or other abuses committed by contractors or contract personnel"; and
an "identification of best practices of accountability mechanisms within service contracts."
According to investigative journalist Tim Shorrock:
Congress has required an outsourcing report by March 30, 2008.
Part of the contracting problem comes from Congressional restrictions on the number of employees in the IC. According to Hillhouse, this resulted in 70% of the de facto workforce of the CIA's National Clandestine Service being made up of contractors. "After years of contributing to the increasing reliance upon contractors, Congress is now providing a framework for the conversion of contractors into federal government employees—more or less."
As with most government agencies, building equipment often is contracted. The National Reconnaissance Office (NRO), responsible for the development and operation of airborne and spaceborne sensors, long was a joint operation of the CIA and the United States Department of Defense. The NRO had been significantly involved in the design of such sensors, but the NRO, then under DCI authority, contracted out more of the design than had been its tradition, and to a contractor without extensive reconnaissance experience, Boeing. The next-generation Future Imagery Architecture satellite project, which missed objectives after $4 billion in cost overruns, was the result of this contract.
Some of the cost problems associated with intelligence come from one agency, or even a group within an agency, not accepting the compartmented security practices for individual projects, requiring expensive duplication.
Controversies
thumb|Supplemental material used in Maxwell Taylor's report on the Bay of Pigs invasion
The CIA: A Forgotten History by William Blum and Legacy of Ashes: The History of the CIA by Tim Weiner have accused the CIA of various covert actions and human rights abuses. The CIA has responded to the claims made in Weiner's book, and Jeffrey T. Richelson of the National Security Archive has also been critical of it. Intelligence expert David Wise faulted Weiner for portraying Allen Dulles as "a doddering old man" rather than the "shrewd professional spy" he knew and for refusing "to concede that the agency's leaders may have acted from patriotic motives or that the CIA ever did anything right," but concluded: "Legacy of Ashes succeeds as both journalism and history, and it is must reading for anyone interested in the CIA or American intelligence since World War II."
Extraordinary rendition
thumb|The US Senate Report on CIA Detention Interrogation Program that details the use of torture during CIA detention and interrogation.
Extraordinary rendition is the apprehension and extrajudicial transfer of a person from one country to another.Michael John Garcia, Legislative Attorney American Law Division. Renditions: Constraints Imposed by Laws on Torture September 8, 2009; link from the United States Counter-Terrorism Training and Resources for Law Enforcement web site
The term "torture by proxy" is used by some critics to describe situations in which the CIA"Background Paper on CIA's Combined Use of Interrogation Techniques". December 30, 2004. Retrieved January 2, 2010."New CIA Docs Detail Brutal 'Extraordinary Rendition' Process". Huffington Post. August 28, 2009. Retrieved January 2, 2010.Fact sheet: Extraordinary rendition, American Civil Liberties Union. Retrieved March 29, 2007 and other US agencies have transferred suspected terrorists to countries known to employ torture, whether they meant to enable torture or not. It has been claimed, though, that torture has been employed with the knowledge or acquiescence of US agencies (a transfer of anyone to anywhere for the purpose of torture is a violation of US law), although Condoleezza Rice (then the United States Secretary of State) stated that:
the United States has not transported anyone, and will not transport anyone, to a country when we believe he will be tortured. Where appropriate, the United States seeks assurances that transferred persons will not be tortured.
Whilst the Obama administration has tried to distance itself from some of the harshest counterterrorism techniques, it has also said that at least some forms of renditions will continue."Obama preserves renditions as counter-terrorism tool". LA Times February 1, 2009. Access November 21, 2011. Currently the administration continues to allow rendition only "to a country with jurisdiction over that individual (for prosecution of that individual)" when there is a diplomatic assurance "that they will not be treated inhumanely." Panetta's clarification of current US "Rendition policy".
The US programme has also prompted several official investigations in Europe into alleged secret detentions and unlawful inter-state transfers involving Council of Europe member states. A June 2006 report from the Council of Europe estimated 100 people had been kidnapped by the CIA on EU territory (with the cooperation of Council of Europe members), and rendered to other countries, often after having transited through secret detention centres ("black sites") used by the CIA, some located in Europe. According to the separate European Parliament report of February 2007, the CIA has conducted 1,245 flights, many of them to destinations where suspects could face torture, in violation of article 3 of the United Nations Convention Against Torture.Resolution 1507 (2006). Alleged secret detentions and unlawful inter-state transfers of detainees involving Council of Europe member states
Following the 11 September 2001 attacks the United States, in particular the CIA, has been accused of rendering hundreds of people suspected by the government of being terrorists—or of aiding and abetting terrorist organisations—to third-party states such as Egypt, Jordan, Morocco, and Uzbekistan. Such "ghost detainees" are kept outside judicial oversight, often without ever entering US territory, and may or may not ultimately be devolved to the custody of the United States.Mayer, Jane. The New Yorker, February 14, 2005. According to former CIA case officer Bob Baer, "If you want a serious interrogation, you send a prisoner to Jordan. If you want them to be tortured, you send them to Syria. If you want someone to disappear—never to see them again—you send them to Egypt." The CIA's Rendition Flights to Secret Prisons: The Torture-Go-Round By Lila Rajiva in CounterPunch, December 5, 2005
On October 4, 2001, a secret arrangement was made in Brussels by all members of NATO. Lord George Robertson, British defence secretary and later NATO's secretary-general, would later explain that NATO members agreed to provide "blanket overflight clearances for the United States and other allies' aircraft for military flights related to operations against terrorism."
Security failures
thumb|Critics assert that funding the Afghan mujahideen (Operation Cyclone) played a role in causing the September 11 attacks.
On December 30, 2009, a suicide attack occurred at Forward Operating Base Chapman in the province of Khost, Afghanistan. Seven CIA officers, including the chief of the base, were killed and six others seriously wounded in the attack.Rubin, Alissa J.; Mazzetti, Mark (December 31, 2009). "Afghan Base Hit by Attack Has Pivotal Role in Conflict". New York Times. NYtimes.com. Retrieved January 1, 2010.
Counterintelligence failures
Perhaps the most disruptive period involving counterintelligence was James Jesus Angleton's search for a mole, based on the statements of a Soviet defector, Anatoliy Golitsyn. A second defector, Yuri Nosenko, challenged Golitsyn's claims, with the two calling one another Soviet double agents. Many CIA officers fell under career-ending suspicion; the details of the relative truths and untruths from Nosenko and Golitsyn may never be released, or, in fact, may not be fully understood. The accusations also crossed the Atlantic to the British intelligence services, who also were damaged by molehunts.
Edward Lee Howard and David Henry Barnett, both field operations officers, sold secrets to the Soviets, as did William Kampiles, a low-level worker in the CIA 24-hour Operations Center. Kampiles sold the Soviets the detailed operational manual for the KH-11 reconnaissance satellite.
Human rights concerns
thumb|Operation Condor participants. Green: active members. Blue: collaborator (USA).
The CIA has been criticized for, at times, using torture; funding and training groups and organizations that would later participate in the killing of civilians and other non-combatants, or would attempt or succeed in overthrowing democratically elected governments; conducting human experimentation; and carrying out targeted killings and assassinations. The CIA has also been accused of a lack of financial and whistleblower controls, which has led to waste and fraud.
The Institute on Medicine as a Profession and the non-profit organization Open Society Foundations reviewed public records regarding the alleged complicity of medical professionals in the abuse of prisoners suspected of terrorism who were held in U.S. custody during the years after 9/11. The reports found that health professionals "Aided cruel and degrading interrogations; Helped devise and implement practices designed to maximize disorientation and anxiety so as to make detainees more malleable for interrogation; and Participated in the application of excruciatingly painful methods of force-feeding of mentally competent detainees carrying out hunger strikes". Medical professionals were sometimes used at black sites to monitor detainee health. Whether or not the physicians were compelled is an open question.
External investigations and document releases
Several investigations (e.g., the Church Committee, Rockefeller Commission, Pike Committee, etc.) have been conducted about the CIA, and many documents have been declassified.
Influencing public opinion and law enforcement
The CIA sometimes finds itself in conflict with other parts of the government when there is disagreement over the legality of specific covert programs. There is always the risk that one part of the government may make the covert operations of another part of the government public.
Drug trafficking
Two offices of CIA Directorate of Analysis have analytical responsibilities in this area. The Office of Transnational Issues applies unique functional expertise to assess existing and emerging threats to U.S. national security and provides the most senior U.S. policymakers, military planners, and law enforcement with analysis, warning, and crisis support.
CIA Crime and Narcotics Center researches information on international narcotics trafficking and organized crime for policymakers and the law enforcement community. Since CIA has no domestic police authority, it sends its analytic information to the Federal Bureau of Investigation (FBI), Immigration and Customs Enforcement (ICE) and other law enforcement organizations, such as the Drug Enforcement Administration (DEA) and the Office of Foreign Assets Control of the United States Department of the Treasury (OFAC).
Another part of CIA, the Directorate of Operations, collects human intelligence (HUMINT) in these areas.
Research by Dr. Alfred W. McCoy, Gary Webb, and others has pointed to CIA involvement in narcotics trafficking across the globe, although the CIA officially denies such allegations.Gary Webb, Dark Alliance. Solomon, Norman (January/February 1997). Extra! During the Cold War, when numerous soldiers participated in the transport of Southeast Asian heroin to the United States by the airline Air America, the CIA's role in such traffic was reportedly rationalized as "recapture" of related profits to prevent possible enemy control of such assets.
Alleged lying to Congress
Former Speaker of the United States House of Representatives Nancy Pelosi has stated that the CIA repeatedly misled Congress since 2001 about waterboarding and other torture, though Pelosi admitted to having been told about the programs.BBC News, May 14, 2009, "Pelosi says CIA lied on 'torture'" News.bbc.co.uk Six members of Congress have claimed that Director of the CIA Leon Panetta admitted that over a period of several years since 2001 the CIA deceived Congress, including affirmatively lying to it. Some congressmen believe that these "lies" to Congress are similar to CIA lies to Congress from earlier periods.BBC News, July 9, 2009, "CIA 'often lied to congressmen'" News.bbc.co.uk
Covert programs hidden from Congress
On July 10, 2009, House Intelligence subcommittee Chairwoman Representative Jan Schakowsky (D, IL) announced the termination of an unnamed CIA covert program described as "very serious" in nature which had been kept secret from Congress for eight years.
CIA Director Panetta had ordered an internal investigation to determine why Congress had not been informed about the covert program. Chairman of the House Intelligence Committee Representative Silvestre Reyes announced that he was considering an investigation into alleged CIA violations of the National Security Act, which requires, with limited exceptions, that Congress be informed of covert activities. Investigations and Oversight Subcommittee Chairwoman Schakowsky indicated that she would forward a request for a congressional investigation to HPSCI Chairman Silvestre Reyes.
As mandated by Title 50 of the United States Code, Chapter 15, Subchapter III, when it becomes necessary to limit access to covert operations findings that could affect vital interests of the U.S., the President must, as soon as possible, report at a minimum to the Gang of Eight (the leaders of each of the two parties from both the Senate and House of Representatives, and the chairs and ranking members of the Senate and House intelligence committees). The House was expected to support the 2010 Intelligence Authorization Bill, including a provision that would require the President to inform more than 40 members of Congress about covert operations; the Obama administration threatened to veto the final version of a bill that included such a provision. On July 16, 2008, the fiscal 2009 Intelligence Authorization Bill was approved by a House majority, containing stipulations that 75% of money sought for covert actions would be withheld until all members of the House Intelligence panel were briefed on sensitive covert actions. Under the George W. Bush administration, senior advisers to the President issued a statement indicating that if a bill containing this provision reached the President, they would recommend that he veto it.
According to leaks by anonymous government officials on July 23, the program was rumored to be an assassination program, but this remains unconfirmed. "The whole committee was stunned.... I think this is as serious as it gets," stated Anna Eshoo, Chairman, Subcommittee on Intelligence Community Management, U.S. House Permanent Select Committee on Intelligence (HPSCI).
Allegations by Director Panetta indicate that details of a secret counterterrorism program were withheld from Congress under orders from former U.S. Vice President Dick Cheney. This prompted Senator Feinstein and Senator Patrick Leahy, chairman of the Senate Judiciary Committee, to insist that no one should go outside the law. "The agency hasn't discussed publicly the nature of the effort, which remains classified," said agency spokesman Paul Gimigliano.
The Wall Street Journal reported, citing former intelligence officials familiar with the matter, that the program was an attempt to carry out a 2001 presidential authorization to capture or kill al-Qaeda operatives.
Intelligence Committee investigation
On July 17, 2009, the House Intelligence Committee said it was launching a formal investigation into the secret program. Representative Silvestre Reyes announced the probe will look into "whether there was any past decision or direction to withhold information from the committee".
Congresswoman Jan Schakowsky (D, IL), Chairman of the Subcommittee on Oversight and Investigations, who called for the investigation, stated that the investigation was intended to address CIA failures to inform Congress fully or accurately about four issues: C.I.A. involvement in the downing of a missionary plane mistaken for a narcotics flight in Peru in 2001, and two "matters that remain classified", as well as the rumored-assassinations question. In addition, the inquiry is likely to look at the Bush administration's program of eavesdropping without warrants and its detention and interrogation program. U.S. Intelligence Chief Dennis Blair testified before the House Intelligence Committee on February 3, 2010, that the U.S. intelligence community is prepared to kill U.S. citizens if they threaten other Americans or the United States. The American Civil Liberties Union has said this policy is "particularly troubling" because U.S. citizens "retain their constitutional right to due process even when abroad." The ACLU also "expressed serious concern about the lack of public information about the policy and the potential for abuse of unchecked executive power."
Improper search of computers used by Senate investigators
In July 2014 CIA Director John O. Brennan had to apologize to lawmakers because five CIA employees (two lawyers and three computer specialists) had surreptitiously searched Senate Intelligence Committee files and reviewed some committee staff members' e-mail on computers that were supposed to be exclusively for congressional investigators. Brennan ordered the creation of an internal personnel board, led by former senator Evan Bayh, to review the agency employees' conduct and determine "potential disciplinary measures." However, according to some reports, Brennan didn't apologize for spying or doing anything wrong at all, even though his agency had been improperly accessing computers of the Senate Select Intelligence Committee (SSCI) and then, in the words of investigative reporter Dan Froomkin, "speaking a lie". This accusation was based on the CIA Director's earlier denials of Senator Dianne Feinstein's claims that the surreptitious CIA search of the SSCI computers occurred, was inappropriate, or "violated the separation of powers principles embodied in the United States Constitution, including the Speech and Debate clause" or other laws.
In fiction
Fictional depictions of the CIA exist in many books, films and video games. Some fiction draws, at least in parts, on actual historical events, while other works are entirely fictional. Films include Charlie Wilson's War (2007), based on the story of U.S. Congressman Charlie Wilson and CIA operative Gust Avrakotos, who supported the Afghan mujahideen, and The Good Shepherd (2006), a fictional spy film produced and directed by Robert De Niro based loosely on the development of counter-intelligence in the CIA. The fictional character Jack Ryan in Tom Clancy's books is a CIA analyst. Graham Greene's The Quiet American is about a CIA agent operating in Southeast Asia. Fictional depictions of the CIA are also used in video games, such as Tom Clancy's Splinter Cell, Call of Duty: Modern Warfare 2 and Call of Duty: Black Ops.
See also
Abu Omar case
Blue sky memo
History of the Central Intelligence Agency
George Bush Center for Intelligence
Intellipedia
Kryptos
National Intelligence Board
Operation Peter Pan
Project MKUltra
Reagan Doctrine
Title 32 of the Code of Federal Regulations
U.S. Army and CIA interrogation manuals
United States and state-sponsored terrorism
United States Department of Homeland Security
United States Intelligence Community
The World Factbook, published by the CIA
Notes
References
Further reading
Dujmovic, Nicholas, "Drastic Actions Short of War: The Origins and Application of CIA's Covert Paramilitary Function in the Early Cold War," Journal of Military History, 76 (July 2012), 775–808
External links
CIA Freedom of Information Act Electronic Reading Room
Landscapes of Secrecy: The CIA in History, Fiction and Memory (2011)
Review of Legacy of Ashes: The History of CIA, CIA (June 26, 2008)
Central Intelligence Collection at Internet Archive
Category:1947 establishments in the United States
Category:Government agencies established in 1947
Category:McLean, Virginia
Category:United States intelligence agencies
Category:Cold War in popular culture
Film speed
Film speed is the measure of a photographic film's sensitivity to light, determined by sensitometry and measured on various numerical scales, the most recent being the ISO system. A closely related ISO system is used to measure the sensitivity of digital imaging systems.
Relatively insensitive film, with a correspondingly lower speed index, requires more exposure to light to produce the same image density as a more sensitive film, and is thus commonly termed a slow film. Highly sensitive films are correspondingly termed fast films. In both digital and film photography, the reduction of exposure corresponding to use of higher sensitivities generally leads to reduced image quality (via coarser film grain or higher image noise of other types). In short, the higher the sensitivity, the grainier the image will be. Ultimately sensitivity is limited by the quantum efficiency of the film or sensor.
thumb|right|This film container denotes its speed as ISO 100/21°, including both arithmetic (100 ASA) and logarithmic (21 DIN) components. The second is often dropped, making (e.g.) "ISO 100" effectively equivalent to the older ASA speed. (As is common, the "100" in the film name alludes to its ISO rating).
Film speed measurement systems
Historical systems
Warnerke
The first known practical sensitometer, which allowed measurements of the speed of photographic materials, was invented by the Polish engineer Leon Warnerke – pseudonym of Władysław Małachowski (1837–1900) – in 1880, among the achievements for which he was awarded the Progress Medal of the Photographic Society of Great Britain in 1882. It was commercialized from 1881.
The Warnerke Standard Sensitometer consisted of a frame holding an opaque screen with an array of typically 25 numbered, gradually pigmented squares brought into contact with the photographic plate during a timed test exposure under a phosphorescent tablet excited beforehand by the light of a burning magnesium ribbon. The speed of the emulsion was then expressed in 'degrees' Warnerke (sometimes seen as Warn. or °W.) corresponding with the last number visible on the exposed plate after development and fixation. Each number represented an increase of 1/3 in speed; typical plate speeds were between 10° and 25° Warnerke at the time.
His system saw some success but proved to be unreliable due to its spectral sensitivity to light, the fading intensity of the light emitted by the phosphorescent tablet after its excitation, as well as high manufacturing tolerances. The concept, however, was later built upon in 1900 by Henry Chapman Jones (1855–1932) in the development of his plate tester and modified speed system.
Hurter & Driffield
Another early practical system for measuring the sensitivity of an emulsion was that of Hurter and Driffield (H&D), originally described in 1890, by the Swiss-born Ferdinand Hurter (1844–1898) and British Vero Charles Driffield (1848–1915). In their system, speed numbers were inversely proportional to the exposure required. For example, an emulsion rated at 250 H&D would require ten times the exposure of an emulsion rated at 2500 H&D.
The methods to determine the sensitivity were later modified in 1925 (in regard to the light source used) and in 1928 (regarding light source, developer and proportional factor)—this later variant was sometimes called "H&D 10". The H&D system was officially accepted as a standard in the former Soviet Union from 1928 until September 1951, when it was superseded by GOST 2817-50.
Scheiner
The Scheinergrade (Sch.) system was devised by the German astronomer Julius Scheiner (1858–1913) in 1894 originally as a method of comparing the speeds of plates used for astronomical photography. Scheiner's system rated the speed of a plate by the least exposure to produce a visible darkening upon development. Speed was expressed in degrees Scheiner, originally ranging from 1° Sch. to 20° Sch., where an increment of 19° Sch. corresponded to a hundredfold increase in sensitivity, which meant that an increment of 3° Sch. came close to a doubling of sensitivity.
The system was later extended to cover larger ranges, and some of its practical shortcomings were addressed by the Austrian scientist Josef Maria Eder (1855–1944) and the Flemish-born botanist Walter Hecht (1896–1960), who in 1919/1920 jointly developed their Eder–Hecht neutral wedge sensitometer measuring emulsion speeds in Eder–Hecht grades. Still, it remained difficult for manufacturers to reliably determine film speeds, often only by comparing with competing products, so that an increasing number of modified semi-Scheiner-based systems started to spread, which no longer followed Scheiner's original procedures and thereby defeated the idea of comparability.
Scheiner's system was eventually abandoned in Germany, when the standardized DIN system was introduced in 1934. In various forms, it continued to be in widespread use in other countries for some time.
DIN
The DIN system, officially DIN standard 4512 by Deutsches Institut für Normung (but still named Deutscher Normenausschuß (DNA) at this time), was published in January 1934. It grew out of drafts for a standardized method of sensitometry, put forward by the committee for sensitometry since 1930 and presented by Robert Luther (1868–1945) and Emanuel Goldberg (1881–1970) at the influential VIII. International Congress of Photography held in Dresden from August 3 to 8, 1931.
The DIN system was inspired by Scheiner's system, but the sensitivities were represented as the base 10 logarithm of the sensitivity multiplied by 10, similar to decibels. Thus an increase of 20° (and not 19° as in Scheiner's system) represented a hundredfold increase in sensitivity, and a difference of 3° was much closer to a doubling of sensitivity, since 10 × log10(2) = 3.0103… ≈ 3.
As in the Scheiner system, speeds were expressed in 'degrees'. Originally the sensitivity was written as a fraction with 'tenths' (for example "18/10° DIN"), where the resultant value 1.8 represented the relative base 10 logarithm of the speed. 'Tenths' were later abandoned with DIN 4512:1957-11, and the example above would be written as "18° DIN". The degree symbol was finally dropped with DIN 4512:1961-10. This revision also saw significant changes in the definition of film speeds in order to accommodate then-recent changes in the American ASA PH2.5-1960 standard, so that film speeds of black-and-white negative film effectively would become doubled, that is, a film previously marked as "18° DIN" would now be labeled as "21 DIN" without emulsion changes.
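A minimal Python sketch of this logarithmic relationship (the function name and example values are illustrative and not taken from the standard):

```python
def din_sensitivity_ratio(din_a, din_b):
    """Relative sensitivity of an emulsion rated din_a versus one rated din_b.

    DIN degrees are ten times the base-10 logarithm of the sensitivity, so a
    difference of 10 degrees is a tenfold ratio and about 3 degrees a doubling.
    """
    return 10 ** ((din_a - din_b) / 10)

print(din_sensitivity_ratio(41, 21))  # 100.0 -> 20 degrees more = 100x the sensitivity
print(din_sensitivity_ratio(24, 21))  # ~1.995 -> 3 degrees more ~ twice the sensitivity
```

The same arithmetic explains why both the Scheiner and DIN scales treat roughly three degrees as one exposure stop.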
Originally only meant for black-and-white negative film, the system was later extended and regrouped into nine parts, including DIN 4512-1:1971-04 for black-and-white negative film, DIN 4512-4:1977-06 for color reversal film and DIN 4512-5:1977-10 for color negative film.
On an international level the German DIN 4512 system was effectively superseded in the 1980s by ISO 6:1974, ISO 2240:1982, and ISO 5800:1979, where the same sensitivity is written in linear and logarithmic form as "ISO 100/21°" (now again with degree symbol). These ISO standards were subsequently adopted by DIN as well. Finally, the latest DIN 4512 revisions were replaced by corresponding ISO standards: DIN 4512-1:1993-05 by DIN ISO 6:1996-02 in September 2000, DIN 4512-4:1985-08 by DIN ISO 2240:1998-06, and DIN 4512-5:1990-11 by DIN ISO 5800:1998-06, both in July 2002.
BSI
The film speed scale recommended by the British Standards Institution (BSI) was almost identical to the DIN system except that the BS number was 10 degrees greater than the DIN number.
Weston
Before the advent of the ASA system, the system of Weston film speed ratings was introduced by Edward Faraday Weston (1878–1971) and his father Dr. Edward Weston (1850–1936), a British-born electrical engineer, industrialist and founder of the US-based Weston Electrical Instrument Corporation, with the Weston model 617, one of the earliest photo-electric exposure meters, in August 1932. The meter and film rating system were invented by William Nelson Goodwin, Jr., who worked for them and later received a Howard N. Potts Medal for his contributions to engineering.
The company tested and frequently published speed ratings for most films of the time. Weston film speed ratings could subsequently be found on most Weston exposure meters and were sometimes referred to by film manufacturers and third parties in their exposure guidelines. Since manufacturers were sometimes creative about film speeds, the company went as far as to warn users about unauthorized uses of their film ratings in their "Weston film ratings" booklets.
The Weston Cadet (model 852, introduced in 1949), Direct Reading (model 853, introduced in 1954) and Master III (models 737 and S141.3, introduced in 1956) were the first in their line of exposure meters to switch to the by-then established ASA scale. Other models used the original Weston scale up until ca. 1955. The company continued to publish Weston film ratings after 1955, but while their recommended values often differed slightly from the ASA film speeds found on film boxes, these newer Weston values were based on the ASA system and had to be converted for use with older Weston meters by subtracting 1/3 exposure stop, as per Weston's recommendation. Vice versa, "old" Weston film speed ratings could be converted into "new" Westons and the ASA scale by adding the same amount, that is, a film rating of 100 Weston (up to 1955) corresponded with 125 ASA (as per ASA PH2.5-1954 and before). This conversion was not necessary on Weston meters manufactured and Weston film ratings published since 1956 due to their inherent use of the ASA system; however, the changes of the ASA PH2.5-1960 revision may need to be taken into account when comparing with newer ASA or ISO values.
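As a rough illustration of that 1/3-stop correction, the following Python sketch converts an "old" Weston rating to the corresponding pre-1960 ASA value; the helper name is illustrative and the result is only approximate:

```python
def old_weston_to_asa(old_weston):
    """Approximate pre-1960 ASA speed for an 'old' Weston rating (add 1/3 stop)."""
    return old_weston * 2 ** (1 / 3)

print(round(old_weston_to_asa(100)))  # 126, close to the published 125 ASA
```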
General Electric
Prior to the establishment of the ASA scale, and similar to Weston film speed ratings, another manufacturer of photo-electric exposure meters, General Electric, developed its own rating system of so-called General Electric film values (often abbreviated as G-E or GE) around 1937.
Film speed values for use with their meters were published in regularly updated General Electric Film Values leaflets and in the General Electric Photo Data Book.
General Electric switched to the ASA scale in 1946. Meters manufactured since February 1946 were already equipped with the ASA scale (labeled "Exposure Index"). For some of the older meters with scales in "Film Speed" or "Film Value" (e.g. models DW-48, DW-49 as well as early DW-58 and GW-68 variants), replaceable hoods with ASA scales were available from the manufacturer. The company continued to publish recommended film values after that date; however, they were then aligned to the ASA scale.
ASA
Based on earlier research work by Loyd Ancile Jones (1884–1954) of Kodak and inspired by the systems of Weston film speed ratings and General Electric film values, the American Standards Association (now named ANSI) defined a new method to determine and specify film speeds of black-and-white negative films in 1943. ASA Z38.2.1-1943 was revised in 1946 and 1947 before the standard grew into ASA PH2.5-1954. Originally, ASA values were frequently referred to as American standard speed numbers or ASA exposure-index numbers. (See also: Exposure Index (EI).)
The ASA scale is a linear scale, that is, a film denoted as having a film speed of 200 ASA is twice as fast as a film with 100 ASA.
The ASA standard underwent a major revision in 1960 with ASA PH2.5-1960, when the method to determine film speed was refined and previously applied safety factors against under-exposure were abandoned, effectively doubling the nominal speed of many black-and-white negative films. For example, an Ilford HP3 that had been rated at 200 ASA before 1960 was labeled 400 ASA afterwards without any change to the emulsion. Similar changes were applied to the DIN system with DIN 4512:1961-10 and the BS system with BS 1380:1963 in the following years.
In addition to the established arithmetic speed scale, ASA PH2.5-1960 also introduced logarithmic ASA grades (100 ASA = 5° ASA), where a difference of 1° ASA represented a full exposure stop and therefore the doubling of a film speed. For a while, ASA grades were also printed on film boxes, and they lived on in the form of the APEX speed value Sv (without degree symbol) as well.
ASA PH2.5-1960 was revised as ANSI PH2.5-1979, without the logarithmic speeds, and later replaced by NAPM IT2.5-1986 of the National Association of Photographic Manufacturers, which represented the US adoption of the international standard ISO 6. The latest issue of ANSI/NAPM IT2.5 was published in 1993.
The standard for color negative film was introduced as ASA PH2.27-1965 and saw a string of revisions in 1971, 1976, 1979 and 1981, before it finally became ANSI IT2.27-1988 prior to its withdrawal.
Color reversal film speeds were defined in ANSI PH2.21-1983, which was revised in 1989 before it became ANSI/NAPM IT2.21 in 1994, the US adoption of the ISO 2240 standard.
On an international level, the ASA system was superseded by the ISO film speed system between 1982 and 1987, however, the arithmetic ASA speed scale continued to live on as the linear speed value of the ISO system.
GOST
GOST (Cyrillic: ГОСТ) was an arithmetic film speed scale defined in GOST 2817-45 and GOST 2817-50. It was used in the former Soviet Union from October 1951, replacing Hurter & Driffield (H&D, Cyrillic: ХиД) numbers, which had been used since 1928.
GOST 2817-50 was similar to the ASA standard, having been based on a speed point at a density 0.2 above base plus fog, as opposed to the ASA's 0.1. GOST markings are only found on pre-1987 photographic equipment (film, cameras, lightmeters, etc.) of Soviet Union manufacture.
On 1 January 1987, the GOST scale was realigned to the ISO scale with GOST 10691-84. This evolved into multiple parts, including GOST 10691.6-88 and GOST 10691.5-88, which both became functional on 1 January 1991.
Current system: ISO
The ASA and DIN film speed standards have been combined into the ISO standards since 1974.
The current International Standard for measuring the speed of color negative film is ISO 5800:2001 (first published in 1979, revised in November 1987) from the International Organization for Standardization (ISO). Related standards ISO 6:1993 (first published in 1974) and ISO 2240:2003 (first published in July 1982, revised in September 1994, and corrected in October 2003) define scales for speeds of black-and-white negative film and color reversal film, respectively.
The determination of ISO speeds with digital still-cameras is described in ISO 12232:2006 (first published in August 1998, revised in April 2006, and corrected in October 2006).
The ISO system defines both an arithmetic and a logarithmic scale. The arithmetic ISO scale corresponds to the arithmetic ASA system, where a doubling of film sensitivity is represented by a doubling of the numerical film speed value. In the logarithmic ISO scale, which corresponds to the DIN scale, adding 3° to the numerical value constitutes a doubling of sensitivity. For example, a film rated ISO 200/24° is twice as sensitive as one rated ISO 100/21°.
Commonly, the logarithmic speed is omitted; for example, "ISO 100" denotes "ISO 100/21°", while logarithmic ISO speeds are written as "ISO 21°" as per the standard.
Conversion between current scales
Conversion from arithmetic speed S to logarithmic speed S° is given by
S° = 10 log S + 1
and rounding to the nearest integer; the log is base 10. Conversion from logarithmic speed S° to arithmetic speed S is given by
S = 10^((S° − 1)/10)
and rounding to the nearest standard arithmetic speed in Table 1 below.
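A minimal Python sketch of these two conversions, assuming the formulas above; the function names and the abbreviated list of standard speeds are illustrative rather than taken from the standard:

```python
import math

# Abbreviated list of standard arithmetic speeds (see Table 1), used for rounding.
STANDARD_SPEEDS = [6, 8, 10, 12, 16, 20, 25, 32, 40, 50, 64, 80, 100, 125, 160,
                   200, 250, 320, 400, 500, 640, 800, 1000, 1250, 1600, 2000,
                   2500, 3200]

def arithmetic_to_logarithmic(speed):
    """S° = 10 * log10(S) + 1, rounded to the nearest integer."""
    return round(10 * math.log10(speed) + 1)

def logarithmic_to_arithmetic(degrees):
    """Invert the relation and round to the nearest standard arithmetic speed."""
    s = 10 ** ((degrees - 1) / 10)
    return min(STANDARD_SPEEDS, key=lambda std: abs(std - s))

print(arithmetic_to_logarithmic(100))  # 21  -> ISO 100/21°
print(arithmetic_to_logarithmic(200))  # 24  -> ISO 200/24°
print(logarithmic_to_arithmetic(27))   # 400 -> ISO 400/27°
```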
+ Table 1. Comparison of various film speed scales APEX Sv (1960–) ISO (1974–)arith./log.° Camera mfrs. (2009–) ASA (1960–1987)arith. DIN (1961–2002)log. GOST (1951–1986)arith. Example of film stockwith this nominal speed −2 0.8/0° 0.8 0 1/1° 1 1 (1) 1.2/2° 1.2 2 (1) −1 1.6/3° 1.6 3 1.4 2/4° 2 4 (2) 2.5/5° 2.5 5 (2) 0 3/6° 3 6 2.8 4/7° 4 7 (4) 5/8° 5 8 (4) 1 6/9° 6 9 5.5 original Kodachrome 8/10° 8 10 (8) Polaroid PolaBlue 10/11° 10 11 (8) Kodachrome 8 mm film 2 12/12° 12 12 11 Gevacolor 8 mm reversal film, later Agfa Dia-Direct 16/13° 16 13 (16) Agfacolor 8 mm reversal film 20/14° 20 14 (16) Adox CMS 20 3 25/15° 25 15 22 old Agfacolor, Kodachrome II and (later) Kodachrome 25, Efke 25 32/16° 32 16 (32) Kodak Panatomic-X 40/17° 40 17 (32) Kodachrome 40 (movie) 4 50/18° 50 18 45 Fuji RVP (Velvia), Ilford Pan F Plus, Kodak Vision2 50D 5201 (movie), AGFA CT18, Efke 50, Polaroid type 55 64/19° 64 19 (65) Kodachrome 64, Ektachrome-X, Polaroid type 64T 80/20° 80 20 (65) Ilford Commercial Ortho, Polaroid type 669 5 100/21° 100 21 90 Kodacolor Gold, Kodak T-Max (TMX), Fujichrome Provia 100F, Efke 100, Fomapan/Arista 100 125/22° 125 22 (130) Ilford FP4+, Kodak Plus-X Pan, Svema Color 125 160/23° 160 23 (130) Fujicolor Pro 160C/S, Kodak High-Speed Ektachrome, Kodak Portra 160NC and 160VC 6 200/24° 200 24 180 Fujicolor Superia 200, Agfa Scala 200x, Fomapan/Arista 200, Wittner Chrome 200D/Agfa Aviphot Chrome 200 PE1 250/25° 250 25 (250) Tasma Foto-250 320/26° 320 26 (250) Kodak Tri-X Pan Professional (TXP) 7 400/27° 400 27 350 Kodak T-Max (TMY), Kodak Tri-X 400, Ilford HP5+, Fujifilm Superia X-tra 400, Fujichrome Provia 400X, Fomapan/Arista 400 500/28° 500 28 (500) Kodak Vision3 500T 5219 (movie) 640/29° 640 29 (500) Polaroid 600 8 800/30° 800 30 700 Fuji Pro 800Z, Fuji Instax 1000/31° 1000 31 (1000) Kodak P3200 TMAX, Ilford Delta 3200 (see Marketing anomalies below) 1250/32° 1250 32 (1000) Kodak Royal-X Panchromatic 9 1600/33° 1600 33 1400 (1440) Fujicolor 1600 2000/34° 2000 34 (2000) 2500/35° 2500 35 (2000) 10 3200/36° 3200 36 2800 (2880) Konica 3200, Polaroid type 667, Fujifilm FP-3000B 4000/37° 37 (4000) 5000/38° 38 (4000) 11 6400/39° 6400 39 5600 8000/40° 10000/41° 12 12500/42° 12800 12500 No ISO speeds greater than 10000 have been assigned officially as of 2013. 16000/43° 20000/44° Polaroid type 612 13 25000/45° 25600 32000/46° 40000/47° 14 50000/48° 51200 64000/49° 80000/50° 15 100000/51° 102400 51 Nikon D3s and Canon EOS-1D Mark IV (2009) 125000/52° 160000/53° 16 200000/54° 204800 Canon EOS-1D X (2011), Nikon D4 (2012), Pentax 645Z (2014) 250000/55° 320000/56° 17 400000/57° 409600 Nikon D4s, Sony α ILCE-7S (2014), Canon EOS 1D X Mark II (2016) 500000/58° 640000/59° 18 800000/60° 1000000/61° 1250000/62° 19 1600000/63° 2000000/64° 2500000/65° 20 3200000/66° 3280000 Nikon D5 (2016) 4000000/67° 4560000 Canon ME20F-SH (2015)
Table notes:
Speeds shown in bold under APEX, ISO and ASA are values actually assigned in speed standards from the respective agencies; other values are calculated extensions to assigned speeds using the same progressions as for the assigned speeds.
APEX Sv values 1 to 10 correspond with logarithmic ASA grades 1° to 10° found in ASA PH2.5-1960.
ASA arithmetic speeds from 4 to 5 are taken from ANSI PH2.21-1979 (Table 1, p. 8).
ASA arithmetic speeds from 6 to 3200 are taken from ANSI PH2.5-1979 (Table 1, p. 5) and ANSI PH2.27-1979.
ISO arithmetic speeds from 4 to 3200 are taken from ISO 5800:1987 (Table "ISO speed scales", p. 4).
ISO arithmetic speeds from 6 to 10000 are taken from ISO 12232:1998 (Table 1, p. 9).
ISO 12232:1998 does not specify speeds greater than 10000. However, the upper limit for Snoise 10000 is given as 12500, suggesting that ISO may have envisioned a progression of 12500, 25000, 50000, and 100000, similar to that from 1250 to 10000. This is consistent with ASA PH2.12-1961. For digital cameras, Nikon, Canon, Sony, Pentax, and Fujifilm apparently chose to express the greater speeds in an exact power-of-2 progression from the highest previously realized speed (6400) rather than rounding to an extension of the existing progression.
Most of the modern 35 mm film SLRs support an automatic film speed range from ISO 25/15° to 5000/38° with DX-coded films, or ISO 6/9° to 6400/39° manually (without utilizing exposure compensation). The film speed range with support for TTL flash is smaller, typically ISO 12/12° to 3200/36° or less.
The Booster accessory for the Canon Pellix QL (1965) and Canon FT QL (1966) supported film speeds from 25 to 12800 ASA.
The film speed dial of the Canon A-1 (1978) supported a speed range from 6 to 12800 ASA (but already called ISO film speeds in the manual). On this camera exposure compensation and extreme film speeds were mutually exclusive.
The Leica R8 (1996) and R9 (2002) officially supported film speeds of 8000/40°, 10000/41° and 12800/42° (in the case of the R8) or 12500/42° (in the case of the R9), and utilizing its ±3 EV exposure compensation the range could be extended from ISO 0.8/0° to ISO 100000/51° in half exposure steps.
Digital camera manufacturers' arithmetic speeds from 12800 to 409600 are from specifications by Nikon (12800, 25600, 51200, 102400 in 2009, 204800 in 2012, 409600 in 2014), Canon (12800, 25600, 51200, 102400 in 2009, 204800 in 2011, 4000000 in 2015), Sony (12800 in 2009, 25600 in 2010, 409600 in 2014), Pentax (12800, 25600, 51200 in 2010, 102400, 204800 in 2014) and Fujifilm (12800 in 2011).
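The arithmetic, logarithmic (degree) and APEX Sv columns in Table 1 are tied together by simple formulas: the degree value is 10·log10(S) + 1 rounded to the nearest whole degree, and the APEX speed value rises by 1 for each doubling of arithmetic speed, with Sv = 5 at ISO 100. The following Python sketch reproduces these relationships (the function names are illustrative, not taken from any standard or library):

```python
import math

def din_degrees(arithmetic_speed):
    """Logarithmic (DIN/ISO degree) value for an arithmetic speed, e.g. 100 -> 21."""
    return round(10 * math.log10(arithmetic_speed) + 1)

def apex_sv(arithmetic_speed):
    """APEX speed value Sv, which rises by 1 per doubling; Sv = 5 corresponds to ISO 100."""
    return round(math.log2(arithmetic_speed / 3.125))

def arithmetic_from_degrees(degrees):
    """Nominal arithmetic speed for a degree value, e.g. 21 -> ~100
    (before rounding to the standard one-third-stop series)."""
    return 10 ** ((degrees - 1) / 10)

if __name__ == "__main__":
    for s in (6, 12, 25, 50, 100, 200, 400, 800, 1600, 3200, 6400):
        print(f"ISO {s:>5} -> {din_degrees(s)} degrees, APEX Sv {apex_sv(s)}")
```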
Historic ASA DIN conversion
thumb|300px|Film speed conversion table of the 50s
thumb|300px|Classic camera Tessina exposure guide
Before the ASA and DIN standards were unified, the conversion between ASA and DIN differed from the current one. The attached conversion table, from a 1952 photography book published in Singapore, 戴淮清《摄影入门》 (Dai Huaiqing, Introduction to Photography), converts DIN 21 to ASA 80 rather than ASA 100.
The exposure guides of some classic cameras also use the old conversion; for example, the exposure guide of the classic Tessina relates DIN 21 to ASA 80 and DIN 18 to ASA 40. Users of classic cameras who are unaware of this historical background may be confused.
Determining film speed
thumb|540px|right|ISO 6:1993 method of determining speed for black-and-white film.
Film speed is found from a plot of optical density vs. log of exposure for the film, known as the D–log H curve or Hurter–Driffield curve. There are typically five regions in the curve: the base + fog, the toe, the linear region, the shoulder, and the overexposed region. For black-and-white negative film, the "speed point" m is the point on the curve where density exceeds the base + fog density by 0.1, when the negative is developed so that a point n, where the log of exposure is 1.3 units greater than the exposure at point m, has a density 0.8 greater than the density at point m. The exposure Hm, in lux-s, is that for point m when the specified contrast condition is satisfied. The ISO arithmetic speed is then determined from S = 0.8/Hm.
This value is then rounded to the nearest standard speed in Table 1 of ISO 6:1993.
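As a minimal sketch of that determination, assuming the ISO 6 relation S = 0.8/Hm quoted above and using only a short, hard-coded excerpt of the standard speed series (both the helper names and the excerpt are illustrative):

```python
import math

# Excerpt of the standard one-third-stop arithmetic speed series.
STANDARD_SPEEDS = [25, 32, 40, 50, 64, 80, 100, 125, 160, 200,
                   250, 320, 400, 500, 640, 800, 1000, 1250, 1600]

def iso_arithmetic_speed(h_m_lux_seconds):
    """Arithmetic speed for black-and-white negative film: S = 0.8 / Hm."""
    return 0.8 / h_m_lux_seconds

def nearest_standard_speed(speed):
    """Round a computed speed to the closest standard value (closest in stops)."""
    return min(STANDARD_SPEEDS, key=lambda std: abs(math.log2(std / speed)))

# Example: a speed point at Hm = 0.0079 lx*s gives S = 101, reported as ISO 100.
print(nearest_standard_speed(iso_arithmetic_speed(0.0079)))
```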
Determining speed for color negative film is similar in concept but more complex because it involves separate curves for blue, green, and red. The film is processed according to the film manufacturer’s recommendations rather than to a specified contrast. ISO speed for color reversal film is determined from the middle rather than the threshold of the curve; it again involves separate curves for blue, green, and red, and the film is processed according to the film manufacturer’s recommendations.
Applying film speed
Film speed is used in the exposure equations to find the appropriate exposure parameters. Four variables are available to the photographer to obtain the desired effect: lighting, film speed, f-number (aperture size), and shutter speed (exposure time). The equation may be expressed as ratios, or, by taking the base-2 logarithm of both sides, as addition using the APEX system, in which every increment of 1 is a doubling of exposure; this increment is commonly known as a "stop". The f-number is the ratio of the lens focal length to the aperture diameter, the diameter itself being proportional to the square root of the aperture area. Thus, a lens set to f/1.4 allows twice as much light to strike the focal plane as a lens set to f/2. Therefore, each f-number factor of the square root of two (approximately 1.4) is also a stop, so lenses are typically marked in that progression: 1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22, 32, etc.
The ISO arithmetic speed has a useful property for photographers without the equipment for taking a metered light reading. Correct exposure will usually be achieved for a frontlighted scene in bright sun if the aperture of the lens is set to f/16 and the shutter speed is the reciprocal of the ISO film speed (e.g. 1/100 second for ISO 100 film). This is known as the sunny 16 rule.
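A small sketch of that rule of thumb (a guideline rather than anything from the ISO standards; the function name is made up for illustration):

```python
def sunny_16_shutter_time(iso_speed, f_number=16.0):
    """Approximate shutter time in seconds for a frontlit scene in bright sun.

    At f/16 the time is the reciprocal of the ISO speed; other apertures scale
    with the square of the f-number ratio, since exposure follows aperture area.
    """
    return (f_number / 16.0) ** 2 / iso_speed

# ISO 100 at f/16 -> 1/100 s; the same scene at f/8 -> about 1/400 s.
print(sunny_16_shutter_time(100), sunny_16_shutter_time(100, f_number=8))
```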
Exposure index
Exposure index, or EI, refers to the speed rating assigned to a particular film and shooting situation, at variance with the film's actual speed. It is used to compensate for equipment calibration inaccuracies or process variables, or to achieve certain effects. The exposure index may simply be called the speed setting, as compared to the speed rating.
For example, a photographer may rate an ISO 400 film at EI 800 and then use push processing to obtain printable negatives in low-light conditions. The film has been exposed at EI 800.
Another example occurs where a camera's shutter is miscalibrated and consistently overexposes or underexposes the film; similarly, a light meter may be inaccurate. One may adjust the EI rating accordingly in order to compensate for these defects and consistently produce correctly exposed negatives.
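For instance, if test rolls show that a given meter-and-shutter combination consistently overexposes by a known number of stops, the exposure index can be raised by the same number of stops. A back-of-the-envelope sketch (the helper is hypothetical, not a calibration procedure from any standard):

```python
def corrected_exposure_index(rated_speed, overexposure_stops):
    """EI compensating for a consistent exposure error.

    One stop of consistent overexposure is offset by doubling the EI;
    consistent underexposure is passed in as a negative number of stops.
    """
    return rated_speed * 2 ** overexposure_stops

# A system that overexposes ISO 400 film by one stop -> rate the film at EI 800.
print(corrected_exposure_index(400, 1))
```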
Reciprocity
Upon exposure, the amount of light energy that reaches the film determines the effect upon the emulsion. If the brightness of the light is multiplied by a factor and the exposure of the film decreased by the same factor by varying the camera's shutter speed and aperture, so that the energy received is the same, the film will be developed to the same density. This rule is called reciprocity. The systems for determining the sensitivity of an emulsion are possible because reciprocity holds. In practice, reciprocity works reasonably well for normal photographic films for the range of exposures from 1/1000 second to 1/2 second. However, this relationship breaks down outside these limits, a phenomenon known as reciprocity failure.
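Within the range where reciprocity holds, only the product of illuminance and time matters, so equivalent settings can be enumerated by trading stops between aperture and shutter, as the sketch below illustrates (the helper is hypothetical and ignores reciprocity failure at the extremes):

```python
def equivalent_settings(f_number, shutter_seconds, stops=2):
    """Aperture/shutter pairs giving the same exposure H = E * t.

    Opening up by one stop (dividing the f-number by sqrt(2)) is compensated
    by halving the exposure time, so the light energy reaching the film is
    unchanged.
    """
    pairs = []
    for k in range(-stops, stops + 1):
        pairs.append((round(f_number / (2 ** 0.5) ** k, 1),
                      shutter_seconds / (2 ** k)))
    return pairs

# Starting from f/8 at 1/125 s, this lists roughly f/16 @ 1/30 s up to f/4 @ 1/500 s.
print(equivalent_settings(8, 1 / 125))
```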
Film sensitivity and grain
thumb|Grainy high-speed B&W film negative
The size of silver halide grains in the emulsion affects film sensitivity, which is related to granularity because larger grains give film greater sensitivity to light. Fine-grain film, such as film designed for portraiture or copying original camera negatives, is relatively insensitive, or "slow", because it requires brighter light or a longer exposure than a "fast" film. Fast films, used for photographing in low light or capturing high-speed motion, produce comparatively grainy images.
Kodak has defined a "Print Grain Index" (PGI) to characterize film grain (color negative films only), based on perceptual just-noticeable difference of graininess in prints. They also define "granularity", a measurement of grain using an RMS measurement of density fluctuations in uniformly exposed film, measured with a microdensitometer with 48 micrometre aperture. Granularity varies with exposure — underexposed film looks grainier than overexposed film.
Marketing anomalies
Some high-speed black-and-white films, such as Ilford Delta 3200 and Kodak T-MAX P3200, are marketed with film speeds in excess of their true ISO speed as determined using the ISO testing method. For example, the Ilford product is actually an ISO 1000 film, according to its data sheet. The manufacturers do not indicate that the 3200 number is an ISO rating on their packaging. Kodak and Fuji also marketed E6 films designed for pushing (hence the "P" prefix), such as Ektachrome P800/1600 and Fujichrome P1600, both with a base speed of ISO 400.
Digital camera ISO speed and exposure index
thumb|300px|A CCD image sensor, 2/3 inch size
In digital camera systems, an arbitrary relationship between exposure and sensor data values can be achieved by setting the signal gain of the sensor. The relationship between the sensor data values and the lightness of the finished image is also arbitrary, depending on the parameters chosen for the interpretation of the sensor data into an image color space such as sRGB.
For digital photo cameras ("digital still cameras"), an exposure index (EI) rating—commonly called ISO setting—is specified by the manufacturer such that the sRGB image files produced by the camera will have a lightness similar to what would be obtained with film of the same EI rating at the same exposure. The usual design is that the camera's parameters for interpreting the sensor data values into sRGB values are fixed, and a number of different EI choices are accommodated by varying the sensor's signal gain in the analog realm, prior to conversion to digital. Some camera designs provide at least some EI choices by adjusting the sensor's signal gain in the digital realm. A few camera designs also provide EI adjustment through a choice of lightness parameters for the interpretation of sensor data values into sRGB; this variation allows different tradeoffs between the range of highlights that can be captured and the amount of noise introduced into the shadow areas of the photo.
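As a toy illustration of that usual design, each doubling of the EI setting corresponds to a doubling of the signal gain applied before analog-to-digital conversion; the base EI of 100 below is an assumption for illustration, not a value from any manufacturer's specification:

```python
def relative_analog_gain(exposure_index, base_ei=100):
    """Relative sensor signal gain for a chosen EI, assuming the camera's sRGB
    rendering is fixed and EI is varied purely through analog gain."""
    return exposure_index / base_ei

# EI 400 on a base-100 camera -> 4x gain; EI 1600 -> 16x gain.
print(relative_analog_gain(400), relative_analog_gain(1600))
```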
Digital cameras have far surpassed film in sensitivity to light, with ISO-equivalent speeds of up to 102,400, far beyond anything attainable in conventional film photography. Faster processors, as well as advances in software noise-reduction techniques, allow such noise reduction to be applied at the moment the photo is captured, letting photographers store images with a level of refinement that would have been prohibitively time-consuming to achieve with earlier generations of digital camera hardware.
The ISO 12232:2006 standard
The ISO standard ISO 12232:2006 gives digital still camera manufacturers a choice of five different techniques for determining the exposure index rating at each sensitivity setting provided by a particular camera model. Three of the techniques in ISO 12232:2006 are carried over from the 1998 version of the standard, while two new techniques allowing for measurement of JPEG output files are introduced from CIPA DC-004. Depending on the technique selected, the exposure index rating can depend on the sensor sensitivity, the sensor noise, and the appearance of the resulting image. The standard specifies the measurement of light sensitivity of the entire digital camera system and not of individual components such as digital sensors, although Kodak has reported using a variation to characterize the sensitivity of two of their sensors in 2001.
The Recommended Exposure Index (REI) technique, new in the 2006 version of the standard, allows the manufacturer to specify a camera model’s EI choices arbitrarily. The choices are based solely on the manufacturer’s opinion of what EI values produce well-exposed sRGB images at the various sensor sensitivity settings. This is the only technique available under the standard for output formats that are not in the sRGB color space. This is also the only technique available under the standard when multi-zone metering (also called pattern metering) is used.
The Standard Output Sensitivity (SOS) technique, also new in the 2006 version of the standard, effectively specifies that the average level in the sRGB image must be 18% gray plus or minus 1/3 stop when the exposure is controlled by an automatic exposure control system calibrated per ISO 2721 and set to the EI with no exposure compensation. Because the output level is measured in the sRGB output from the camera, it is only applicable to sRGB JPEG—and not to output files in raw image format. It is not applicable when multi-zone metering is used.
The CIPA DC-004 standard requires that Japanese manufacturers of digital still cameras use either the REI or SOS techniques, and DC-008 updates the Exif specification to differentiate between these values. Consequently, the three EI techniques carried over from ISO 12232:1998 are not widely used in recent camera models (approximately 2007 and later). As those earlier techniques did not allow for measurement from images produced with lossy compression, they cannot be used at all on cameras that produce images only in JPEG format.
The saturation-based (SAT or Ssat) technique is closely related to the SOS technique, with the sRGB output level being measured at 100% white rather than 18% gray. The SOS value is effectively 0.704 times the saturation-based value. Because the output level is measured in the sRGB output from the camera, it is only applicable to sRGB images—typically TIFF—and not to output files in raw image format. It is not applicable when multi-zone metering is used.
The two noise-based techniques have rarely been used for consumer digital still cameras. These techniques specify the highest EI that can be used while still providing either an "excellent" picture or a "usable" picture depending on the technique chosen.
Measurements and calculations
ISO speed ratings of a digital camera are based on the properties of the sensor and the image processing done in the camera, and are expressed in terms of the luminous exposure H (in lux seconds) arriving at the sensor. For a typical camera lens with an effective focal length f that is much smaller than the distance between the camera and the photographed scene, H is given by

H = q·L·t / N²

where L is the luminance of the scene (in candela per m²), t is the exposure time (in seconds), N is the aperture f-number, and

q = (π/4)·T·v(θ)·cos⁴θ

is a factor depending on the transmittance T of the lens, the vignetting factor v(θ), and the angle θ relative to the axis of the lens. A typical value is q = 0.65, based on θ = 10°, T = 0.9, and v = 0.98.
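A short sketch of those two relations, using the typical values quoted above (the function names are illustrative):

```python
import math

def lens_factor_q(transmittance=0.9, vignetting=0.98, theta_degrees=10.0):
    """q = (pi/4) * T * v(theta) * cos^4(theta); about 0.65 for the typical values."""
    theta = math.radians(theta_degrees)
    return (math.pi / 4) * transmittance * vignetting * math.cos(theta) ** 4

def sensor_exposure(luminance_cd_m2, exposure_time_s, f_number, q=None):
    """Luminous exposure at the sensor, H = q * L * t / N^2, in lux-seconds."""
    if q is None:
        q = lens_factor_q()
    return q * luminance_cd_m2 * exposure_time_s / f_number ** 2

# A 2000 cd/m^2 scene at 1/100 s and f/8 gives roughly H = 0.2 lx*s.
print(round(lens_factor_q(), 3), round(sensor_exposure(2000, 1 / 100, 8), 3))
```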
Saturation-based speed
The saturation-based speed is defined as

S_sat = 78 / H_sat

where H_sat is the maximum possible exposure that does not lead to a clipped or bloomed camera output. Typically, the lower limit of the saturation speed is determined by the sensor itself, but the gain of the amplifier between the sensor and the analog-to-digital converter can raise the saturation speed. The factor 78 is chosen such that exposure settings based on a standard light meter and an 18-percent reflective surface will result in an image with a grey level of 18%/√2 ≈ 12.7% of saturation. The factor √2 indicates that there is half a stop of headroom to deal with specular reflections that would appear brighter than a 100% reflecting white surface.
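A minimal sketch of that rating, assuming the clipping exposure H_sat has been measured for the sensor at a given gain setting:

```python
def saturation_based_speed(h_sat_lux_seconds):
    """S_sat = 78 / H_sat, where H_sat is the largest exposure that does not
    clip or bloom the camera output.  The constant 78 places a metered 18%
    grey at 18%/sqrt(2), i.e. about 12.7% of saturation, leaving half a stop
    of headroom for specular highlights."""
    return 78.0 / h_sat_lux_seconds

# A sensor clipping at 0.78 lx*s would be rated S_sat = 100.
print(saturation_based_speed(0.78))
```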
Noise-based speed
thumb|Digital noise at 3200 ISO vs. 100 ISO
The noise-based speed is defined as the exposure that will lead to a given signal-to-noise ratio on individual pixels. Two ratios are used, the 40:1 ("excellent image quality") and the 10:1 ("acceptable image quality") ratio. These ratios have been subjectively determined based on a resolution of 70 pixels per cm (178 DPI) when viewed at 25 cm (9.8 inch) distance. The signal-to-noise ratio is defined as the standard deviation of a weighted average of the luminance and color of individual pixels. The noise-based speed is mostly determined by the properties of the sensor and somewhat affected by the noise in the electronic gain and AD converter.
Standard output sensitivity (SOS)
In addition to the above speed ratings, the standard also defines the standard output sensitivity (SOS), specifying how the exposure is related to the digital pixel values in the output image. It is defined as

S_SOS = 10 / H_SOS

where H_SOS is the exposure that will lead to values of 118 in 8-bit pixels, which is 18 percent of the saturation value in images encoded as sRGB or with gamma = 2.2.
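A corresponding sketch, again assuming the relevant exposure has been measured; the comment restates the 0.704 relation to the saturation-based value mentioned earlier:

```python
def standard_output_sensitivity(h_sos_lux_seconds):
    """SOS = 10 / H_SOS, where H_SOS is the exposure producing a pixel value of
    118 in an 8-bit sRGB image (about 18% of the saturation level).  Per the
    text above, the SOS value works out to roughly 0.704 times the
    saturation-based speed for the same camera."""
    return 10.0 / h_sos_lux_seconds

# A camera reaching sRGB 118 at 0.1 lx*s would report SOS 100.
print(standard_output_sensitivity(0.1))
```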
Discussion
The standard specifies how speed ratings should be reported by the camera. If the noise-based speed (40:1) is higher than the saturation-based speed, the noise-based speed should be reported, rounded downwards to a standard value (e.g. 200, 250, 320, or 400). The rationale is that exposure according to the lower saturation-based speed would not result in a visibly better image. In addition, an exposure latitude can be specified, ranging from the saturation-based speed to the 10:1 noise-based speed. If the noise-based speed (40:1) is lower than the saturation-based speed, or undefined because of high noise, the saturation-based speed is specified, rounded upwards to a standard value, because using the noise-based speed would lead to overexposed images. The camera may also report the SOS-based speed (explicitly as being an SOS speed), rounded to the nearest standard speed rating.
For example, a camera sensor may have the following properties: , , and . According to the standard, the camera should report its sensitivity as
ISO 100 (daylight)
ISO speed latitude 50–1600
ISO 100 (SOS, daylight).
The SOS rating could be user controlled. For a different camera with a noisier sensor, the properties might be , , and . In this case, the camera should report
ISO 200 (daylight),
as well as a user-adjustable SOS value. In all cases, the camera should indicate the white balance setting for which the speed rating applies, such as daylight or tungsten (incandescent light).
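The reporting rule described in this section can be summarised in a short sketch; the rating table is only an excerpt and the function is an illustration of the logic, not the standard's wording:

```python
# Excerpt of standard speed ratings used for rounding reported values.
STANDARD_RATINGS = [50, 64, 80, 100, 125, 160, 200, 250, 320, 400,
                    500, 640, 800, 1000, 1250, 1600]

def reported_speed(s_noise_40_to_1, s_sat):
    """Speed a camera should report under the rule described above.

    If the 40:1 noise-based speed exceeds the saturation-based speed, report
    the noise-based speed rounded *down* to a standard value; otherwise report
    the saturation-based speed rounded *up*, so images are not overexposed.
    """
    if s_noise_40_to_1 is not None and s_noise_40_to_1 > s_sat:
        lower = [r for r in STANDARD_RATINGS if r <= s_noise_40_to_1]
        return max(lower) if lower else STANDARD_RATINGS[0]
    higher = [r for r in STANDARD_RATINGS if r >= s_sat]
    return min(higher) if higher else STANDARD_RATINGS[-1]

# With a 40:1 noise-based speed of 320 and a saturation-based speed of 180, report ISO 320.
print(reported_speed(320, 180))
```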
Despite these detailed standard definitions, cameras typically do not clearly indicate whether the user "ISO" setting refers to the noise-based speed, saturation-based speed, or the specified output sensitivity, or even some made-up number for marketing purposes. Because the 1998 version of ISO 12232 did not permit measurement of camera output that had lossy compression, it was not possible to correctly apply any of those measurements to cameras that did not produce sRGB files in an uncompressed format such as TIFF. Following the publication of CIPA DC-004 in 2006, Japanese manufacturers of digital still cameras are required to specify whether a sensitivity rating is REI or SOS.
As should be clear from the above, a greater SOS setting for a given sensor comes with some loss of image quality, just like with analog film. However, this loss is visible as image noise rather than grain. Current (January 2010) APS and 35mm sized digital image sensors, both CMOS and CCD based, do not produce significant noise until about ISO 1600.
See also
Frame rate
Lens speed
References
ISO 6:1974, ISO 6:1993 (1993-02). Photography — Black-and-white pictorial still camera negative film/process systems — Determination of ISO speed. Geneva: International Organization for Standardization.
ISO 2240:1982 (1982-07), ISO 2240:1994 (1994-09), ISO 2240:2003 (2003–10). Photography — Colour reversal camera films — Determination of ISO speed. Geneva: International Organization for Standardization.
ISO 2720:1974. General Purpose Photographic Exposure Meters (Photoelectric Type) — Guide to Product Specification. Geneva: International Organization for Standardization.
ISO 5800:1979, ISO 5800:1987 (1987-11), ISO 5800:1987/Cor 1:2001 (2001–06). Photography — Colour negative films for still photography — Determination of ISO speed. Geneva: International Organization for Standardization.
ISO 12232:1998 (1998-08), ISO 12232:2006 (2006-04-15), ISO 12232:2006 (2006-10-01). Photography — Digital still cameras — Determination of exposure index, ISO speed ratings, standard output sensitivity, and recommended exposure index. Geneva: International Organization for Standardization.
ASA Z38.2.1-1943, ASA Z38.2.1-1946, ASA Z38.2.1-1947 (1947-07-15). American Standard Method for Determining Photographic Speed and Speed Number. New York: American Standards Association. Superseded by ASA PH2.5-1954.
ASA PH2.5-1954, ASA PH2.5-1960. American Standard Method for Determining Speed of photographic Negative Materials (Monochrome, Continuous Tone). New York: United States of America Standards Institute (USASI). Superseded by ANSI PH2.5-1972.
ANSI PH2.5-1972, ANSI PH2.5-1979 (1979-01-01), ANSI PH2.5-1979(R1986). Speed of photographic negative materials (monochrome, continuous tone), method for determining. New York: American National Standards Institute. Superseded by NAPM IT2.5-1986.
NAPM IT2.5-1986, ANSI/ISO 6-1993 ANSI/NAPM IT2.5-1993 (1993-01-01). Photography — Black-and-White Pictorial Still Camera Negative Film/Process Systems — Determination of ISO Speed (same as ANSI/ISO 6-1993). National Association of Photographic Manufacturers. This represents the US adoption of ISO 6.
ASA PH2.12-1957, ASA PH2.12-1961. American Standard, General-Purpose Photographic Exposure Meters (photoelectric type). New York: American Standards Association. Superseded by ANSI PH3.49-1971.
ANSI PH2.21-1983 (1983-09-23), ANSI PH2.21-1983(R1989). Photography (Sensitometry) Color reversal camera films - Determination of ISO speed. New York: American Standards Association. Superseded by ANSI/ISO 2240-1994 ANSI/NAPM IT2.21-1994.
ANSI/ISO 2240-1994 ANSI/NAPM IT2.21-1994. Photography - Colour reversal camera films - determination of ISO speed. New York: American National Standards Institute. This represents the US adoption of ISO 2240.
ASA PH2.27-1965 (1965-07-06), ASA PH2.27-1971, ASA PH2.27-1976, ANSI PH2.27-1979, ANSI PH2.27-1981, ANSI PH2.27-1988 (1988-08-04). Photography - Colour negative films for still photography - Determination of ISO speed (withdrawn). New York: American Standards Association. Superseded by ANSI IT2.27-1988.
ANSI IT2.27-1988 (1994-08/09?). Photography Color negative films for still photography - Determination of ISO speed. New York: American National Standards Institute. Withdrawn. This represented the US adoption of ISO 5800.
ANSI PH3.49-1971, ANSI PH3.49-1971(R1987). American National Standard for general-purpose photographic exposure meters (photoelectric type). New York: American National Standards Institute. After several revisions, this standard was withdrawn in favor of ANSI/ISO 2720:1974.
ANSI/ISO 2720:1974, ANSI/ISO 2720:1974(R1994) ANSI/NAPM IT3.302-1994. General Purpose Photographic Exposure Meters (Photoelectric Type) — Guide to Product Specification. New York: American National Standards Institute. This represents the US adoption of ISO 2720.
BSI BS 1380:1947, BSI BS 1380:1963. Speed and exposure index. British Standards Institution. Superseded by BSI BS 1380-1:1973 (1973-12), BSI BS 1380-2:1984 (1984-09), BSI BS 1380-3:1980 (1980-04) and others.
BSI BS 1380-1:1973 (1973-12-31). Speed of sensitized photographic materials: Negative monochrome material for still and cine photography. British Standards Institution. Replaced by BSI BS ISO 6:1993, superseded by BSI BS ISO 2240:1994.
BSI BS 1380-2:1984 ISO 2240:1982 (1984-09-28). Speed of sensitized photographic materials. Method for determining the speed of colour reversal film for still and amateur cine photography. British Standards Institution. Superseded by BSI BS ISO 2240:1994.
BSI BS 1380-3:1980 ISO 5800:1979 (1980-04-30). Speed of sensitized photographic materials. Colour negative film for still photography. British Standards Institution. Superseded by BSI BS ISO 5800:1987.
BSI BS ISO 6:1993 (1995-03-15). Photography. Black-and-white pictorial still camera negative film/process systems. Determination of ISO speed. British Standards Institution. This represents the British adoption of ISO 6:1993.
BSI BS ISO 2240:1994 (1993-03-15), BSI BS ISO 2240:2003 (2004-02-11). Photography. Colour reversal camera films. Determination of ISO speed. British Standards Institution. This represents the British adoption of ISO 2240:2003.
BSI BS ISO 5800:1987 (1995-03-15). Photography. Colour negative films for still photography. Determination of ISO speed. British Standards Institution. This represents the British adoption of ISO 5800:1987.
DIN 4512:1934-01, DIN 4512:1957-11 (Blatt 1), DIN 4512:1961-10 (Blatt 1). Photographische Sensitometrie, Bestimmung der optischen Dichte. Berlin: Deutscher Normenausschuß (DNA). Superseded by DIN 4512-1:1971-04, DIN 4512-4:1977-06, DIN 4512-5:1977-10 and others.
DIN 4512-1:1971-04, DIN 4512-1:1993-05. Photographic sensitometry; systems of black and white negative films and their process for pictorial photography; determination of speed. Berlin: Deutsches Institut für Normung (before 1975: Deutscher Normenausschuß (DNA)). Superseded by DIN ISO 6:1996-02.
DIN 4512-4:1977-06, DIN 4512-4:1985-08. Photographic sensitometry; determination of the speed of colour reversal films. Berlin: Deutsches Institut für Normung. Superseded by DIN ISO 2240:1998-06.
DIN 4512-5:1977-10, DIN 4512-5:1990-11. Photographic sensitometry; determination of the speed of colour negative films. Berlin: Deutsches Institut für Normung. Superseded by DIN ISO 5800:1998-06.
DIN ISO 6:1996-02. Photography - Black-and-white pictorial still camera negative film/process systems - Determination of ISO speed (ISO 6:1993). Berlin: Deutsches Institut für Normung. This represents the German adoption of ISO 6:1993.
DIN ISO 2240:1998-06, DIN ISO 2240:2005-10. Photography - Colour reversal camera films - Determination of ISO speed (ISO 2240:2003). Berlin: Deutsches Institut für Normung. This represents the German adoption of ISO 2240:2003.
DIN ISO 5800:1998-06, DIN ISO 5800:2003-11. Photography - Colour negative films for still photography - Determination of ISO speed (ISO 5800:1987 + Corr. 1:2001). Berlin: Deutsches Institut für Normung. This represents the German adoption of ISO 5800:2001.
Leslie B. Stroebel, John Compton, Ira Current, Richard B. Zakia. Basic Photographic Materials and Processes, second edition. Boston: Focal Press, 2000. ISBN 0-240-80405-8.
External links
What is the meaning of ISO for digital cameras? Digital Photography FAQ
Signal-dependent noise modeling, estimation, and removal for digital imaging sensors
Category:Science of photography
Category:Physical quantities
Himachal Pradesh

Himachal Pradesh (literally "Snow-abode") is a state of India located in Northern India. It is bordered by Jammu and Kashmir on the north, Punjab and Chandigarh on the west, Haryana on the south-west, Uttarakhand on the south-east and by the Tibet Autonomous Region on the east. The name was coined from Sanskrit him 'snow' and anchal 'lap', by Acharya Diwakar Datt Sharma, one of the state's most eminent Sanskrit scholars.
Himachal Pradesh is famous for its natural beauty, hill stations, and temples. Himachal Pradesh has been ranked fifteenth in the list of the highest per capita incomes of Indian states and union territories for the year 2013-14. Many perennial rivers flow in the state, and numerous hydroelectric projects have been set up. Himachal produces surplus hydroelectricity and sells it to other states such as Delhi, Punjab, and Rajasthan. Hydroelectric power projects, tourism, and agriculture form important parts of the state's economy.
The state has several valleys, and more than 90% of the population lives in rural areas. Practically all houses have a toilet, and the state reports full sanitation coverage. The villages are well connected by roads and public health centres, and now also by high-speed broadband.
Shimla district has the highest share of urban population, at 25%. The state has incorporated environmental protection into its tourism development, including a government ban on polyethylene bags to reduce litter and on tobacco products to protect public health.
According to a 2005 Transparency International survey, Himachal Pradesh was ranked the second-least corrupt state in the country, after Kerala.
History
The history of the area that now constitutes Himachal Pradesh dates to the Indus valley civilisation that flourished between 2250 and 1750 BCE. Tribes such as the Koili, Hali, Dagi, Dhaugri, Dasa, Khasa, Kinnar, and Kirat inhabited the region from the prehistoric era.
During the Vedic period, several small republics known as Janapada existed which were later conquered by the Gupta Empire. After a brief period of supremacy by King Harshavardhana, the region was divided into several local powers headed by chieftains, including some Rajput principalities. These kingdoms enjoyed a large degree of independence and were invaded by the Delhi Sultanate a number of times. Mahmud Ghaznavi conquered Kangra at the beginning of the 11th century. Timur and Sikander Lodi also marched through the lower hills of the state and captured a number of forts and fought many battles. Several hill states acknowledged Mughal suzerainty and paid regular tribute to the Mughals.
thumb|left|Sansar Chand (c. 1765–1823)
The Gurkha people, a martial tribe, came to power in Nepal in the year 1768. They consolidated their military power and began to expand their territory. Gradually, the Gurkhas annexed Sirmour and Shimla. Under the leadership of Amar Singh Thapa, the Gurkhas laid siege to Kangra. They managed to defeat Sansar Chand Katoch, the ruler of Kangra, in 1806 with the help of many provincial chiefs. However, the Gurkhas could not capture Kangra fort, which came under Maharaja Ranjeet Singh in 1809. After the defeat, the Gurkhas began to expand towards the south of the state. However, Raja Ram Singh, Raja of Siba State, captured the fort of Siba from the remnants of Lahore Darbar in Samvat 1846, during the First Anglo-Sikh War.
They came into direct conflict with the British along the tarai belt after which the British expelled them from the provinces of the Satluj. The British gradually emerged as the paramount power in the region. In the revolt of 1857, or first Indian war of independence, arising from a number of grievances against the British, the people of the hill states were not as politically active as were those in other parts of the country. They and their rulers, with the exception of Bushahr, remained more or less inactive. Some, including the rulers of Chamba, Bilaspur, Bhagal and Dhami, rendered help to the British government during the revolt.
thumb|Rock Cut Temple, Masroor|218x218px
The British territories came under the British Crown after Queen Victoria's proclamation of 1858. The states of Chamba, Mandi and Bilaspur made good progress in many fields during the British rule. During World War I, virtually all rulers of the hill states remained loyal and contributed to the British war effort, both in the form of men and materials. Among these were the states of Kangra, Jaswan, Datarpur, Guler, Nurpur, Chamba, Suket, Mandi, and Bilaspur.
After independence, the Chief Commissioner's Province of H.P. was organized on 15 April 1948 as a result of integration of 28 petty princely states (including feudal princes and zaildars) in the promontories of the western Himalaya. These were known as the Simla Hills States and four Punjab southern hill states under the Himachal Pradesh (Administration) Order, 1948 under Sections 3 and 4 of the Extra-Provincial Jurisdiction Act, 1947 (later renamed as the Foreign Jurisdiction Act, 1947 vide A.O. of 1950). The State of Bilaspur was merged into Himachal Pradesh on 1 April 1954 by the Himachal Pradesh and Bilaspur (New State) Act, 1954.
Himachal became a part C state on 26 January 1950 with the implementation of the Constitution of India and the Lieutenant Governor was appointed. The Legislative Assembly was elected in 1952. Himachal Pradesh became a union territory on 1 November 1956. Some areas of Punjab State—namely Simla, Kangra, Kulu and Lahul and Spiti Districts, Nalagarh tehsil of Ambala District, Lohara, Amb and Una kanungo circles, some area of Santokhgarh kanungo circle and some other specified area of Una tehsil of Hoshiarpur District, besides some parts of Dhar Kalan Kanungo circle of Pathankot tehsil of Gurdaspur District—were merged with Himachal Pradesh on 1 November 1966 on enactment by Parliament of Punjab Reorganisation Act, 1966. On 18 December 1970, the State of Himachal Pradesh Act was passed by Parliament, and the new state came into being on 25 January 1971. Himachal was the 18th state of the Indian Union.
Geography and climate
thumb|218x218px|Amazing climate in Triund during September
Himachal is in the western Himalayas. Covering an area of 55,673 km², it is a mountainous state. Most of the state lies in the foothills of the Dhauladhar Range. At 6,816 m, Reo Purgyil is the highest mountain peak in the state of Himachal Pradesh.
The drainage system of Himachal is composed both of rivers and glaciers. Himalayan rivers criss-cross the entire mountain chain.
Himachal Pradesh provides water to both the Indus and Ganges basins. The drainage systems of the region are the Chandra Bhaga or the Chenab, the Ravi, the Beas, the Sutlej, and the Yamuna. These rivers are perennial and are fed by snow and rainfall. They are protected by an extensive cover of natural vegetation.
Due to extreme variation in elevation, great variation occurs in the climatic conditions of Himachal . The climate varies from hot and subhumid tropical in the southern tracts to, with more elevation, cold, alpine, and glacial in the northern and eastern mountain ranges. The state has areas like Dharamsala that receive very heavy rainfall, as well as those like Lahaul and Spiti that are cold and almost rainless. Broadly, Himachal experiences three seasons: summer, winter, and rainy season. Summer lasts from mid-April till the end of June and most parts become very hot (except in the alpine zone which experiences a mild summer) with the average temperature ranging from . Winter lasts from late November till mid March. Snowfall is common in alpine tracts (generally above i.e. in the higher and trans-Himalayan region).
Flora and fauna
upright|thumb|Asian paradise flycatcher in Kullu
thumb|Himalayan monal at Birds Park in Shimla
thumb|right|Black Bulbul (Hypsipetes leucocephalus). Solan (Himachal Pradesh). 28-July-2013
According to the 2003 Forest Survey of India report, legally defined forest areas constitute 66.52% of the area of Himachal Pradesh. Vegetation in the state is dictated by elevation and precipitation. The state is endowed with a high diversity of medicinal and aromatic plants.Kala, C.P. (2002) Medicinal Plants of Indian Trans-Himalaya: Focus on Tibetan Use of Medicinal Resources. Bishen Singh Mahendra Pal Singh, Dehradun, India. 200 pp. The Lahaul-Spiti region of the state, being a cold desert, supports unique plants of medicinal value including Ferula jaeschkeana, Hyoscyamus niger, Lancea tibetica, and Saussurea bracteata.Kala, C.P. (2000) Status and conservation of rare and endangered medicinal plants in the Indian trans-Himalaya. Biological Conservation, 93 (3): 371-379.Kala, C.P. (2005) Health traditions of Buddhist community and role of amchis in trans-Himalayan region of India. Current Science, 89 (8): 1331-1338.
Himachal is also said to be the fruit bowl of the country (http://hpmc.gov.in/himachal.htm), with orchards being widespread. Meadows and pastures also cling to steep slopes. After the winter season, the hillsides and orchards bloom with wild flowers, while gladioli, carnations, marigolds, roses, chrysanthemums, tulips and lilies are carefully cultivated. The state government is gearing up to make Himachal Pradesh the flower basket of the world.
Himachal Pradesh has around 463 bird, 77 mammal, 44 reptile and 80 fish species. Great Himalayan National Park, a UNESCO World Heritage Site, and Pin Valley National Park are the national parks located in the state. The state also has 30 wildlife sanctuaries and 3 conservation reserves.
Government
left|thumb|Town Hall in Shimla
The Legislative Assembly of Himachal Pradesh has no pre-Constitution history. The State itself is a post-Independence creation. It came into being as a centrally administered territory on 15 April 1948 from the integration of thirty erstwhile princely states.
Himachal Pradesh is governed through a parliamentary system of representative democracy, a feature the state shares with other Indian states. Universal suffrage is granted to residents. The legislature consists of elected members and special office bearers such as the Speaker and the Deputy Speaker who are elected by the members. Assembly meetings are presided over by the Speaker or the Deputy Speaker in the Speaker's absence. The judiciary is composed of the Himachal Pradesh High Court and a system of lower courts. Executive authority is vested in the Council of Ministers headed by the Chief Minister, although the titular head of government is the Governor. The Governor is the head of state appointed by the President of India. The leader of the party or coalition with a majority in the Legislative Assembly is appointed as the Chief Minister by the Governor, and the Council of Ministers are appointed by the Governor on the advice of the Chief Minister. The Council of Ministers reports to the Legislative Assembly. The Assembly is unicameral with 68 Members of the Legislative Assembly (MLA). Terms of office run for 5 years, unless the Assembly is dissolved prior to the completion of the term. Auxiliary authorities known as panchayats, for which local body elections are regularly held, govern local affairs.
In the assembly elections held in November 2012, the Congress secured an absolute majority. The Congress won 36 of the 68 seats while the BJP won only 26 of the 68 seats. Virbhadra Singh was sworn-in as Himachal Pradesh's Chief Minister for a record sixth term in Shimla on 25 December 2012. Virbhadra Singh who has held the top office in Himachal five times in the past, was administered the oath of office and secrecy by Governor Urmila Singh at an open ceremony at the historic Ridge Maidan in Shimla.
Administrative Divisions
The state of Himachal Pradesh is divided into 12 districts which are grouped into three divisions, Shimla, Kangra and Mandi. The districts are further divided into 62 subdivisions, 78 blocks and 149 Tehsils.
Kangra division: Chamba, Kangra, Una
Mandi division: Bilaspur, Hamirpur, Kullu, Lahaul and Spiti, Mandi
Shimla division: Kinnaur, Shimla, Sirmaur, Solan
Administrative structure
Districts: 12
Divisions: 3
Sub-divisions: 62
Blocks: 78
Tehsils: 149
Urban local bodies: 49
Towns: 59
Gram panchayats: 3,226
Villages: 20,690
Police stations: 125
Lok Sabha seats: 4
Rajya Sabha seats: 3
Assembly constituencies: 68
Economy
thumb|left|200px|Shimla, the capital city of Himachal Pradesh. Shimla Montage - Clockwise from top: Skyline at Shimla Southern Side, Rashtrapati Niwas, Town hall, Night view of Shimla and Christ Church.
Gross State Domestic Product at current prices (figures in crores of Indian rupees)
1980: 794
1985: 1,372
1990: 2,815
1995: 6,698
2000: 13,590
2005: 23,024
2007: 25,435
2010: 57,452
2013: 82,585
2016: 110,511
The era of planning in Himachal Pradesh started in 1948 along with the rest of India. The first five-year plan allocated 52.7 million to Himachal. More than 50% of this expenditure was incurred on road construction since it was felt that without proper transport facilities, the process of planning and development could not be carried to the people, who mostly lived an isolated existence in faraway areas. Himachal now ranks fourth in per capita income among the states of the Indian Union.
Agriculture contributes over 45% to the net state domestic product. It is the main source of income and employment in Himachal. Over 93% of the population in Himachal depends directly upon agriculture, which provides direct employment to 71% of its people. The main cereals grown are wheat, maize, rice and barley.Economy of Himachal by Agriculture @ webindia123.com Suni System (P) Ltd. Retrieved on- 2015-07-28 Apple is the principal cash crop of the state, grown mainly in the districts of Shimla, Kinnaur, Kullu, Mandi, Chamba and some parts of Sirmaur and Lahaul-Spiti, with an average annual production of 5 lakh tonnes and per hectare production of 8 to 10 tonnes. Apple cultivation constitutes 49 per cent of the total area under fruit crops and 85% of total fruit production in the state, with an estimated economy of 3500 crore. Apples from Himachal are exported to other Indian states and even other countries. In 2011-12, the total area under apple cultivation was 1.04 lakh hectares, up from 90,347 hectares in 2000-01.
Hydropower is also one of the major sources of income generation for the state. The identified Hydroelectric Potential for the state is 27,436 MW in five river basins and annual hydroelectricity production is 8,418 MW.
As per the current prices, the total GDP was estimated at 254 billion as against 230 billion in the year 2004–05, showing an increase of 10.5%.
Agriculture
thumb|Himalayas from Kullu Valley
Land husbandry initiatives such as the Mid-Himalayan Watershed Development Project, which includes the Himachal Pradesh Reforestation Project (HPRP), the world's largest clean development mechanism (CDM) undertaking, have improved agricultural yields and productivity, and raised rural household incomes.
Heritage
Himachal has a rich heritage of handicrafts. These include woolen and pashmina shawls, carpets, silver and metal ware, embroidered chappals, grass shoes, Kangra and Gompa style paintings, woodwork, horse-hair bangles, wooden and metal utensils and various other household items. These aesthetic and tasteful handicrafts declined under competition from machine-made goods and a lack of marketing facilities, but demand for handicrafts has now increased within and outside the country.
Tourism
thumb|Kalpa in June 2015.
Tourism in Himachal Pradesh is a major contributor to the state's economy and growth. The mountainous state with its diverse and beautiful Himalayan landscapes attracts tourists from all over the world. Hill stations like Shimla, Manali, Dalhousie, Chamba, Dharamsala and Kullu are popular destinations for both domestic and foreign tourists. The state has many important pilgrimage centres with prominent Hindu temples like Naina Devi Temple, Vajreshwari Devi Temple, Jwala Ji Temple, Chintpurni, Chamunda Devi Temple, Baijnath Temple, Bhimakali Temple, Bijli Mahadev, Renuka Lake and Jakhoo Temple. Like Uttarakhand, the state is also referred to as "Dev Bhoomi" (literally meaning Abode of Gods) due to its mention in ancient holy texts and occurrence of large number of historical temples in the state.
thumb|Triund is a famous campsite for travellers and trekkers on the way to Mount Dhauladhar.
The state is also known for its adventure tourism activities like ice skating in Shimla, paragliding in Bir Billing and Solang valley, rafting in Kullu, skiing in Manali, boating in Bilaspur, and trekking, horse riding and fishing in different parts of the state. Spiti Valley in Lahaul and Spiti district, situated at an altitude of over 3,000 metres with its picturesque landscapes, is an important destination for adventure seekers. The region also has some of the oldest Buddhist monasteries in Asia.
The state is also a famous destination for film shooting. Movies like Roja, Henna, Jab We Met, Veer-Zaara, Yeh Jawaani Hai Deewani and Highway have been filmed in Himachal Pradesh.
Himachal hosted the first Paragliding World Cup in India from 24 to 31 October 2015. The venue was Bir Billing, 70 km from the famous tourist town of McLeod Ganj, in the heart of Himachal in Kangra district. Bir Billing is the centre for aero sports in Himachal and is considered among the best sites for paragliding. Buddhist monasteries, treks to tribal villages and mountain biking are other activities available here.
Transportation
thumb|Aircraft at Shimla airport
thumb|Kalka-Shimla Railway
Air
Himachal has three domestic airports, in Kangra, Kullu and Shimla districts. The air routes connect the state with Delhi and Chandigarh.
Bhuntar Airport is in Kullu district, around from the district headquarters.
Gaggal Airport is in Kangra district, around 10 kilometres from Kangra.
Shimla Airport is around west of the city.
Railway
Himachal is famous for its narrow-gauge railways. One is the Kalka-Shimla Railway, a UNESCO World Heritage Site, and another is the Pathankot-Jogindernagar Railway. The total length of these two tracks is . The Kalka-Shimla Railway passes through many tunnels, while the Pathankot–Jogindernagar meanders through a maze of hills and valleys. It also has broad-gauge railway track, which connects Amb (Una district) to Delhi. A survey is being conducted to extend this railway line to Kangra (via Nadaun). Other proposed railways in the state are Baddi-Bilaspur, Dharamsala-Palampur and Bilaspur-Manali-Leh.
Road
Roads are the major mode of transport in the hilly terrains. The state has road network of , including eight National Highways (NH) that constitute and 19 State Highways with a total length of . Some roads get closed during winter and monsoon seasons due to snow and landslides. Hamirpur has the highest road density in the state.
Demographics
Population
thumb|Traditional home, Manali
Himachal Pradesh has a total population of 6,864,602 including 3,481,873 males and 3,382,729 females as per the final results of the Census of India 2011. This is only 0.57 per cent of India's total population, recording a growth of 12.81 per cent. The total fertility rate (TFR) per woman is 1.8, one of the lowest in India.
In the census, the state is placed 21st on the population chart, followed by Tripura at 22nd place. Kangra district was top ranked with a population strength of 1,507,223 (21.98%), Mandi district 999,518 (14.58%), Shimla district 813,384 (11.86%), Solan district 576,670 (8.41%), Sirmaur district 530,164 (7.73%), Una district 521,057 (7.60%), Chamba district 518,844 (7.57%), Hamirpur district 454,293 (6.63%), Kullu district 437,474 (6.38%), Bilaspur district 382,056 (5.57%), Kinnaur district 84,298 (1.23%) and Lahaul Spiti 31,528 (0.46%).
The life expectancy at birth in Himachal Pradesh is 62.8 years (higher than the national average of 57.7 years) for 1986–1990. The infant mortality rate stood at 40 in 2010, and the crude birth rate has declined from 37.3 in 1971 to 16.9 in 2010, below the national average of 26.5 in 1998. The crude death rate was 6.9 in 2010. Himachal Pradesh's literacy rate almost doubled between 1981 and 2011 (see table to right).
Languages
Hindi is the sole official language of Himachal Pradesh and is spoken by the majority of the population (89.01%). English is given the status of an additional official language.
Religion
Hinduism is the main religion in Himachal Pradesh, which ranks first in India in terms of the proportion of Hindus present within it. More than 95% of the total population belongs to the Hindu faith, the distribution of which is evenly spread throughout the state.
Himachal Pradesh thus has one of the highest proportions of Hindu population in India (95.17%).
Other religions that form a small percentage are Islam, Buddhism and Sikhism. Muslims are mainly concentrated in Sirmaur, Chamba, Kangra and Una districts where they form 1.31-6.27% of the population. The Lahaulis of Lahaul and Spiti region are mainly Buddhists. Sikhs mostly live in towns and cities and constitute 1.16% of the state population. The Buddhists, who constitute 1.15%, are mainly natives and tribals from Lahaul and Spiti, where they form a majority of 62%, and Kinnaur, where they form 21.5%.
Culture
thumb|Himalayan landscape, Nako Lake and village shown
thumb|Nako Village
Himachal Pradesh was one of the few states that had remained largely untouched by external customs, largely due to its difficult terrain. With technological advancement, the state has changed very rapidly. Himachal Pradesh is a multireligious, multicultural and multilingual state, like other Indian states. Some of the most commonly spoken languages are Hindi, Punjabi, Pahari, Dogri, Mandeali, Kangri and Kinnauri. The Hindu communities residing in Himachal include the Brahmins, Rajputs, Kannets, Rathis and Kolis. There are also tribal populations in the state, mainly comprising Gaddis, Kinnars, Gujjars, Pangawals and Lahaulis.
Himachal is well known for its handicrafts. The carpets, leather work, shawls, Kangra paintings, Chamba rumals, metalware, woodwork and paintings are worth appreciating. Pashmina shawls are among the products in high demand, not only in Himachal but all over the country. Himachali caps are also a famous local craft.
Local music and dance reflects the cultural identity of the state. Through their dance and music, they entreat their gods during local festivals and other special occasions.
Apart from the fairs and festivals that are celebrated all over India, there are number of other fairs and festivals, including the temple fairs in nearly every region that are of great significance to Himachal Pradesh.
The day-to-day food of Himachalis is very similar to that of the rest of northern India, including lentils, broth, rice, vegetables and bread. Compared to other states in north India, non-vegetarian cuisine is more widely preferred. Some of the specialities of Himachal include Mhanee, Madhra, Pateer, Chouck, Bhagjery and til chutney.
Shimla, the state capital, is home to Asia's only natural ice skating rink.
Notable people
thumb| Kalachakra Temple in the main street of Mcleod ganj
Prominent people associated with Himachal include:
Shanta Kumar (member of Lok Sabha)
Jagat Prakash Nadda (member of Lok Sabha and Health Minister of India)
Anurag Thakur (member of Lok Sabha and President of Board of Control for Cricket in India)
Swatantra Kumar, head of the National Green Tribunal (NGT) and former Justice of the Supreme Court of India
Sobha Singh (Painter)
The Great Khali, professional wrestler
Dev Anand, an Indian actor studied here.
Anupam Kher, an Indian actor
Amrish Puri (who studied here),
Prem Chopra (brought up here),
Subhash Chander, News Personality
Mohit Chauhan, an Indian singer
Anand Sharma (member of Rajya Sabha and former Union Cabinet Minister for Commerce and Industry of the Government of India),
Mehr Chand Mahajan, third Chief Justice of the Supreme Court of India and former Chief Minister of Kashmir in 1947
Shahid Javed Burki, economist and former vice-president of the World Bank
Pritam Singh, brand ambassador of the state
Preity Zinta, Bollywood actress
Kangana Ranaut, Bollywood actress,
Yami Gautam, Bollywood actress,
Siddharth Chauhan, Independent Filmmaker
Namrata Singh Gujral, an American actress
Satyananda Stokes, who introduced apples to the region
Idries Shah, writer, Sufi teacher and sage
Allan Octavian Hume, ornithologist, who had his home here
Muhammad Zia-ul-Haq, former general of Pakistan, who studied here
Hamid Karzai, president of Afghanistan, who studied here
Vijay Kumar, who won a silver medal in 25 m shooting at the 2012 Summer Olympics
Suman Rawat Mehta, Arjuna awardee, who won a bronze medal in the 3000 m race at the 1986 Asian Games
Major Som Nath Sharma, PVC (1923–1947) was the first recipient of the Param Vir Chakra,
Captain Vikram Batra PVC (9 September 1974 – 7 July 1999) posthumously awarded with the Param Vir Chakra,
Captain Saurabh Kalia (1976–1999) posthumously awarded with the Maha Vir Chakra
Naib Subedar Sanjay Kumar, PVC (born 3 March 1976), an Indian Army soldier, Junior Commissioned Officer and recipient of the Param Vir Chakra, India's highest military award.
Shyam Saran Negi, named as the first voter of independent India.
Ram Kumar, abstract artist.
Natural Resources
Himachal has been blessed with an abundance of natural resources such as forests, rivers and lakes. Its hydroelectric power potential is yet to be fully utilized.
The Shanan power house at Joginder Nagar, one of the oldest in the region, was commissioned in 1928; it once supplied electricity to the city of Lahore and later helped in building the legendary Bhakra Nangal project.
Himachal's forests are known for coniferous trees such as pine, kail, deodar and baan. Rich flora and fauna add to the beauty of this land. Herbs and medicinal plants amply supply many local and national pharmacies. The less famous Kangra tea is mostly organic and regarded as a health booster, and Himachal honey is also in great demand.
Education
thumb|Indira Gandhi Medical College and Hospital at Shimla
thumb|Indian Institute of Advanced Study at Shimla
Hamirpur District is among the top districts in the country for literacy. Education rates among women are quite encouraging in the state. The standard of education in the state has reached a considerably high level as compared to other states in India with several reputed educational institutes for higher studies.
The Indian Institute of Technology Mandi, Himachal Pradesh University Shimla, Institute of Himalayan Bioresource Technology (IHBT, CSIR Lab), Palampur, the National Institute of Technology, Hamirpur,
Indian Institute of Information Technology Una, the Central University Dharamshala, AP Goyal (Alakh Prakash Goyal) Shimla University, the Bahra University (Waknaghat, Solan), the Baddi University of Emerging Sciences and Technologies Baddi, IEC University, Shoolini University of Biotechnology and Management Sciences, Solan, Manav Bharti University Solan, the Jaypee University of Information Technology Waknaghat, Eternal University, Sirmaur & Chitkara University Solan are some of the pioneer universities in the state. CSK Himachal Pradesh Krishi Vishwavidyalya Palampur is one of the most renowned hill agriculture institutes in the world. Dr. Yashwant Singh Parmar University of Horticulture and Forestry has earned a unique distinction in India for imparting teaching, research and extension education in horticulture, forestry and allied disciplines. Further, the state-run Jawaharlal Nehru Government Engineering College was started in 2006 at Sundernagar.
There are over 10,000 primary schools, 1,000 secondary schools and more than 1,300 high schools in Himachal. The state government has decided to start three major nursing colleges to develop the health system in the state. In meeting the constitutional obligation to make primary education compulsory, Himachal has become the first state in India to make elementary education accessible to every child.
The state has the Indira Gandhi Medical College and Hospital at Shimla and the Homoeopathic Medical College & Hospital at Kumarhatti. Besides these, Himachal Dental College is the state's first recognised dental institute.
State profile
thumb|Kalpa, a typical town in Himachal Pradesh
thumb|Sunrise in Himachal Pradesh, at Kinnaur Kailash
thumb|Sunshine on a snowy mountain at Himachal Pradesh
thumb|Snowy mountain range appears to be in sky
Source: Department of Information and Public Relations.
Area: 55,673 km²
Total population: 6,864,602
Males: 3,481,873
Females: 3,382,729
Population density: 123 per km²
Sex ratio: 972 (census 2011)
Rural population: 6,176,050
Urban population: 688,552
Scheduled Caste population: 1,729,252
Scheduled Tribe population: 392,126
Literacy rate: 83.78%
Male literacy: 90.83%
Female literacy: 76.60%
Districts: 12
Sub-divisions: 62
Tehsils: 149
Sub-tehsils: 35
Developmental blocks: 78
Towns: 59
Panchayats: 3,226
Panchayat samitis: 77
Zila parishads: 12
Urban local bodies: 49
Nagar nigams: 2
Nagar parishads: 25
Nagar panchayats: 23
Census villages: 20,690
Inhabited villages: 17,495
Health institutions: 3,866
Educational institutions: 17,000
Motorable roads: 33,722 km
National highways: 8
Identified hydroelectric potential: 23,000.43 MW in five river basins (Yamuna, Satluj, Beas, Ravi, Chenab and Himurja)
Potential harnessed: 10,264 MW
Food grain production: 1,579,000 tonnes
Vegetable production: 900,000 tonnes
Fruit production: 1,027,000 tonnes
Per capita income: 130,067 (2015–16)
Social security pensions: 237,250 persons, annual expenditure over 600 million
Investment in industrial areas: 273.80 billion, employment opportunities over 337,391
Employment generated in government sector: 80,000
Census 2011-
Largest District (km²)
(1) Lahul and Spiti 13841
(2) Chamba 6522
(3) Kinnaur 6401
(4) Kangra 5739
(5) Kullu 5503
Percentage of Child Population
(1) Chamba 13.55%
(2) Sirmaur 13.14%
(3) Solan 11.74%
(4) Kullu 11.52%
(5) Una 11.36%
High Density
(1) Hamirpur 407
(2) Una 338
(3) Bilaspur 327
(4) Solan 300
(5) Kangra 263
Top Population Growth
(1) Una 16.26%
(2) Solan 15.93%
(3) Sirmaur 15.54%
(4) Kullu 14.76%
(5) Kangra 12.77%
High Literacy
(1) Hamirpur 89.01%
(2) Una 87.23%
(3) Kangra 86.49%
(4) Bilaspur 85.87%
(5) Solan 85.02%
High Sex Ratio
(1) Hamirpur 2042
(2) Kangra 1012
(3) Mandi 1007
(4) Chamba 986
(5) Bilaspur 981
See also
Outline of Himachal Pradesh
Bittu Bhaizee
Geography of Himachal Pradesh
List of districts of Himachal Pradesh
Tourism in Himachal Pradesh
Outline of India
Bibliography of India
Index of India-related articles
Notes
References
Statistics and Data, Planning Department, Government of Himachal Pradesh
External links
Government
The Official Site of Himachal Pradesh
The Official Tourism Site of Himachal Pradesh, India
General information
Himachal Pradesh Encyclopædia Britannica entry
Category:States and territories established in 1971
Category:1971 establishments in India
Category:Punjabi-speaking countries and territories
Phonology

Phonology is a branch of linguistics concerned with the systematic organization of sounds in languages. It has traditionally focused largely on the study of the systems of phonemes in particular languages (and therefore used to be also called phonemics, or phonematics), but it may also cover any linguistic analysis either at a level beneath the word (including syllable, onset and rime, articulatory gestures, articulatory features, mora, etc.) or at all levels of language where sound is considered to be structured for conveying linguistic meaning.
Phonology also includes the study of equivalent organizational systems in sign languages.
Terminology
The word phonology (as in the phonology of English) can also refer to the phonological system (sound system) of a given language. This is one of the fundamental systems which a language is considered to comprise, like its syntax and its vocabulary.
Phonology is often distinguished from phonetics. While phonetics concerns the physical production, acoustic transmission and perception of the sounds of speech, phonology describes the way sounds function within a given language or across languages to encode meaning. For many linguists, phonetics belongs to descriptive linguistics, and phonology to theoretical linguistics, although establishing the phonological system of a language is necessarily an application of theoretical principles to analysis of phonetic evidence. Note that this distinction was not always made, particularly before the development of the modern concept of the phoneme in the mid 20th century. Some subfields of modern phonology have a crossover with phonetics in descriptive disciplines such as psycholinguistics and speech perception, resulting in specific areas like articulatory phonology or laboratory phonology.
Derivation and definitions
The word phonology comes from Ancient Greek φωνή, phōnḗ, "voice, sound," and the suffix -logy (which is from Greek λόγος, lógos, "word, speech, subject of discussion"). Definitions of the term vary. Nikolai Trubetzkoy in Grundzüge der Phonologie (1939) defines phonology as "the study of sound pertaining to the system of language," as opposed to phonetics, which is "the study of sound pertaining to the act of speech" (the distinction between language and speech being basically Saussure's distinction between langue and parole).Trubetzkoy N., Grundzüge der Phonologie (published 1939), translated by C. Baltaxe as Principles of Phonology, University of California Press, 1969 More recently, Lass (1998) writes that phonology refers broadly to the subdiscipline of linguistics concerned with the sounds of language, while in more narrow terms, "phonology proper is concerned with the function, behavior and organization of sounds as linguistic items." According to Clark et al. (2007), it means the systematic use of sound to encode meaning in any spoken human language, or the field of linguistics studying this use.
History
The history of phonology may be traced back to the Ashtadhyayi, the Sanskrit grammar composed by Pāṇini in the 4th century BC. In particular the Shiva Sutras, an auxiliary text to the Ashtadhyayi, introduces what can be considered a list of the phonemes of the Sanskrit language, with a notational system for them that is used throughout the main text, which deals with matters of morphology, syntax and semantics.
The Polish scholar Jan Baudouin de Courtenay (together with his students, Mikołaj Kruszewski and Lev Shcherba) shaped the modern usage of the term phoneme in 1876–7,Baudouin de Courtenay (1876–7), A detailed programme of lectures for the academic year 1876-77, p. 115. which had been coined in 1873 by the French linguist A. Dufriche-DesgenettesAnon. (1873). "Sur la nature des consonnes nasales". [Summary (probably written by Louis Havet) of a paper read at the 24th of May meeting of the Société de Linguistique de Paris.] Revue critique d'histoire et de littérature 13, No. 23, p. 368. who proposed it as a one-word equivalent for the German Sprachlaut.Roman Jakobson, Selected Writings: Word and Language, Volume 2, Walter de Gruyter, 1971, p. 396. Baudouin de Courtenay's work, though often unacknowledged, is considered to be the starting point of modern phonology. He also worked on the theory of phonetic alternations (what is now called allophony and morphophonology), and may have had an influence on the work of Saussure according to E. F. K. Koerner.E. F. K. Koerner, Ferdinand de Saussure: Origin and Development of His Linguistic Thought in Western Studies of Language. A contribution to the history and theory of linguistics, Braunschweig: Friedrich Vieweg & Sohn [Oxford & Elmsford, N.Y.: Pergamon Press], 1973.
thumb|right|160px|Nikolai Trubetzkoy, 1920s
An influential school of phonology in the interwar period was the Prague school. One of its leading members was Prince Nikolai Trubetzkoy, whose Grundzüge der Phonologie (Principles of Phonology), published posthumously in 1939, is among the most important works in the field from this period. Directly influenced by Baudouin de Courtenay, Trubetzkoy is considered the founder of morphophonology, although this concept had also been recognized by de Courtenay. Trubetzkoy also developed the concept of the archiphoneme. Another important figure in the Prague school was Roman Jakobson, who was one of the most prominent linguists of the 20th century.
In 1968 Noam Chomsky and Morris Halle published The Sound Pattern of English (SPE), the basis for generative phonology. In this view, phonological representations are sequences of segments made up of distinctive features. These features were an expansion of earlier work by Roman Jakobson, Gunnar Fant, and Morris Halle. The features describe aspects of articulation and perception, are from a universally fixed set, and have the binary values + or −. There are at least two levels of representation: underlying representation and surface phonetic representation. Ordered phonological rules govern how underlying representation is transformed into the actual pronunciation (the so-called surface form). An important consequence of the influence SPE had on phonological theory was the downplaying of the syllable and the emphasis on segments. Furthermore, the generativists folded morphophonology into phonology, which both solved and created problems.
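The derivational mechanism just described, an underlying representation rewritten step by step by ordered rules into a surface pronunciation, can be illustrated with a minimal sketch. The sketch below is only a toy under invented assumptions: the two rules, their regular-expression notation, and the forms /bund/ and /butem/ are made up for the example, and genuine SPE rules are stated over bundles of binary distinctive features rather than letter strings.

import re

# A toy "grammar": ordered rewrite rules applied to an underlying form.
# Each rule is (name, pattern, replacement); patterns are plain regular
# expressions over symbol strings, a simplification of feature-based rules.
ORDERED_RULES = [
    ("final devoicing", r"d$", "t"),                             # d -> t word-finally
    ("intervocalic voicing", r"(?<=[aeiou])t(?=[aeiou])", "d"),  # t -> d between vowels
]

def derive(underlying: str) -> str:
    """Apply the ordered rules, in order, to an underlying representation."""
    form = underlying
    for name, pattern, replacement in ORDERED_RULES:
        new_form = re.sub(pattern, replacement, form)
        if new_form != form:
            print(f"/{form}/ -> [{new_form}]  ({name})")
        form = new_form
    return form

if __name__ == "__main__":
    derive("bund")   # surfaces as [bunt]: only final devoicing applies
    derive("butem")  # surfaces as [budem]: only intervocalic voicing applies

Because the rules apply in a fixed order, the output of one rule can create or remove the context required by a later rule; such interactions are known as feeding and bleeding respectively.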
Natural phonology is a theory based on the publications of its proponent David Stampe in 1969 and (more explicitly) in 1979. In this view, phonology is based on a set of universal phonological processes that interact with one another; which ones are active and which are suppressed is language-specific. Rather than acting on segments, phonological processes act on distinctive features within prosodic groups. Prosodic groups can be as small as a part of a syllable or as large as an entire utterance. Phonological processes are unordered with respect to each other and apply simultaneously (though the output of one process may be the input to another). The second most prominent natural phonologist is Patricia Donegan (Stampe's wife); there are many natural phonologists in Europe, and a few in the U.S., such as Geoffrey Nathan. The principles of natural phonology were extended to morphology by Wolfgang U. Dressler, who founded natural morphology.
In 1976 John Goldsmith introduced autosegmental phonology. Phonological phenomena are no longer seen as operating on one linear sequence of segments, called phonemes or feature combinations, but rather as involving some parallel sequences of features which reside on multiple tiers. Autosegmental phonology later evolved into feature geometry, which became the standard theory of representation for theories of the organization of phonology as different as lexical phonology and optimality theory.
Government phonology, which originated in the early 1980s as an attempt to unify theoretical notions of syntactic and phonological structures, is based on the notion that all languages necessarily follow a small set of principles and vary according to their selection of certain binary parameters. That is, all languages' phonological structures are essentially the same, but there is restricted variation that accounts for differences in surface realizations. Principles are held to be inviolable, though parameters may sometimes come into conflict. Prominent figures in this field include Jonathan Kaye, Jean Lowenstamm, Jean-Roger Vergnaud, Monik Charette, and John Harris.
In a course at the LSA summer institute in 1991, Alan Prince and Paul Smolensky developed optimality theory—an overall architecture for phonology according to which languages choose a pronunciation of a word that best satisfies a list of constraints ordered by importance; a lower-ranked constraint can be violated when the violation is necessary in order to obey a higher-ranked constraint. The approach was soon extended to morphology by John McCarthy and Alan Prince, and has become a dominant trend in phonology. The appeal to phonetic grounding of constraints and representational elements (e.g. features) in various approaches has been criticized by proponents of 'substance-free phonology', especially Mark Hale and Charles Reiss.
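The evaluation step described above can be sketched in a few lines of code. This is a minimal sketch under invented assumptions: the two constraints (a NoCoda markedness constraint and a Dep faithfulness constraint penalising an epenthetic schwa), the syllabified candidate forms, and the hypothetical input /pat/ are made up for illustration, whereas a real optimality-theoretic analysis evaluates candidates produced by a generator (GEN) against a universal constraint set (CON) rather than a hand-written list.

from typing import Callable, List, Tuple

VOWELS = "aeiouə"

# Constraints ranked from highest to lowest priority; each maps a candidate
# surface form (syllables separated by ".") to a number of violations.
RANKED_CONSTRAINTS: List[Tuple[str, Callable[[str], int]]] = [
    ("NoCoda", lambda form: sum(1 for syl in form.split(".") if syl and syl[-1] not in VOWELS)),
    ("Dep (no epenthesis)", lambda form: form.count("ə")),
]

def evaluate(candidates: List[str]) -> str:
    """Return the candidate with the best violation profile, comparing the
    constraints one by one in ranking order (lexicographic comparison)."""
    def profile(form: str) -> Tuple[int, ...]:
        return tuple(count(form) for _, count in RANKED_CONSTRAINTS)
    return min(candidates, key=profile)

if __name__ == "__main__":
    # The faithful candidate [pat] violates the higher-ranked NoCoda once;
    # the epenthesized candidate [pa.tə] instead violates the lower-ranked
    # Dep, so it wins under this particular ranking.
    print(evaluate(["pat", "pa.tə"]))   # -> pa.tə

Reversing the ranking, with Dep above NoCoda, would make the faithful candidate win instead; differences in constraint ranking are how the theory models variation between languages that share the same constraint set.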
Broadly speaking, government phonology (or its descendant, strict-CV phonology) has a greater following in the United Kingdom, whereas optimality theory is predominant in the United States.
An integrated approach to phonological theory that combines synchronic and diachronic accounts of sound patterns was initiated with Evolutionary Phonology in recent years.Blevins, Juliette. 2004. Evolutionary phonology: The emergence of sound patterns. Cambridge University Press.
Analysis of phonemes
An important part of traditional, pre-generative schools of phonology is studying which sounds can be grouped into distinctive units within a language; these units are known as phonemes. For example, in English, the "p" sound in pot is aspirated (pronounced [pʰ]) while that in spot is not aspirated (pronounced [p]). However, English speakers intuitively treat both sounds as variations (allophones) of the same phonological category, that is of the phoneme /p/. (Traditionally, it would be argued that if an aspirated [pʰ] were interchanged with the unaspirated [p] in spot, native speakers of English would still hear the same words; that is, the two sounds are perceived as "the same" /p/.) In some other languages, however, these two sounds are perceived as different, and they are consequently assigned to different phonemes. For example, in Thai, Hindi, and Quechua, there are minimal pairs of words for which aspiration is the only contrasting feature (two words can have different meanings but with the only difference in pronunciation being that one has an aspirated sound where the other has an unaspirated one).
thumb|256px|The vowels of modern (Standard) Arabic and (Israeli) Hebrew from the phonemic point of view. Note the intersection of the two circles—the distinction between short a, i and u is made by both speakers, but Arabic lacks the mid articulation of short vowels, while Hebrew lacks the distinction of vowel length.
thumb|256px|The vowels of modern (Standard) Arabic and (Israeli) Hebrew from the phonetic point of view. Note that the two circles are totally separate—none of the vowel-sounds made by speakers of one language is made by speakers of the other.
Part of the phonological study of a language therefore involves looking at data (phonetic transcriptions of the speech of native speakers) and trying to deduce what the underlying phonemes are and what the sound inventory of the language is. The presence or absence of minimal pairs, as mentioned above, is a frequently used criterion for deciding whether two sounds should be assigned to the same phoneme. However, other considerations often need to be taken into account as well.
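As a rough illustration of how the minimal-pair criterion can be applied to transcription data, the sketch below compares transcriptions pairwise and reports those that differ in exactly one segment. It is a toy under obvious simplifying assumptions: the tiny lexicon, the segmentation into symbols, and the function name minimal_pairs are invented for the example, and real phonemic analysis also weighs complementary distribution, phonetic similarity and morphological alternations, as noted above.

from itertools import combinations
from typing import Dict, List, Tuple

def minimal_pairs(lexicon: Dict[str, List[str]]) -> List[Tuple[str, str, Tuple[str, str]]]:
    """Return (word1, word2, (segment1, segment2)) for every pair of
    equal-length transcriptions that differ in exactly one segment."""
    pairs = []
    for (w1, t1), (w2, t2) in combinations(lexicon.items(), 2):
        if len(t1) != len(t2):
            continue
        diffs = [(a, b) for a, b in zip(t1, t2) if a != b]
        if len(diffs) == 1:
            pairs.append((w1, w2, diffs[0]))
    return pairs

if __name__ == "__main__":
    # Invented broad transcriptions of four English words.
    lexicon = {
        "pin":  ["p", "ɪ", "n"],
        "bin":  ["b", "ɪ", "n"],
        "pit":  ["p", "ɪ", "t"],
        "spin": ["s", "p", "ɪ", "n"],
    }
    for w1, w2, (a, b) in minimal_pairs(lexicon):
        print(f"{w1} / {w2}: contrast between {a} and {b}")

On this tiny data set, pin/bin and pin/pit support treating /p/–/b/ and /n/–/t/ as separate phonemes, while the absence of any pair distinguished only by aspiration is consistent with aspirated and unaspirated [p] being allophones in English.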
The particular contrasts which are phonemic in a language can change over time. At one time, [f] and [v], two sounds that have the same place and manner of articulation and differ in voicing only, were allophones of the same phoneme in English, but later came to belong to separate phonemes. This is one of the main factors of historical change of languages as described in historical linguistics.
The findings and insights of speech perception and articulation research complicate the traditional and somewhat intuitive idea of interchangeable allophones being perceived as the same phoneme. First, interchanged allophones of the same phoneme can result in unrecognizable words. Second, actual speech, even at a word level, is highly co-articulated, so it is problematic to expect to be able to splice words into simple segments without affecting speech perception.
Different linguists therefore take different approaches to the problem of assigning sounds to phonemes. For example, they differ in the extent to which they require allophones to be phonetically similar. There are also differing ideas as to whether this grouping of sounds is purely a tool for linguistic analysis, or reflects an actual process in the way the human brain processes a language.
Since the early 1960s, theoretical linguists have moved away from the traditional concept of a phoneme, preferring to consider basic units at a more abstract level, as a component of morphemes; these units can be called morphophonemes, and analysis using this approach is called morphophonology.
Other topics in phonology
In addition to the minimal units that can serve the purpose of differentiating meaning (the phonemes), phonology studies how sounds alternate, i.e. replace one another in different forms of the same morpheme (allomorphs), as well as, for example, syllable structure, stress, feature geometry, accent, and intonation.
Phonology also includes topics such as phonotactics (the phonological constraints on what sounds can appear in what positions in a given language) and phonological alternation, i.e. how the pronunciation of a sound changes through the application of phonological rules, sometimes in a given order which can be feeding or bleeding (Goldsmith 1995: 1), as well as prosody, the study of suprasegmentals, and topics such as stress and intonation.
The principles of phonological analysis can be applied independently of modality because they are designed to serve as general analytical tools, not language-specific ones. The same principles have been applied to the analysis of sign languages (see Phonemes in sign languages), even though the sub-lexical units are not instantiated as speech sounds.
See also
Absolute neutralisation
Cherology
English phonology
List of phonologists
Morphophonology
Phoneme
Phonological development
Phonological hierarchy
Prosody (linguistics)
Phonotactics
Second language phonology
Phonological rule
Notes
Bibliography
Anderson, John M.; and Ewen, Colin J. (1987). Principles of dependency phonology. Cambridge: Cambridge University Press.
Bloomfield, Leonard. (1933). Language. New York: H. Holt and Company. (Revised version of Bloomfield's 1914 An introduction to the study of language).
Brentari, Diane (1998). A prosodic model of sign language phonology. Cambridge, MA: MIT Press.
Chomsky, Noam. (1964). Current issues in linguistic theory. In J. A. Fodor and J. J. Katz (Eds.), The structure of language: Readings in the philosophy language (pp. 91–112). Englewood Cliffs, NJ: Prentice-Hall.
Chomsky, Noam; and Halle, Morris. (1968). The sound pattern of English. New York: Harper & Row.
Clements, George N.; and Samuel J. Keyser. (1983). CV phonology: A generative theory of the syllable. Linguistic inquiry monographs (No. 9). Cambridge, MA: MIT Press. ISBN 0-262-53047-3 (pbk); ISBN 0-262-03098-5 (hbk).
Donegan, Patricia. (1985). On the Natural Phonology of Vowels. New York: Garland. ISBN 0-8240-5424-5.
Goldsmith, John A. (1979). The aims of autosegmental phonology. In D. A. Dinnsen (Ed.), Current approaches to phonological theory (pp. 202–222). Bloomington: Indiana University Press.
Goldsmith, John A. (1989). Autosegmental and metrical phonology: A new synthesis. Oxford: Basil Blackwell.
Gussenhoven, Carlos & Jacobs, Haike. "Understanding Phonology", Hodder & Arnold, 1998. 2nd edition 2005.
Halle, Morris. (1959). The sound pattern of Russian. The Hague: Mouton.
Harris, Zellig. (1951). Methods in structural linguistics. Chicago: Chicago University Press.
Hockett, Charles F. (1955). A manual of phonology. Indiana University publications in anthropology and linguistics, memoirs II. Baltimore: Waverley Press.
Jakobson, Roman; Fant, Gunnar; and Halle, Morris. (1952). Preliminaries to speech analysis: The distinctive features and their correlates. Cambridge, MA: MIT Press.
Kaisse, Ellen M.; and Shaw, Patricia A. (1985). On the theory of lexical phonology. In E. Colin and J. Anderson (Eds.), Phonology Yearbook 2 (pp. 1–30).
Kenstowicz, Michael. (1994). Phonology in generative grammar. Oxford: Basil Blackwell.
Ladefoged, Peter. (1982). A course in phonetics (2nd ed.). London: Harcourt Brace Jovanovich.
Napoli, Donna Jo (1996). Linguistics: An Introduction. New York: Oxford University Press.
Sandler, Wendy and Lillo-Martin, Diane. 2006. Sign language and linguistic universals. Cambridge: Cambridge University Press
de Saussure, Ferdinand. (1916). Cours de linguistique générale. Paris: Payot.
Stampe, David. (1979). A dissertation on natural phonology. New York: Garland.
Trubetzkoy, Nikolai. (1939). Grundzüge der Phonologie. Travaux du Cercle Linguistique de Prague 7.
Twaddell, William F. (1935). On defining the phoneme. Language monograph no. 16. Language.
External links
Canadian Armed Forces
The Canadian Armed Forces (CAF; French: Forces armées canadiennes, FAC), or Canadian Forces (CF; French: Forces canadiennes, FC), are the unified armed forces of Canada, as constituted by the National Defence Act, which states: "The Canadian Forces are the armed forces of Her Majesty raised by Canada and consist of one Service called the Canadian Armed Forces."
This unified institution consists of sea, land, and air elements referred to as the Royal Canadian Navy (RCN), Canadian Army, and Royal Canadian Air Force (RCAF). Personnel may belong to either the Regular Force or the Reserve Force, which has four sub-components: the Primary Reserve, Supplementary Reserve, Cadet Organizations Administration and Training Service, and the Canadian Rangers. Under the National Defence Act, the Canadian Armed Forces are an entity separate and distinct from the Department of National Defence (the federal government department responsible for administration and formation of defence policy), which also exists as the civilian support system for the Forces.
The Commander-in-Chief of the Canadian Armed Forces is the reigning Canadian monarch, who is represented by the Governor General of Canada. The Canadian Armed Forces are led by the Chief of the Defence Staff, who is advised and assisted by the Armed Forces Council.
Defence policy
thumb|right|Flag of the Canadian Armed Forces
Since the Second World War, Canadian defence policy has consistently stressed three overarching objectives:
The defence of Canada itself;
The defence of North America in co-operation with US forces;
Contributing to broader international security.
During the Cold War, a principal focus of Canadian defence policy was contributing to the security of Europe in the face of the Soviet military threat. Toward that end, Canadian ground and air forces were based in Europe from the early 1950s until the early 1990s.
However, since the end of the Cold War, as the North Atlantic Treaty Organization (NATO) has moved much of its defence focus "out of area", the Canadian military has also become more deeply engaged in international security operations in various other parts of the world – most notably in Afghanistan since 2002.
Canadian defence policy today is based on the Canada First Defence Strategy, introduced in 2008. Based on that strategy, the Canadian military is oriented and being equipped to carry out six core missions within Canada, in North America and globally. Specifically, the Canadian Armed Forces are tasked with having the capacity to:
Conduct daily domestic and continental operations, including in the Arctic and through NORAD (the North American Aerospace Defense Command);
Support a major international event in Canada, such as the 2010 Winter Olympics;
Respond to a major terrorist attack;
Support civilian authorities during a crisis in Canada such as a natural disaster;
Lead and/or conduct a major international operation for an extended period; and
Deploy forces in response to crises elsewhere in the world for shorter periods.Department of National Defence http://www.forces.gc.ca/site/focus/first-premier/defstra/summary-sommaire-eng.asp
Consistent with the missions and priorities outlined above, the Canadian Armed Forces also contribute to the conduct of Canadian defence diplomacy through a range of activities, including the deployment of Canadian Defence Attachés, participation in bilateral and multilateral military forums (e.g. the System of Cooperation Among the American Air Forces), ship and aircraft visits, military training and cooperation,For example, through the Military Training and Cooperation Program and its ancillary activities http://www.forces.gc.ca/admpol/mtcp-eng.html and other such outreach and relationship-building efforts.
History
Origins and establishment
Prior to Confederation in 1867, residents of the colonies in what is now Canada served as regular members of French and British forces and in local militia groups. The latter aided in the defence of their respective territories against attacks by other European powers, Aboriginal peoples, and later American forces during the American Revolutionary War and War of 1812, as well as in the Fenian raids, Red River Rebellion, and North-West Rebellion. Consequently, the lineages of some Canadian army units stretch back to the early 19th century, when militia units were formed to assist in the defence of British North America against invasion by the United States.
thumb|left|Canadian troops of the Stormont, Dundas and Glengarry Highlanders welcomed by liberated crowds in Leeuwarden, Netherlands, 16 April 1945.
The responsibility for military command remained with the British Crown-in-Council, with a commander-in-chief for North America stationed at Halifax until the final withdrawal of British Army and Royal Navy units from that city in 1906. Thereafter, the Royal Canadian Navy was formed, and, with the advent of military aviation, the Royal Canadian Air Force. These forces were organised under the Department of Militia and Defence, and split into the Permanent and Non-Permanent Active Militias, frequently shortened to simply The Militia. By 1923, the department was merged into the Department of National Defence, but land forces in Canada were not referred to as the Canadian Army until November 1940.
thumb|right|2nd Canadian Division soldiers advance behind a tank during the battle of Vimy Ridge.
The first overseas deployment of Canadian military forces occurred during the Second Boer War, when several units were raised to serve under British command. Similarly, when the United Kingdom entered into conflict with Germany in the First World War, Canadian troops were called to participate in European theatres. The Canadian Crown-in-Council then decided to send its forces into the Second World War, as well as the Korean War.
Since 1947, Canadian military units have participated in more than 200 operations worldwide, and completed 72 international operations. Canadian soldiers, sailors, and aviators came to be considered world-class professionals through conspicuous service during these conflicts and the country's integral participation in NATO during the Cold War, First Gulf War, Kosovo War, and in United Nations Peacekeeping operations, such as the Suez Crisis, Golan Heights, Cyprus, Croatia, Bosnia, Afghanistan, and Libya. Canada maintained an aircraft carrier from 1957 to 1970 during the Cold War, which never saw combat but participated in patrols during the Cuban Missile Crisis.
Battles which are particularly notable to the Canadian military include the Battle of Vimy Ridge, the Dieppe Raid, the Battle of Ortona, the Battle of Passchendaele, the Normandy Landings, the Battle for Caen, the Battle of the Scheldt, the Battle of Britain, the Battle of the Atlantic, the strategic bombing of German cities, and more recently the Battle of Medak Pocket, in Croatia.
At the end of the Second World War, Canada possessed the fourth-largest air force and fifth-largest naval surface fleet in the world, as well as the largest volunteer army ever fielded.World War – Willmott, H.P. et al.; Dorling Kindersley Limited, London, 2004, Page 168 Conscription for overseas service was introduced only near the end of the war, and only 2,400 conscripts actually made it into battle. Originally, Canada was thought to have had the third-largest navy in the world, but with the fall of the Soviet Union, new data based on Japanese and Soviet sources found that to be incorrect.http://www.navalreview.ca/wp-content/uploads/public/vol5num3/vol5num3art2.pdf
Since unification
The current iteration of the Canadian Armed Forces dates from 1 February 1968, when the Royal Canadian Navy, Canadian Army, and Royal Canadian Air Force were merged into a unified structure and superseded by elemental commands. Its roots, however, lie in colonial militia groups that served alongside garrisons of the French and British armies and navies; a structure that remained in place until the early 20th century. Thereafter, a distinctly Canadian army and navy was established, followed by an air force, that, because of the constitutional arrangements at the time, remained effectively under the control of the British government until Canada gained legislative independence from the United Kingdom in 1931, in part due to the distinguished achievement and sacrifice of the Canadian Corps in the First World War.
After the 1980s, the use of the "Canadian Armed Forces" name gave way to "Canadian Forces"; The "Canadian Armed Forces" name returned in 2013.
Land Forces during this period also deployed in support of peacekeeping operations within United Nations sanctioned conflicts. The nature of the Canadian Forces has continued to evolve. They have been deployed in Afghanistan until 2011, under the NATO-led United Nations International Security Assistance Force (ISAF), at the request of the Government of Afghanistan.
thumb|left|At sunset a convoy of Canadian armoured vehicles watches over the area near Khadan Village, Afghanistan.
The Armed Forces are today funded by approximately $20.1 billion annually and are presently ranked 74th in size among the world's armed forces by total personnel, and 58th in terms of active personnel, standing at a strength of roughly 68,000 regular members, plus 27,000 reservists, 5,000 Rangers, and 19,000 supplementary reserves, bringing the total force to approximately 119,000. The number of primary reserve personnel is expected to rise to 30,000 by 2020, and the number of active personnel to at least 70,000; together with 5,000 Rangers and 19,000 supplementary personnel, this would bring the total strength to around 124,000. These personnel serve on numerous CF bases located in all regions of the country and are governed by the Queen's Regulations and Orders and the National Defence Act.
In 2008 the Government of Canada made efforts, through the Canada First Defence Strategy, to modernize the Canadian Armed Forces, through the purchase of new equipment, improved training and readiness, as well as the establishment of the Canadian Special Operations Regiment. More funds were also put towards recruitment, which had been dwindling throughout the 1980s and '90s, possibly because the Canadian populace had come to perceive the CAF as peacekeepers rather than as soldiers, as shown in a 2008 survey conducted for the Department of National Defence. The poll found that nearly two thirds of Canadians agreed with the country's participation in the invasion of Afghanistan, and that the military should be stronger, but also that the purpose of the forces should be different, such as more focused on responding to natural disasters. Then CDS, Walter Natynczyk, said later that year that while recruiting has become more successful, the CF was facing a problem with its rate of loss of existing members, which increased between 2006 and 2008 from 6% to 9.2% annually.
thumb|right|Canadian Armed Forces personnel carry the coffin of a fallen comrade onto an aircraft at Kandahar Air Field, 1 February 2009
The 2006 renewal and re-equipment effort has resulted in the acquisition of specific equipment (main battle tanks, artillery, unmanned air vehicles and other systems) to support the mission in Afghanistan. It has also encompassed initiatives to renew certain so-called "core capabilities" (such as the air force's medium range transport aircraft fleet – the C-130 Hercules – and the army's truck and armoured vehicle fleets). In addition, new systems (such as C-17 Globemaster III strategic transport aircraft and CH-47 Chinook heavy-lift helicopters) have also been acquired for the Armed Forces. Although the viability of the Canada First Defence Strategy continues to suffer setbacks from challenging and evolving fiscal and other factors, it originally aimed to:
Increase the number of military personnel to 70,000 Regular Forces and 30,000 primary Reserve Forces;
Replace the Royal Canadian Navy's current auxiliary oiler ships with 2–3 new vessels under the Joint Support Ship Project;
Build 15 warships to replace existing destroyers and frigates under the Single Class Surface Combatant Project;
Acquire new Arctic patrol vessels under the Arctic Patrol Ship Project;
Replace the current maritime patrol aircraft with 10 to 12 new patrol aircraft;
Strengthen readiness and operational capabilities; and,
Improve and modernize defence infrastructure.
Role of women
thumb|right|RCAF CC-177 Globemaster on approach to CFB Trenton
In the 1950s, the recruitment of women was open to roles in medicine, communication, logistics, and administration. The roles of women in the CAF began to expand in 1971, after the Department reviewed the recommendations of the Royal Commission on the Status of Women, at which time it lifted the ceiling of 1,500 women personnel, and gradually expanded employment opportunities into the non-traditional areas—vehicle drivers and mechanics, aircraft mechanics, air-traffic controllers, military police, and firefighters. The Department further reviewed personnel policies in 1978 and 1985, after Parliament passed the Canadian Human Rights Act and the Canadian Charter of Rights and Freedoms. As a result of these reviews, the Department changed its policies to permit women to serve at sea in replenishment ships and in a diving tender, with the army service battalions, in military police platoons and field ambulance units, and in most air squadrons.
In 1987, occupations and units with the primary role of preparing for direct involvement in combat on the ground or at sea were still closed to women: infantry, armoured corps, field artillery, air-defence artillery, signals, field engineers, and naval operations. On 5 February 1987, the Minister of National Defence created an office to study the impact of employing men and women in combat units. These trials were called Combat-Related Employment of Women.
All military occupations were open to women in 1989, with the exception of submarine service, which opened in 2000. Throughout the 1990s, the introduction of women into the combat arms increased the potential recruiting pool by about 100 percent. It also provided opportunities for all persons to serve their country to the best of their abilities.
Women were fully integrated in all occupations and roles by the government of Jean Chrétien, and by 8 March 2000, even allowed to serve on submarines.
All equipment must be suitable for a mixed-gender force. Combat helmets, rucksacks, combat boots, and flak jackets are designed to ensure women have the same level of protection and comfort as their male colleagues. The women's uniform is similar in design to the men's uniform, but conforms to the female figure, and is functional and practical. Women are also provided with an annual financial entitlement for the purchase of brassiere undergarments.
Current structure
The following is the hierarchy of the Canadian Armed Forces. It begins at the top with the most senior-ranking personnel and works its way into lower organizations.
Commander-in-Chief: Her Majesty Queen Elizabeth II, represented by Governor General David Johnston
Level Zero Organization (L0): Chief of the Defence Staff
Level One Organizations (L1s): Vice Chief of the Defence Staff; Commander of the Royal Canadian Navy; Commander of the Royal Canadian Air Force; Commander of the Canadian Army; Chief of Military Personnel; Commander of the Canadian Special Operations Forces Command; Commander of the Canadian Joint Operations Command; Commander of the Canadian Forces Intelligence Command
Level Two Organizations (L2s)
Level Three Organizations (L3s)
The Canadian constitution determines that the Commander-in-Chief of the Canadian Armed Forces is the country's sovereign, who, since 1904, has authorized his or her viceroy, the governor general, to exercise the duties ascribed to the post of Commander-in-Chief and to hold the associated title since 1905. All troop deployment and disposition orders, including declarations of war, fall within the royal prerogative and are issued as Orders in Council, which must be signed by either the monarch or governor general. Under the Westminster system's parliamentary customs and practices, however, the monarch and viceroy must generally follow the advice of his or her ministers in Cabinet, including the prime minister and minister of national defence, who are accountable to the elected House of Commons.
The Armed Forces' 115,349 personnel are divided into a hierarchy of numerous ranks of officers and non-commissioned members. The governor general appoints, on the advice of the prime minister, the Chief of the Defence Staff (CDS) as the highest ranking commissioned officer in the Armed Forces and who, as head of the Armed Forces Council, is in command of the Canadian Forces. The Armed Forces Council generally operates from National Defence Headquarters (NDHQ) in Ottawa, Ontario. On the Armed Forces Council sit the heads of Canadian Joint Operations Command and Canadian Special Operations Forces Command, the Vice Chief of the Defence Staff, and the heads of the Royal Canadian Navy, the Canadian Army, the Royal Canadian Air Force and other key Level 1 organizations. The sovereign and most other members of the Canadian Royal Family also act as colonels-in-chief, honorary air commodores, air commodores-in-chief, admirals, and captains-general of Canadian Forces units, though these positions are ceremonial.
Canada's Armed forces operate out of 27 Canadian Forces bases (CFB) across the country, including NDHQ. This number has been gradually reduced since the 1970s with bases either being closed or merged. Both officers and non-commissioned members receive their basic training at the Canadian Forces Leadership and Recruit School in Saint-Jean-sur-Richelieu. Officers will generally either directly enter the Canadian Armed Forces with a degree from a civilian university, or receive their commission upon graduation from the Royal Military College of Canada. Specific element and trade training is conducted at a variety of institutions throughout Canada, and to a lesser extent, the world.
Royal Canadian Navy
thumb|right|HMCS Algonquin, a guided-missile destroyer
The Royal Canadian Navy (RCN), headed by the Commander of the Royal Canadian Navy, includes 33 warships and submarines deployed in two fleets: Maritime Forces Pacific (MARPAC) at CFB Esquimalt on the west coast, and Maritime Forces Atlantic (MARLANT) at Her Majesty's Canadian Dockyard in Halifax on the east coast, as well as one formation: the Naval Reserve Headquarters (NAVRESHQ) at Quebec City, Quebec. The fleet is augmented by various aircraft and supply vessels. The RCN participates in NATO exercises and operations, and ships are deployed all over the world in support of multinational deployments.
Canadian Army
thumb|right|Soldiers from the Canadian Grenadier Guards in the Kandahar Province of Afghanistan
The Canadian Army is headed by the Commander of the Canadian Army and administered through four divisions—the 2nd Canadian Division, the 3rd Canadian Division, the 4th Canadian Division and the 5th Canadian Division—the Canadian Army Doctrine and Training System and the Canadian Army Headquarters.
Currently, the Regular Force component of the Army consists of three field-ready brigade groups: 1 Canadian Mechanized Brigade Group, at CFB Edmonton and CFB Shilo; 2 Canadian Mechanized Brigade Group, at CFB Petawawa and CFB Gagetown; and 5 Canadian Mechanized Brigade Group, at CFB Valcartier and Quebec City. Each contains one regiment each of artillery, armour, and combat engineers, three battalions of infantry (all scaled in the British fashion), one battalion for logistics, a squadron for headquarters/signals, and several smaller support organizations. A tactical helicopter squadron and a field ambulance are co-located with each brigade, but do not form part of the brigade's command structure.
The 2nd, 3rd and 4th Canadian Divisions each have a Regular Force brigade group, and each division except the 1st has two to three Reserve Force brigade groups. In total, there are ten Reserve Force brigade groups. The 5th Canadian Division and the 2nd Canadian Division each have two Reserve Force brigade groups, while the 4th Canadian Division and the 3rd Canadian Division each have three Reserve Force brigade groups. Major training and support establishments exist at CFB Gagetown, CFB Montreal and CFB Wainwright.
Royal Canadian Air Force
thumb|right|CF-18 Hornet launches laser-guided bomb
The Royal Canadian Air Force (RCAF) is headed by the Commander of the Royal Canadian Air Force. The commander of 1 Canadian Air Division and Canadian NORAD Region, based in Winnipeg, is responsible for the operational command and control of Air Force activities throughout Canada and worldwide. 1 Canadian Air Division operations are carried out through eleven wings located across Canada. The commander of 2 Canadian Air Division is responsible for training and support functions. 2 Canadian Air Division operations are carried out at two wings. Wings represent the grouping of various squadrons, both operational and support, under a single tactical commander reporting to the operational commander and vary in size from several hundred personnel to several thousand.
Major air bases are located in British Columbia, Alberta, Saskatchewan, Manitoba, Ontario, Quebec, Nova Scotia, and Newfoundland and Labrador, while administrative and command and control facilities are located in Winnipeg and North Bay. A Canadian component of the NATO Airborne Early Warning Force is also based at NATO Air Base Geilenkirchen near Geilenkirchen, Germany.
The RCAF and Joint Task Force (North) (JTFN) also maintain at various points throughout Canada's northern region a chain of forward operating locations, each capable of supporting fighter operations. Elements of CF-18 squadrons periodically deploy to these airports for short training exercises or Arctic sovereignty patrols.
Canadian Joint Operations Command
The Canadian Joint Operations Command is an operational element established in October 2012 with the merger of Canada Command, the Canadian Expeditionary Force Command and the Canadian Operational Support Command. The new command, created as a response to the cost-cutting measures in the 2012 federal budget, combines the resources, roles and responsibilities of the three former commands under a single headquarters.
Canadian Special Operations Forces Command
The Canadian Special Operations Forces Command (CANSOFCOM) is a formation capable of operating independently but primarily focused on generating special operations forces (SOF) elements to support CJOC. The command includes Joint Task Force 2 (JTF2), the Canadian Joint Incident Response Unit (CJIRU) based at CFB Trenton, as well as the Canadian Special Operations Regiment (CSOR) and 427 Special Operations Aviation Squadron (SOAS) based at CFB Petawawa.
Information Management Group
Among other things, the Information Management Group is responsible for the conduct of electronic warfare and the protection of the Armed Forces' communications and computer networks. Within the group, this operational role is fulfilled by the Canadian Forces Information Operations Group, headquartered at CFS Leitrim in Ottawa, which operates the following units: the Canadian Forces Information Operations Group Headquarters (CFIOGHQ), the Canadian Forces Electronic Warfare Centre (CFEWC), the Canadian Forces Network Operation Centre (CFNOC), the Canadian Forces Signals Intelligence Operations Centre (CFSOC), the Canadian Forces Station (CFS) Leitrim, and the 764 Communications Squadron. In June 2011 the Canadian Armed Forces Chief of Force Development announced the establishment of a new organization, the Directorate of Cybernetics, headed by a Brigadier General, the Director General Cyber (DG Cyber). Within that directorate the newly established CAF Cyber Task Force, has been tasked to design and build cyber warfare capabilities for the Canadian Armed Forces.The Maple Leaf, 22 June 2011, Vol. 14, No. 22, p.3Khang Pham, Cyber Security: Do Your Part, The Maple Leaf, Vol. 15, No. 2, February 2012, p.12
Canadian Forces Health Services Group
The Health Services Group is a joint formation that includes over 120 general or specialized units and detachments providing health services to the Canadian Armed Forces. With few exceptions, all elements are under command of the Surgeon General for domestic support and force generation, or temporarily assigned under command of a deployed Joint Task Force through Canadian Joint Operations Command."Canadian Forces Health Services website" Retrieved on 18 February 2012"Canadian Forces Health Services Group Surgeon General’s Report 2010" Retrieved on 18 February 2012
Canadian Armed Forces Reserve Force
The Canadian Armed Forces have a total reserve force of approximately 50,000 primary and supplementary that can be called upon in times of national emergency or threat. For the components and sub-components of the Canadian Armed Forces Reserve Force, the order of precedence follows:
(1) Primary Reserve (26,000),
(2) Supplementary Reserve (11,000); prior to 2002 this consisted of (a) the Supplementary Ready Reserve and (b) the Supplementary Holding Reserve, but since 2002 the Supplementary Reserve has had no sub-divisions,
(3) Cadet Organizations Administration and Training Service (7,500), and
(4) Canadian Rangers (5,000).
Primary Reserve
Approximately 26,000 citizen soldiers, sailors, and airmen and women, trained to the level of and interchangeable with their Regular Force counterparts, and posted to CAF operations or duties on a casual or ongoing basis, make up the Primary Reserve. This group is represented, though not commanded, at NDHQ by the Chief of Reserves and Cadets, who is usually a major general or rear admiral, and is divided into four components that are each operationally and administratively responsible to its corresponding environmental command in the Regular Force: the Naval Reserve (NAVRES), Land Force Reserve (LFR), and Air Reserve (AIRRES), in addition to one force that does not fall under an environmental command, the Health Services Reserve under the Canadian Forces Health Services Group.
Cadet Organizations Administration and Training Service
The Cadet Organizations Administration and Training Service (COATS)"Administrative Order: Implementation of Cadet Organizations Administration and Training Service", NDHQ 1085-30 (D Cdts 6) dated 2 July 2009. consists of officers and non-commissioned members who conduct training, safety, supervision and administration of nearly 60,000 cadets aged 12 to 18 years in the Canadian Cadet Movement. The majority of members in COATS are officers of the Cadet Instructors Cadre (CIC) branch of the CAF. Members of the Reserve Force Sub-Component COATS who are not employed part-time (Class A) or full-time (Class B) may be held on the "Cadet Instructor Supplementary Staff List" (CISS List) in anticipation of employment in the same manner as other reservists are held as members of the Supplementary Reserve.
Canadian Rangers
The Canadian Rangers, who provide surveillance and patrol services in Canada's arctic and other remote areas, are an essential reserve force component used for Canada's exercise of sovereignty over its northern territory.
Uniforms
thumb|right|A sergeant of The British Columbia Regiment (Duke of Connaught's Own) wearing the ceremonial dress of the regiment, standing behind a C9 LMG
Although the Canadian Armed Forces are a single service, there are three similar but distinctive environmental uniforms (DEUs): navy blue (which is actually black) for the navy, rifle green for the army, and light blue for the air force. CAF members in operational occupations generally wear the DEU to which their occupation "belongs." CAF members in non-operational occupations (the "purple" trades) are allocated a uniform according to the "distribution" of their branch within the CAF, association of the branch with one of the former services, and the individual's initial preference. Therefore, on any given day, in any given CAF unit, all three coloured uniforms may be seen.
The uniforms of the CAF are sub-divided into five orders of dress:Canada – National Defence: "A-AD-265-000/AG-001 CANADIAN FORCES DRESS INSTRUCTIONS"
Ceremonial dress, including regimental full dress, patrol dress, naval "high-collar" whites, and service dress uniforms with ceremonial accoutrements such as swords, white web belts, gloves, etc.;
Mess dress, which ranges from full mess kit with mess jacket, cummerbund, or waistcoat, etc., to service dress with bow tie;
Service dress, also called a walking-out or duty uniform, is the military equivalent of the business suit, with an optional white summer uniform for naval CF members;
Operational dress, an originally specialized uniform for wear in an operational environment, now for everyday wear on base or in garrison; and
Occupational dress, which is specialized uniform articles for particular occupations (e.g., medical / dental).
Only service dress is suitable for CAF members to wear on any occasion, barring "dirty work" or combat. With gloves, swords, and medals (No. 1 or 1A), it is suitable for ceremonial occasions and "dressed down" (No. 3 or lower), it is suitable for daily wear. Generally, after the elimination of base dress (although still defined for the Air Force uniform), operational dress is now the daily uniform worn by most members of the CF, unless service dress is prescribed (such as at the NDHQ, on parades, at public events, etc.). Approved parkas are authorized for winter wear in cold climates and a light casual jacket is also authorized for cooler days. The navy, most army, and some other units have, for very specific occasions, a ceremonial/regimental full dress, such as the naval "high-collar" white uniform, kilted Highland, Scottish, and Irish regiments, and the scarlet uniforms of the Royal Military Colleges.
thumbnail|The Canadian Army, Royal Canadian Navy and Royal Canadian Air Force each have a distinctive service dress uniform differentiated by colour, cut and headdress.
Authorized headdress for the Canadian Armed Forces are the: beret, wedge cap, ballcap, Yukon cap, and tuque (toque). Each is coloured according to the distinctive uniform worn: navy (white or navy blue), army (rifle green or "regimental" colour), air force (light blue). Adherents of the Sikh faith may wear uniform turbans (dastar) (or patka, when operational) and Muslim women may wear uniform tucked hijabs under their authorized headdress. Jews may wear yarmulke under their authorized headdress and when bareheaded. The beret is probably the most widely worn headgear and is worn with almost all orders of dress (with the exception of the more formal orders of Navy and Air Force dress), and the colour of which is determined by the wearer's environment, branch, or mission. Naval personnel, however, seldom wear berets, preferring either service cap or authorized ballcaps (shipboard operational dress), which only the Navy wear. Air Force personnel, particularly officers, prefer the wedge cap to any other form of headdress. There is no naval variant of the wedge cap. The Yukon cap and tuque are worn only with winter dress, although clearance and combat divers may wear tuques year-round as a watch cap. Soldiers in Highland, Scottish, and Irish regiments generally wear alternative headdress, including the glengarry, balmoral, tam o'shanter, and caubeen instead of the beret. The officer cadets of both Royal Military Colleges wear gold-braided "pillbox" (cavalry) caps with their ceremonial dress and have a unique fur "Astrakhan" for winter wear. The Canadian Army wears the CG634 helmet.
Military expenditures
thumb|right|Actros Armoured Heavy Support Vehicle System trucks in Valcartier.
The Constitution of Canada gives the federal government exclusive responsibility for national defence, and expenditures are thus outlined in the federal budget. For the 2016–17 fiscal year, the amount allocated for defence spending was CAD$18.6 billion.
See also
Authorized marches of the Canadian Forces
Canada First Defence Strategy
Canadian Forces ranks and insignia
Canadian Cadet Organizations
Canadian Coast Guard
Canadian Forces order of precedence
Canadian Forces Radio and Television
Code of Service Discipline
Communications Security Establishment Canada
Defence diplomacy
List of Canadian military occupations
List of Canadian military operations
List of conflicts in Canada
List of infantry weapons and equipment of the Canadian military
North Warning System
Planned Canadian Forces projects
Upholder/Victoria-class submarine
Notes
References
Further reading
Beaudet, Normand (1993). Le Mythe de la défense canadienne. Montréal: Éditions Écosociété. ISBN 2-921561-11-5
Rennick, Joanne Benham (2013). "Canadian Values and Military Operations in the Twenty-First Century," Armed Forces & Society 39, No. 3, pp. 511–30
Leuprecht, Christian & Sokolsky, Joel. (2014). Defense Policy "Walmart Style" Canadian Lessons in "not so grand" Grand Strategy. Armed Forces & Society Journal Online First. http://afs.sagepub.com/content/early/2014/07/02/0095327X14536562.abstract
Faces of War at Library and Archives Canada
External links
Official website of the Canadian Armed Forces
Official website of the Canadian Army
Official website of the Royal Canadian Navy
Official website of the Royal Canadian Air Force
Combat Camera – Official CF photo website
Canadian Military Documentary channel on YouTube
Category:Department of National Defence (Canada)
Category:Uniformed services of Canada
Category:Military education and training in Canada
Muammar Gaddafi
Muammar Mohammed Abu Minyar Gaddafi (c. 1942 – 20 October 2011), commonly known as Colonel Gaddafi, was a Libyan revolutionary, politician, and political theorist. He governed Libya as Revolutionary Chairman of the Libyan Arab Republic from 1969 to 1977 and then as the "Brotherly Leader" of the Great Socialist People's Libyan Arab Jamahiriya from 1977 to 2011. He was initially ideologically committed to Arab nationalism and Arab socialism, but he came to rule according to his own Third International Theory before embracing Pan-Africanism.
Born near Sirte, Gaddafi was the son of an impoverished Bedouin goat herder. He became involved in politics while at school in Sabha, later enrolling in the Royal Military Academy, Benghazi. He founded a revolutionary cell within the military; in 1969, they seized power from the monarchy of King Idris in a bloodless coup. Gaddafi became Chairman of the governing Revolutionary Command Council (RCC); he then abolished the monarchy and proclaimed the Republic, ruling by decree. He implemented measures to remove what he viewed as foreign imperialist influence from Libya, and strengthened ties to Arab nationalist governments, particularly Gamal Abdel Nasser's Egypt. He was intent on pushing Libya towards "Islamic socialism", introducing sharia as the basis for the legal system and nationalising the oil industry, using the increased revenues to bolster the military, implement social programs, and fund revolutionary militants across the world. In 1973, he initiated a "Popular Revolution" with the formation of General People's Committees (GPCs), purported to be a system of direct democracy, but retained personal control over major decisions. He outlined his Third International Theory that year, publishing these ideas in The Green Book.
In 1977, Gaddafi dissolved the Republic and created a new socialist state called the Jamahiriya ("state of the masses"), officially adopting a symbolic role in governance. He retained power as military commander-in-chief and head of the Revolutionary Committees responsible for policing and suppressing opponents. He oversaw unsuccessful border conflicts with Egypt and Chad, and his support for foreign militants and alleged responsibility for the Lockerbie bombing led to Libya's label of "international pariah". A particularly hostile relationship developed with the United States and United Kingdom, resulting in the 1986 U.S. bombing of Libya and United Nations-imposed economic sanctions. Gaddafi rejected his earlier ideological commitments and encouraged economic privatisation from 1999, seeking rapprochement with Western nations while also embracing Pan-Africanism and serving as Chairperson of the African Union from 2009–10. Amid the Arab Spring in 2011, an anti-Gaddafist uprising broke out in eastern Libya, led by the National Transitional Council (NTC) and resulting in the Libyan Civil War. NATO intervened militarily on the side of the NTC, bringing about the government's downfall. Retreating to Sirte, Gaddafi was captured and killed by NTC militants.
Gaddafi dominated Libya's politics for four decades and was the subject of a pervasive cult of personality. A controversial and highly divisive world figure, he was decorated with various awards and lauded for both his anti-imperialist stance and his support for Pan-Africanism and Pan-Arabism. Conversely, he was internationally condemned as a dictator and autocrat whose authoritarian administration violated the human rights of Libyan citizens and supported irredentist movements, tribal warfare, and terrorism in many other nations.
Early life
Childhood: 1942/43–50
Muammar Mohammed Abu Minyar Gaddafi was born in a tent near Qasr Abu Hadi, a rural area outside the town of Sirte in the deserts of Tripolitania, western Libya. His family came from a small, relatively un-influential tribal group called the Qadhadhfa, who were Arabized Berber in heritage. His mother was named Aisha (died 1978), and his father, Mohammad Abdul Salam bin Hamed bin Mohammad, was known as Abu Meniar (died 1985) and earned a meager subsistence as a goat and camel herder. Nomadic Bedouins, they were illiterate and kept no birth records. As such, Gaddafi's date of birth is not known with certainty, and sources have set it in 1942 or in the spring of 1943, although his biographers David Blundy and Andrew Lycett noted that it could have been pre-1940. His parents' only surviving son, he had three older sisters. Gaddafi's upbringing in Bedouin culture influenced his personal tastes for the rest of his life; he preferred the desert over the city and would retreat there to meditate.
From childhood, Gaddafi was aware of the involvement of European colonialists in Libya; his nation was occupied by Italy, and during the North African Campaign of World War II it witnessed conflict between Italian and British troops. According to later claims, Gaddafi's paternal grandfather, Abdessalam Bouminyar, was killed by the Italian Army during the Italian invasion of 1911. At World War II's end in 1945, Libya was occupied by British and French forces. Although Britain and France intended on dividing the nation between their empires, the General Assembly of the United Nations (UN) declared that the country be granted political independence. In 1951, the UN created the United Kingdom of Libya, a federal state under the leadership of a pro-Western monarch, Idris, who banned political parties and centralised power in his monarchy.
Education and political activism: 1950–63
Gaddafi's earliest education was of a religious nature, imparted by a local Islamic teacher. Subsequently moving to nearby Sirte to attend elementary school, he progressed through six grades in four years. Education in Libya was not free, but his father thought it would greatly benefit his son despite the financial strain. During the week Gaddafi slept in a mosque, and at weekends walked 20 miles to visit his parents. Bullied for being a Bedouin, he was proud of his identity and encouraged pride in other Bedouin children. From Sirte, he and his family moved to the market town of Sabha in Fezzan, south-central Libya, where his father worked as a caretaker for a tribal leader while Muammar attended secondary school, something neither parent had done. Gaddafi was popular at school; some friends made there received significant jobs in his later administration, most notably his best friend Abdul Salam Jalloud.
thumb|left|150px|Egyptian President Nasser was Gaddafi's political hero
Many teachers at Sabha were Egyptian, and for the first time Gaddafi had access to pan-Arab newspapers and radio broadcasts, most notably the Cairo-based Voice of the Arabs. Growing up, Gaddafi witnessed significant events rock the Arab world, including the 1948 Arab–Israeli War, the Egyptian Revolution of 1952, the Suez Crisis of 1956, and the short-lived existence of the United Arab Republic between 1958 and 1961. Gaddafi admired the political changes implemented in the Arab Republic of Egypt under his hero, President Gamal Abdel Nasser. Nasser argued for Arab nationalism; the rejection of Western colonialism, neo-colonialism, and Zionism; and a transition from capitalism to socialism. Gaddafi was influenced by Nasser's book, Philosophy of the Revolution, which outlined how to initiate a coup.
Gaddafi organised demonstrations and distributed posters criticising the monarchy. In October 1961, he led a demonstration protesting against Syria's secession from the United Arab Republic, during which protesters broke windows in a local hotel accused of serving alcohol. To punish Gaddafi, the authorities expelled him and his family from Sabha. Gaddafi moved to Misrata, there attending Misrata Secondary School. Maintaining his interest in Arab nationalist activism, he refused to join any of the banned political parties active in the city—including the Arab Nationalist Movement, the Arab Socialist Ba'ath Party, and the Muslim Brotherhood—claiming that he rejected factionalism. He read voraciously on the subjects of Nasser and the French Revolution of 1789, as well as the works of Syrian political theorist Michel Aflaq and biographies of Abraham Lincoln, Sun Yat-sen, and Mustafa Kemal Atatürk.
Military training: 1963–66
Gaddafi briefly studied history at the University of Libya in Benghazi before dropping out to join the military. Despite his police record, in 1963 he began training at the Royal Military Academy, Benghazi, alongside several like-minded friends from Misrata. The armed forces offered the only opportunity for upward social mobility for underprivileged Libyans, and Gaddafi recognised them as a potential instrument of political change. Under Idris, Libya's armed forces were trained by the British military; this angered Gaddafi, who viewed the British as imperialists, and accordingly he refused to learn English and was rude to the British officers, ultimately failing his exams. British trainers reported him for insubordination and abusive behaviour, stating their suspicion that he was involved in the assassination of the military academy's commander in 1963. Such reports were ignored and Gaddafi quickly progressed through the course.
thumb|250px|Gaddafi in Piccadilly, London, 1966
With a group of loyal cadres, in 1964 Gaddafi founded the Central Committee of the Free Officers Movement, a revolutionary group named after Nasser's Egyptian Free Officers movement. Led by Gaddafi, they met clandestinely and were organised into a cell system, pooling their salaries into a single fund. Gaddafi travelled around Libya gathering intelligence and developing connections with sympathisers, but the government's intelligence services ignored him, considering him little threat.
Graduating in August 1965, Gaddafi became a communications officer in the army's signal corps. In April 1966, he was assigned to the United Kingdom for further training; over 9 months he underwent an English-language course at Beaconsfield, Buckinghamshire, an Army Air Corps signal instructors course in Bovington Camp, Dorset, and an infantry signal instructors course at Hythe, Kent. Despite later rumours to the contrary, he did not attend the Royal Military Academy Sandhurst.
The Bovington signal course's director reported that Gaddafi successfully overcame problems learning English, displaying a firm command of voice procedure. Noting that Gaddafi's favourite hobbies were reading and playing football, he thought him an "amusing officer, always cheerful, hard-working, and conscientious." Gaddafi disliked England, claiming British Army officers racially insulted him and finding it difficult adjusting to the country's culture; asserting his Arab identity in London, he walked around Piccadilly wearing traditional Libyan robes. He later related that while he travelled to England believing it more advanced than Libya, he returned home "more confident and proud of our values, ideals and social character."
Libyan Arab Republic
Coup d'état: 1969
Idris' government was increasingly unpopular by the latter 1960s; it had exacerbated Libya's traditional regional and tribal divisions by centralising the country's federal system in order to take advantage of the country's oil wealth, while corruption and entrenched systems of patronage were widespread throughout the oil industry. Arab nationalism was increasingly popular, and protests flared up following Egypt's 1967 defeat in the Six-Day War with Israel; allied to the Western powers, Idris' administration was seen as pro-Israeli. Anti-Western riots broke out in Tripoli and Benghazi, while Libyan workers shut down oil terminals in solidarity with Egypt. By 1969, the U.S. Central Intelligence Agency was expecting segments of Libya's armed forces to launch a coup. Although claims have been made that they knew of Gaddafi's Free Officers Movement, they have since claimed ignorance, stating that they were monitoring Abdul Aziz Shalhi's Black Boots revolutionary group.
In mid-1969, Idris travelled abroad to spend the summer in Turkey and Greece. Gaddafi's Free Officers recognised this as their chance to overthrow the monarchy, initiating "Operation Jerusalem". On 1 September, they occupied airports, police depots, radio stations and government offices in Tripoli and Benghazi. Gaddafi took control of the Berka barracks in Benghazi, while Omar Meheisha occupied Tripoli barracks and Jalloud seized the city's anti-aircraft batteries. Khweldi Hameidi was sent to arrest crown prince Sayyid Hasan ar-Rida al-Mahdi as-Sanussi and force him to relinquish his claim to the throne. The Free Officers met no serious resistance and used little violence against the monarchists.
Once Gaddafi removed the monarchical government, he announced the foundation of the Libyan Arab Republic. Addressing the populace by radio, he proclaimed an end to the "reactionary and corrupt" regime, "the stench of which has sickened and horrified us all." Due to the coup's bloodless nature, it was initially labelled the "White Revolution", although it was later renamed the "One September Revolution" after the date on which it occurred. Gaddafi insisted that the Free Officers' coup represented a revolution, marking the start of widespread change in the socio-economic and political nature of Libya. He proclaimed that the revolution meant "freedom, socialism, and unity", and over the coming years implemented measures to achieve this.
Consolidating leadership: 1969–73
The 12-member central committee of the Free Officers proclaimed themselves the Revolutionary Command Council (RCC), the government of the new republic. Gaddafi became RCC Chairman, and therefore the de facto head of state, also appointing himself to the rank of Colonel and becoming commander-in-chief of the armed forces. Jalloud became Prime Minister, while a civilian Council of Ministers headed by Sulaiman Maghribi was founded to implement RCC policy. Libya's administrative capital was moved from al-Beida to Tripoli.
thumb|left|The Flag of the Libyan Arab Republic (1969–77)
Although the RCC was theoretically a collegial body operating through consensus building, Gaddafi dominated it, even as some of the other members attempted to constrain what they saw as his excesses. Gaddafi remained the government's public face, with the identities of the other RCC members only being publicly revealed on 10 January 1970. All young men from (typically rural) working and middle-class backgrounds, none had university degrees; in this way they were distinct from the wealthy, highly educated conservatives who previously governed the country.
The coup completed, the RCC proceeded with their intentions of consolidating the revolutionary government and modernising the country. They purged monarchists and members of Idris' Senussi clan from Libya's political world and armed forces; Gaddafi believed this elite were opposed to the will of the Libyan people and had to be expunged. "People's Courts" were founded to try various monarchist politicians and journalists, many of whom were imprisoned, although none were executed. Idris was sentenced to death in absentia.
In May 1970, the Revolutionary Intellectuals Seminar was held to bring intellectuals in line with the revolution, while that year's Legislative Review and Amendment united secular and religious law codes, introducing sharia into the legal system.
Ruling by decree, the RCC maintained the monarchy's ban on political parties, in May 1970 banned trade unions, and in 1972 outlawed workers' strikes and suspended newspapers. In September 1971, Gaddafi resigned, claiming to be dissatisfied with the pace of reform, but returned to his position within a month. In February 1973, he resigned again, once more returning the following month.
Economic and social reform
thumb|300px|Gaddafi at an Arab summit in Libya in 1969, shortly after the September Revolution that toppled King Idris I. Gaddafi sits in military uniform in the middle, surrounded by President Gamal Abdel Nasser (left) and Syrian President Nureddin al-Atassi (right).
The RCC's early economic policy has been characterised as state capitalist in orientation. A number of schemes were established to aid entrepreneurs and develop a Libyan bourgeoisie. Seeking to expand the cultivable acreage in Libya, in September 1969 the government launched a "Green Revolution" to raise agricultural productivity so that Libya could rely less on imported food. All land that had been confiscated from Italian settlers, or that was not in use, was expropriated and redistributed. Irrigation systems were established along the northern coastline and around various inland oases. Production costs often outstripped the value of the produce, and thus Libyan agricultural production remained in deficit, relying heavily on state subsidies.
With crude oil as the country's primary export, Gaddafi sought to improve Libya's oil sector. In October 1969, he proclaimed the current trade terms unfair, benefiting foreign corporations more than the Libyan state, and by threatening to reduce production, in December Jalloud successfully increased the price of Libyan oil. In 1970, other OPEC states followed suit, leading to a global increase in the price of crude oil. The RCC followed with the Tripoli Agreement, in which they secured income tax, back-payments and better pricing from the oil corporations; these measures brought Libya an estimated $1 billion in additional revenues in its first year.
Increasing state control over the oil sector, the RCC began a program of nationalization, starting with the expropriation of British Petroleum's share of the British Petroleum-N.B. Hunt Sahir Field in December 1971. In September 1973, it was announced that all foreign oil producers active in Libya were to see 51% of their operations nationalised. For Gaddafi, this was an important step towards socialism. It proved an economic success; while gross domestic product had been $3.8 billion in 1969, it had risen to $13.7 billion in 1974, and $24.5 billion in 1979. In turn, Libyans' standard of living greatly improved over the first decade of Gaddafi's administration, and by 1979 the average per-capita income was $8,170, up from $40 in 1951; this was above the average of many industrialised countries like Italy and the U.K.
thumb|left|250px|In 1971, Egypt's Anwar Sadat, Libya's Gaddafi and Syria's Hafez al-Assad signed an agreement to form a federal Union of Arab Republics. The agreement never materialized into a federal union between the three Arab states.
The RCC implemented measures for social reform, adopting sharia as a basis. The consumption of alcohol was banned, night clubs and Christian churches were shut down, traditional Libyan dress was encouraged, and Arabic was decreed as the only language permitted in official communications and on road signs. The RCC doubled the minimum wage, introduced statutory price controls, and implemented compulsory rent reductions of between 30 and 40%.
From 1969 to 1973, the RCC used oil money to fund social welfare programs, which led to house-building projects and improved healthcare and education. House building became a major social priority, designed to eliminate homelessness and to replace the shanty towns created by Libya's growing urbanisation. The health sector was also expanded; by 1978, Libya had 50% more hospitals than it had in 1968, while the number of doctors had grown from 700 to over 3,000 in that decade. Malaria was eradicated, and trachoma and tuberculosis greatly curtailed. Compulsory education was extended from six to nine years, while adult literacy programs and free university education were introduced. Beida University was founded, while Tripoli University and Benghazi University were expanded. In doing so the government helped to integrate the poorer strata of Libyan society into the education system. Through these measures, the RCC greatly expanded the public sector, providing employment for thousands. These early social programs proved popular within Libya. This popularity was partly due to Gaddafi's personal charisma, youth and underdog status as a Bedouin, as well as his rhetoric emphasising his role as the successor to the anti-Italian fighter Omar Mukhtar.
To combat the country's strong regional and tribal divisions, the RCC promoted the idea of a unified pan-Libyan identity. In doing so, they tried to discredit tribal leaders as agents of the old regime, and in August 1971 a Sabha military court tried many of them for counter-revolutionary activity. Long-standing administrative boundaries were re-drawn, crossing tribal boundaries, and pro-revolutionary modernisers replaced traditional leaders, although the communities they served often rejected them. Realising the failures of the modernisers, Gaddafi created the Arab Socialist Union (ASU) in June 1971, a mass mobilisation vanguard party of which he was president. The ASU recognised the RCC as its "Supreme Leading Authority" and was designed to further revolutionary enthusiasm throughout the country. It remained heavily bureaucratic and failed to mobilise mass support in the way Gaddafi had envisioned.
Foreign relations
thumb|200px|Gaddafi (left) with Egyptian President Nasser in 1969. Nasser privately thought Gaddafi "a nice boy, but terribly naive".
The influence of Nasser's Arab nationalism over the RCC was immediately apparent. The administration was instantly recognised by the neighbouring Arab nationalist regimes in Egypt, Syria, Iraq and Sudan, with Egypt sending experts to aid the inexperienced RCC. Gaddafi propounded Pan-Arab ideas, proclaiming the need for a single Arab state stretching across North Africa and the Middle East. In December 1969, Libya signed the Tripoli Charter alongside Egypt and Sudan. This established the Arab Revolutionary Front, a pan-national union designed as a first step towards the eventual political unification of the three nations. In 1970 Syria declared its intention to join.
Nasser died unexpectedly in September 1970, with Gaddafi playing a prominent role at his funeral. Nasser was succeeded by Anwar Sadat, who suggested that rather than creating a unified state, the Arab states should create a political federation, implemented in April 1971; in doing so, Egypt, Syria and Sudan received large grants of Libyan oil money. In February 1972, Gaddafi and Sadat signed an unofficial charter of merger, but it was never implemented because relations broke down the following year. Sadat became increasingly wary of Libya's radical direction, and the September 1973 deadline for implementing the Federation passed by with no action taken.
After the 1969 coup, representatives of the Four Powers – France, the United Kingdom, the United States and the Soviet Union – were called to meet RCC representatives. The U.K. and the U.S. quickly extended diplomatic recognition, hoping to secure the position of their military bases in Libya and fearing further instability. Hoping to ingratiate themselves with Gaddafi, in 1970 the U.S. informed him of at least one planned counter-coup. Such attempts to form a working relationship with the RCC failed; Gaddafi was determined to reassert national sovereignty and expunge what he described as foreign colonial and imperialist influences. His administration insisted that the U.S. and the U.K. remove their military bases from Libya, with Gaddafi proclaiming that "the armed forces which rose to express the people's revolution [will not] tolerate living in their shacks while the bases of imperialism exist in Libyan territory." The British left in March and the Americans left in June 1970.
Moving to reduce Italian influence, in October 1970 the government expropriated all Italian-owned assets and expelled the 12,000-strong Italian community from Libya alongside the smaller community of Libyan Jews. The day became a national holiday known as "Vengeance Day". Italy complained that this was in contravention of the 1956 Italo-Libyan Treaty, although no U.N. sanctions were forthcoming. Aiming to reduce NATO power in the Mediterranean, in 1971 Libya requested that Malta cease allowing NATO to use its land for a military base, in turn offering Malta foreign aid. Compromising, Malta's government continued allowing NATO to use the island, but only on the condition that NATO would not use it for launching attacks on Arab territory. Over the coming decade, Gaddafi's government developed stronger political and economic links with Dom Mintoff's Maltese administration, and at Libya's urging Malta did not renew the UK's lease on airbases on the island in 1980. Orchestrating a military build-up, the RCC began purchasing weapons from France and the Soviet Union. The commercial relationship with the latter led to an increasingly strained relationship with the U.S., which was then engaged in the Cold War with the Soviets.
thumb|left|250px|A 1972 anti-Gaddafist British newsreel including an interview with Gaddafi about his support for foreign militants
Gaddafi was especially critical of the U.S. due to its support of Israel, and supported the Palestinians in the Israeli–Palestinian conflict, viewing the 1948 creation of the State of Israel as a Western colonial occupation which was forced upon the Arab world. He believed that Palestinian violence against Israeli and Western targets was the justified response of an oppressed people who were fighting against the colonisation of their homeland. Calling on the Arab states to wage "continuous war" against Israel, in 1970 he initiated a Jihad Fund to finance anti-Israeli militants. In June 1972 Gaddafi created the First Nasserite Volunteers Centre to train anti-Israeli guerrillas.
Like Nasser, Gaddafi favoured the Palestinian leader Yasser Arafat and his group, Fatah, over more militant and Marxist Palestinian groups. As the years progressed, however, Gaddafi's relationship with Arafat became strained, with Gaddafi considering him too moderate and calling for more violent action. Instead he supported militias like the Popular Front for the Liberation of Palestine, the Popular Front for the Liberation of Palestine – General Command, the Democratic Front for the Liberation of Palestine, As-Sa'iqa, the Palestinian Popular Struggle Front, and the Abu Nidal Organization. He funded the Black September Organization, whose members perpetrated the 1972 Munich massacre of Israeli athletes in West Germany, and had the bodies of the militants killed in the attack flown to Libya for a hero's funeral. Gaddafi also welcomed the three surviving attackers in Tripoli following their release in exchange for the hostages of the hijacked Lufthansa Flight 615 a few weeks later and allowed them to go into hiding.
Gaddafi financially supported other militant groups across the world, including the Black Panther Party, the Nation of Islam, the Tupamaros, the 19th of April Movement and the Sandinista National Liberation Front in Nicaragua, the ANC and other liberation movements in the fight against apartheid in South Africa, the Provisional Irish Republican Army, ETA, Action directe, the Red Brigades, and the Red Army Faction in Europe, and the Armenian Secret Army, the Japanese Red Army, the Free Aceh Movement, and the Moro National Liberation Front in the Philippines. Gaddafi was indiscriminate in the causes he funded, sometimes switching from supporting one side in a conflict to the other, as in the Eritrean War of Independence. Throughout the 1970s these groups received financial support from Libya, which came to be seen as a leader in the Third World's struggle against colonialism and neocolonialism. Though many of these groups were labelled "terrorists" by critics of their activities, Gaddafi rejected this characterisation, instead considering them revolutionaries engaged in liberation struggles.
The "Popular Revolution": 1973–77
On 16 April 1973, Gaddafi proclaimed the start of a "Popular Revolution" in a speech at Zuwarah. He initiated this with a 5-point plan, the first point of which dissolved all existing laws, to be replaced by revolutionary enactments. The second point proclaimed that all opponents of the revolution had to be removed, while the third initiated an administrative revolution that Gaddafi proclaimed would remove all traces of bureaucracy and the bourgeoisie. The fourth point announced that the population must form People's Committees and be armed to defend the revolution, while the fifth proclaimed the beginning of a cultural revolution to expunge Libya of "poisonous" foreign influences. He began to lecture on this new phase of the revolution in Libya, Egypt, and France. As a process, it had many similarities with the Cultural Revolution implemented in China.
As part of this Popular Revolution, Gaddafi invited Libya's people to found General People's Committees as conduits for raising political consciousness. Although offering little guidance for how to set up these councils, Gaddafi claimed that they would offer a form of direct political participation that was more democratic than a traditional party-based representative system. He hoped that the councils would mobilise the people behind the RCC, erode the power of the traditional leaders and the bureaucracy, and allow for a new legal system chosen by the people. Many such committees were established in schools and colleges, where they were responsible for vetting staff, courses, and textbooks to determine if they were compatible with the country's revolutionary ideology.
The People's Committees led to a high percentage of public involvement in decision making, within the limits permitted by the RCC, but exacerbated tribal divisions. They also served as a surveillance system, aiding the security services in locating individuals with views critical of the RCC, leading to the arrest of Ba'athists, Marxists, and Islamists. Operating in a pyramid structure, the base form of these Committees were local working groups, who sent elected representatives to the district level, and from there to the national level, divided between the General People's Congress and the General People's Committee. Above these remained Gaddafi and the RCC, who remained responsible for all major decisions. In crossing regional and tribal identities, the committee system aided national integration and centralisation and tightened Gaddafi's control over the state and administrative apparatus.
Third Universal Theory and The Green Book
thumb|250px|Gaddafi's Green Book. He informed an Italian journalist that "the Green Book is the guide to the emancipation of man. The Green Book is the gospel. The new gospel. The gospel of the new era, the era of the masses. In your gospels it's written: 'In the beginning there was the word.' The Green Book is the word. One of its words can destroy the world. Or save it. The Third World only needs my Green Book. My word."
In June 1973, Gaddafi created a political ideology as a basis for the Popular Revolution: the Third International Theory, also known as the Third Universal Theory. This approach regarded both the U.S. and the Soviet Union as imperialist and thus rejected Western capitalism as well as the Eastern bloc's atheistic communism. In this respect it was similar to the Three Worlds Theory developed by China's political leader Mao Zedong. As part of this theory, Gaddafi praised nationalism as a progressive force and advocated the creation of a pan-Arab state which would lead the Islamic and Third Worlds against imperialism. Gaddafi saw Islam as having a key role in this ideology, calling for an Islamic revival that returned to the origins of the Qur'an, rejecting scholarly interpretations and the Hadith; in doing so, he angered many Libyan clerics. During 1973 and 1974, his government deepened the legal reliance on sharia, for instance by introducing flogging as punishment for those convicted of adultery or homosexual activity.
Gaddafi summarised Third International Theory in three short volumes published between 1975 and 1979, collectively known as The Green Book. Volume one was devoted to the issue of democracy, outlining the flaws of representative systems in favour of direct, participatory GPCs. The second dealt with Gaddafi's beliefs regarding socialism, while the third explored social issues regarding the family and the tribe. While the first two volumes advocated radical reform, the third adopted a socially conservative stance, proclaiming that while men and women were equal, they were biologically designed for different roles in life. During the years that followed, Gaddafists adopted quotes from The Green Book, such as "Representation is Fraud", as slogans. Meanwhile, in September 1975, Gaddafi implemented further measures to increase popular mobilisation, introducing objectives to improve the relationship between the Councils and the ASU.
In 1975, Gaddafi's government declared a state monopoly on foreign trade. Its increasingly radical reforms, coupled with the large amount of oil revenue being spent on foreign causes, generated discontent in Libya, particularly among the country's merchant class. In 1974, Libya saw its first civilian attack on Gaddafi's government when a Benghazi army building was bombed. Much of the opposition centred around the RCC member Omar Mehishi, and with fellow RCC member Bashir Saghir al-Hawaadi he began plotting a coup against Gaddafi. In 1975 their plot was exposed and the pair fled into exile, receiving asylum from Sadat's Egypt. In the aftermath only five RCC members remained, and power was further concentrated in Gaddafi's hands. This led to the RCC's official abolition in March 1977.
In September 1975, Gaddafi purged the army, arresting around 200 senior officers, and in October he founded the clandestine Office for the Security of the Revolution. In April 1976, he called upon his supporters in universities to establish "revolutionary student councils" and drive out "reactionary elements". During that year, anti-Gaddafist student demonstrations broke out at the universities of Tripoli and Benghazi, resulting in clashes with both Gaddafist students and police. The RCC responded with mass arrests, and introduced compulsory national service for young people. In January 1977, two dissenting students and a number of army officers were publicly hanged; Amnesty International condemned it as the first time in Gaddafist Libya that dissenters had been executed for purely political crimes. Dissent also arose from conservative clerics and the Muslim Brotherhood, who accused Gaddafi of moving towards Marxism and criticised his abolition of private property as being against the Islamic sunnah; these forces were then persecuted as anti-revolutionary, while all privately-owned Islamic colleges and universities were shut down.
Foreign relations
Following Anwar Sadat's ascension to the Egyptian presidency, Libya's relations with Egypt deteriorated. Over the coming years, the two slipped into a state of cold war. Sadat was perturbed by Gaddafi's unpredictability and his insistence that Egypt required a cultural revolution akin to that being carried out in Libya. In February 1973, Israeli forces shot down Libyan Arab Airlines Flight 114, which had strayed from Egyptian airspace into Israeli-held territory during a sandstorm. Gaddafi was infuriated that Egypt had not done more to prevent the incident, and in retaliation planned to destroy the Queen Elizabeth 2, a British ship chartered by American Jews to sail to Haifa for Israel's 25th anniversary. Gaddafi ordered an Egyptian submarine to target the ship, but Sadat cancelled the order, fearing a military escalation.
thumb|left|Gaddafi in 1976 with a child on his lap
Gaddafi was later infuriated when Egypt and Syria planned the Yom Kippur War against Israel without consulting him, and was angered when Egypt conceded to peace talks rather than continuing the war. Gaddafi became openly hostile to Egypt's leader, calling for Sadat's overthrow. When Sudanese President Gaafar Nimeiry took Sadat's side, Gaddafi also spoke out against him, encouraging the Sudan People's Liberation Army's attempt to overthrow Nimeiry. Relations with Syria also soured over the events of the Lebanese Civil War. Initially, both Libya and Syria had contributed troops to the Arab League's peacekeeping force, but after the Syrian army attacked the Lebanese National Movement, Gaddafi openly accused Syrian President Hafez al-Assad of "national treason"; he was the only Arab leader to criticise Syria's actions. Gaddafi also focused his attention elsewhere in Africa: in late 1972 and early 1973, Libya invaded Chad to annex the uranium-rich Aouzou Strip.
Intent on propagating Islam, in 1973 Gaddafi founded the Islamic Call Society, which had opened 132 centres across Africa within a decade. In 1973 he converted Gabonese President Omar Bongo to Islam, an action which he repeated three years later with Jean-Bédel Bokassa, president of the Central African Republic. Gaddafi was also keen on reducing Israeli influence within Africa, using financial incentives to successfully convince eight African states to break off diplomatic relations with Israel in 1973. A strong relationship was also established between Gaddafi's Libya and Prime Minister Zulfikar Ali Bhutto's Pakistani government, with the two countries exchanging nuclear research and military assistance; this relationship ended after Bhutto was deposed by Muhammad Zia-ul-Haq in 1977.
Gaddafi sought to develop closer links in the Maghreb; in January 1974 Libya and Tunisia announced a political union, the Arab Islamic Republic. Although advocated by Gaddafi and Tunisian President Habib Bourguiba, the move was deeply unpopular in Tunisia and it was soon abandoned. Retaliating, Gaddafi sponsored anti-government militants in Tunisia into the 1980s. Turning his attention to Algeria, in 1975 Libya signed the Hassi Messaoud defence agreement, allegedly to counter "Moroccan expansionism", and also funded the Polisario Front of Western Sahara in its independence struggle against Morocco. Seeking to diversify Libya's economy, Gaddafi's government began purchasing shares in major European corporations like Fiat as well as buying real estate in Malta and Italy, which would become a valuable source of income during the 1980s oil slump.
Great Socialist People's Libyan Arab Jamahiriya
Foundation: 1977
On 2 March 1977 the General People's Congress adopted the "Declaration of the Establishment of the People's Authority" at Gaddafi's behest. Dissolving the Libyan Arab Republic, it was replaced by the Great Socialist People's Libyan Arab Jamahiriya, a "state of the masses" conceptualised by Gaddafi. A new, all-green banner was adopted as the country's flag. Officially, the Jamahiriya was a direct democracy in which the people ruled themselves through the 187 Basic People's Congresses, where all adult Libyans participated and voted on national decisions. These then sent members to the annual General People's Congress, which was broadcast live on television. In principle, the People's Congresses were Libya's highest authority, with major decisions proposed by government officials or by Gaddafi himself requiring the consent of the People's Congresses.
thumb|left|200px|Flag of the Great Socialist People's Libyan Arab Jamahiriya.
Although all political control was officially vested in the People's Congresses, in reality Libya's existing political leadership continued to exercise varying degrees of power and influence. Debate remained limited, and major decisions regarding the economy and defence were avoided or dealt with cursorily; the GPC largely remained "a rubber stamp" for Gaddafi's policies. On rare occasions, the GPC opposed Gaddafi's suggestions, sometimes successfully; notably, when Gaddafi called on primary schools to be abolished, believing that home schooling was healthier for children, the GPC rejected the idea. In other instances, Gaddafi pushed through laws without the GPC's support, such as when he desired to allow women into the armed forces. Gaddafi proclaimed that the People's Congresses provided for Libya's every political need, rendering other political organizations unnecessary; all non-authorised groups, including political parties, professional associations, independent trade unions and women's groups, were banned.
With preceding legal institutions abolished, Gaddafi envisioned the Jamahiriya as following the Qur'an for legal guidance, adopting sharia law; he proclaimed "man-made" laws unnatural and dictatorial, only permitting Allah's law. Within a year he was backtracking, announcing that sharia was inappropriate for the Jamahiriya because it guaranteed the protection of private property, contravening The Green Book's socialism. His emphasis on placing his own work on a par with the Qur'an led conservative clerics to accuse him of shirk, furthering their opposition to his regime. In July 1977, a border war broke out with Egypt, in which the Egyptians defeated Libya despite their technological inferiority. The conflict lasted one week before both sides agreed to sign a peace treaty that was brokered by several Arab states. Both Egypt and Sudan had aligned themselves with the U.S., and this pushed Libya into a strategic—although not political—alignment with the Soviet Union. In recognition of the growing commercial relationship between Libya and the Soviets, Gaddafi was invited to visit Moscow in December 1976; there, he entered talks with Leonid Brezhnev. In August 1977 he then visited Yugoslavia, where he met its leader Josip Broz Tito, with whom he had a much warmer relationship.
Revolutionary Committees and furthering socialism: 1978–80
In December 1978, Gaddafi stepped down as Secretary-General of the GPC, announcing his new focus on revolutionary rather than governmental activities; this was part of his new emphasis on separating the apparatus of the revolution from the government. Although no longer in a formal governmental post, he adopted the title of "Leader of the Revolution" and continued as commander-in-chief of the armed forces. The historian Dirk Vandewalle stated that despite the Jamahiriya's claims to being a direct democracy, Libya remained "an exclusionary political system whose decision-making process" was "restricted to a small cadre of advisors and confidantes" surrounding Gaddafi.
Libya began to turn towards socialism. In March 1978, the government issued guidelines for housing redistribution, aiming to ensure that every adult Libyan owned his own home and that nobody was enslaved to paying rent. Most families were banned from owning more than one house, while former rental properties were expropriated by the state and sold to the tenants at a heavily subsidised price. In September, Gaddafi called for the People's Committees to eliminate the "bureaucracy of the public sector" and the "dictatorship of the private sector"; the People's Committees took control of several hundred companies, converting them into worker cooperatives run by elected representatives.
On 2 March 1979, the GPC announced the separation of government and revolution, the latter being represented by new Revolutionary Committees, who operated in tandem with the People's Committees in schools, universities, unions, the police force and the military. Dominated by revolutionary zealots, the Revolutionary Committees were led by Mohammad Maghgoub and a Central Coordinating Office, and met with Gaddafi annually. Publishing a weekly magazine The Green March (al-Zahf al-Akhdar), in October 1980 they took control of the press. Responsible for perpetuating revolutionary fervour, they performed ideological surveillance, later adopting a significant security role, making arrests and putting people on trial according to the "law of the revolution" (qanun al-thawra). With no legal code or safeguards, the administration of revolutionary justice was largely arbitrary and resulted in widespread abuses and the suppression of civil liberties: the "Green Terror."
thumb|left|Gaddafi with Yasser Arafat in 1977
In 1979, the committees began the redistribution of land in the Jefara plain, continuing through 1981. In May 1980, measures to redistribute and equalise wealth were implemented; anyone with over 1,000 dinars in their bank account saw the excess expropriated. The following year, the GPC announced that the government would take control of all import, export and distribution functions, with state supermarkets replacing privately owned businesses; this led to a decline in the availability of consumer goods and the development of a thriving black market.
The Jamahiriya's radical direction earned the government many enemies. In February 1978, Gaddafi discovered that his head of military intelligence was plotting to kill him, and began to increasingly entrust security to his Qaddadfa tribe. Many who had seen their wealth and property confiscated turned against the administration, and a number of Western-funded opposition groups were founded by exiles. Most prominent was the National Front for the Salvation of Libya (NFSL), founded in 1981 by Mohammed Magariaf, which orchestrated militant attacks against Libya's government, while another, al-Borkan, began killing Libyan diplomats abroad. Following Gaddafi's command to kill these "stray dogs", under Colonel Younis Bilgasim's leadership, the Revolutionary Committees set up overseas branches to suppress counter-revolutionary activity, assassinating various dissidents. Although nearby nations like Syria also employed hit squads, Gaddafi was unusual in publicly bragging about his administration's use of them; in June 1980, he ordered all dissidents to return home or be "liquidated wherever you are."
In 1979, the U.S. placed Libya on its list of "State Sponsors of Terrorism", while at the end of the year a demonstration torched the U.S. embassy in Tripoli in solidarity with the perpetrators of the Iran hostage crisis. The following year, Libyan fighters began intercepting U.S. fighter jets flying over the Mediterranean, signalling the collapse of relations between the two countries. Libyan relations with Lebanon and Shi'ite communities across the world also deteriorated due to the August 1978 disappearance of imam Musa al-Sadr when visiting Libya; the Lebanese accused Gaddafi of having him killed or imprisoned, a charge he denied. Relations with Syria improved, as Gaddafi and Syrian President Hafez al-Assad shared an enmity with Israel and Egypt's Sadat. In 1980, they proposed a political union, with Libya paying off Syria's £1 billion debt to the Soviet Union; although pressures led Assad to pull out, they remained allies. Another key ally was Uganda, and in 1979, Gaddafi sent 2,500 troops into Uganda to defend the regime of President Idi Amin from Tanzanian invaders. The mission failed; 400 Libyans were killed and they were forced to retreat. Gaddafi later came to regret his alliance with Amin, openly criticising him as a "fascist" and a "show-off".
Conflict with the USA and its allies: 1981–86
The early and mid-1980s saw economic trouble for Libya; from 1982 to 1986, the country's annual oil revenues dropped from $21 billion to $5.4 billion. The government focused on irrigation projects, and in 1983 construction started on the Great Man-Made River, a pet project of Gaddafi's; although designed to be finished by the end of the decade, it remained incomplete at the start of the 21st century. Military spending increased, while other administrative budgets were cut back. Libya had long supported the FROLINAT militia in neighbouring Chad, and in December 1980 re-invaded Chad at the request of the FROLINAT-controlled GUNT government to aid in the civil war; in January 1981, Gaddafi suggested a political merger. The Organisation of African Unity (OAU) rejected this and called for a Libyan withdrawal, which came about in November 1981. The civil war resumed, and so Libya sent troops back in, clashing with French forces who supported the southern Chadian forces. Many African nations had tired of Libya's policies of interference in foreign affairs; by 1980, nine African states had cut off diplomatic relations with Libya, while in 1982 the OAU cancelled its scheduled conference in Tripoli in order to prevent Gaddafi from gaining the chairmanship. Proposing political unity with Morocco, in August 1984 Gaddafi and Moroccan monarch Hassan II signed the Oujda Treaty, forming the Arab-African Union; such a union was considered surprising due to the strong political differences and longstanding enmity between the two governments. Relations remained strained, particularly due to Morocco's friendly relations with the U.S. and Israel, and in August 1986 Hassan abolished the union. Domestic threats continued to plague Gaddafi; in May 1984, his Bab al-Azizia home was unsuccessfully attacked by a joint NFSL–Muslim Brotherhood militia, and in the aftermath 5,000 dissidents were arrested.
thumb|13th Anniversary of 1 September Revolution on postage stamp, Libya 1982
In 1981, the new US President Ronald Reagan pursued a hard-line approach to Libya, erroneously considering it a puppet regime of the Soviet Union. In turn, Gaddafi played up his commercial relationship with the Soviets, visiting Moscow again in April 1981 and in 1985, and threatening to join the Warsaw Pact. The Soviets were nevertheless cautious of Gaddafi, seeing him as an unpredictable extremist. In August 1981, during U.S. military exercises in the Gulf of Sirte – an area of sea that Libya claimed as part of its territorial waters – the U.S. shot down two Libyan Su-22 planes that were monitoring the exercises. After closing down Libya's embassy in Washington, D.C., Reagan advised U.S. companies operating in the country to reduce the number of American personnel stationed there. In March 1982, the U.S. implemented an embargo of Libyan oil, and in January 1986 ordered all U.S. companies to cease operating in the country, although several hundred workers remained. Diplomatic relations also broke down with the U.K. after Libyan diplomats were accused in the shooting death of Yvonne Fletcher, a British policewoman stationed outside their London embassy, in April 1984. In spring 1986, the U.S. Navy again began performing exercises in the Gulf of Sirte; the Libyan military retaliated but failed, as the U.S. sank several Libyan ships.
After the U.S. accused Libya of orchestrating the 1986 Berlin discotheque bombing, in which two American soldiers died, Reagan decided to retaliate militarily. The Central Intelligence Agency was critical of the move, believing that Syria was a greater threat and that an attack would strengthen Gaddafi's reputation; however, Libya was recognised as a "soft target". Reagan was supported by the U.K. but opposed by other European allies, who argued that it would contravene international law. In Operation El Dorado Canyon, carried out on 15 April 1986, U.S. military planes launched a series of air-strikes on Libya, bombing military installations in various parts of the country and killing around 100 Libyans, including several civilians. One of the targets had been Gaddafi's home. Gaddafi himself was unharmed, but two of his sons were injured, and he claimed that his four-year-old adopted daughter Hanna was killed, although her existence has since been questioned.
In the immediate aftermath, Gaddafi retreated to the desert to meditate, while there were sporadic clashes between Gaddafists and army officers who wanted to overthrow the government. Although the U.S. was condemned internationally, Reagan received a popularity boost at home. Publicly lambasting U.S. imperialism, Gaddafi's reputation as an anti-imperialist was strengthened both domestically and across the Arab world, and in June 1986, he ordered the names of the month to be changed in Libya.
"Revolution within a Revolution": 1987–98
The late 1980s saw a series of liberalising economic reforms within Libya designed to cope with the decline in oil revenues. In May 1987, Gaddafi announced the start of the "Revolution within a Revolution", which began with reforms to industry and agriculture and saw the re-opening of small business. Restrictions were placed on the activities of the Revolutionary Committees; in March 1988, their role was narrowed by the newly created Ministry for Mass Mobilization and Revolutionary Leadership to restrict their violence and judicial role, while in August 1988 Gaddafi publicly criticised them.
left|thumb|Gaddafi at the twelfth African Union conference in 2009
In March 1988, hundreds of political prisoners were freed, with Gaddafi falsely claiming that there were no further political prisoners in Libya. In June, Libya's government issued the Great Green Charter on Human Rights in the Era of the Masses, in which 27 articles laid out goals, rights and guarantees to improve the situation of human rights in Libya, restricting the use of the death penalty and calling for its eventual abolition. Many of the measures suggested in the charter would be implemented the following year, although others remained inactive. Also in 1989, the government founded the Al-Gaddafi International Prize for Human Rights, to be awarded to figures from the Third World who had struggled against colonialism and imperialism; the first year's winner was South African anti-apartheid activist Nelson Mandela. From 1994 to 1997, the government initiated cleansing committees to root out corruption, particularly in the economic sector.
In the aftermath of the 1986 U.S. attack, the army was purged of perceived disloyal elements, and in 1988, Gaddafi announced the creation of a popular militia to replace the army and police. In 1987, Libya began production of mustard gas at a facility in Rabta, although publicly denying it was stockpiling chemical weapons, and unsuccessfully attempted to develop nuclear weapons. The period also saw a growth in domestic Islamist opposition, formulated into groups like the Muslim Brotherhood and the Libyan Islamic Fighting Group. A number of assassination attempts against Gaddafi were foiled, and in turn, 1989 saw the security forces raid mosques believed to be centres of counter-revolutionary preaching. In October 1993, elements of the increasingly marginalised army initiated a failed coup in Misrata, while in September 1995, Islamists launched an insurgency in Benghazi, and in July 1996 an anti-Gaddafist football riot broke out in Tripoli. The Revolutionary Committees experienced a resurgence to combat these Islamists.
In 1989, Gaddafi was overjoyed by the foundation of the Arab Maghreb Union, uniting Libya in an economic pact with Mauritania, Morocco, Tunisia and Algeria, viewing it as the beginnings of a new Pan-Arab union. Meanwhile, Libya stepped up its support for anti-Western militants such as the Provisional IRA, and in 1988, Pan Am Flight 103 was blown up over Lockerbie in Scotland, killing 243 passengers and 16 crew members, plus 11 people on the ground. British police investigations identified two Libyans – Abdelbaset al-Megrahi and Lamin Khalifah Fhimah – as the chief suspects, and in November 1991 issued a declaration demanding that Libya hand them over. When Gaddafi refused, citing the Montreal Convention, the United Nations (UN) passed Resolution 748 in March 1992, initiating economic sanctions against Libya which had deep repercussions for the country's economy; the country suffered an estimated $900 million financial loss as a result. Further problems arose with the West when, in January 1989, two Libyan warplanes were shot down by the U.S. off the Libyan coast. Many African states opposed the UN sanctions, with Mandela criticising them on a visit to Gaddafi in October 1997, when he praised Libya for its work in fighting apartheid and awarded Gaddafi the Order of Good Hope. The sanctions would only be suspended in 1998, when Libya agreed to allow the extradition of the suspects to the Scottish Court in the Netherlands, in a process overseen by Mandela.
Pan-Africanism, reconciliation and privatization: 1999–2011
thumb|upright|Muammar Gaddafi wearing an insignia showing the image of the African continent
At the 20th century's end, Gaddafi—frustrated by the failure of his Pan-Arab ideals—increasingly rejected Arab nationalism in favour of Pan-Africanism, emphasising Libya's African identity. From 1997 to 2000, Libya initiated cooperative agreements or bilateral aid arrangements with 10 African states, and in 1999 joined the Community of Sahel-Saharan States. In June 1999, Gaddafi visited Mandela in South Africa, and the following month attended the OAU summit in Algiers, calling for greater political and economic integration across the continent and advocating the foundation of a United States of Africa. He became one of the founders of the African Union (AU), initiated in July 2002 to replace the OAU; at the opening ceremonies, he called for African states to reject conditional aid from the developed world, a direct contrast to the message of South African President Thabo Mbeki.
At the third AU summit, held in Libya in July 2005, he called for greater integration, advocating a single AU passport, a common defence system, and a single currency, utilising the slogan: "The United States of Africa is the hope." His proposal for a Union of African States, a project originally conceived by Kwame Nkrumah of Ghana in the 1960s, had been rejected at the Assembly of Heads of States and Government (AHSG) summit in Lusaka in 2001 by African leaders who thought it "unrealistic" and "utopian". In June 2005, Libya joined the Common Market for Eastern and Southern Africa (COMESA), and in August 2008 Gaddafi was proclaimed "King of Kings" by a committee of traditional African leaders. They crowned him in February 2009, in a ceremony held in Addis Ababa, Ethiopia; this coincided with Gaddafi's election as AU chairman for a year.
The era saw Libya's return to the international arena. In 1999, Libya began secret talks with the British government to normalise relations. In 2001, Gaddafi condemned the September 11 attacks on the U.S. by al-Qaeda, expressing sympathy with the victims and calling for Libyan involvement in the War on Terror against militant Islamism. His government continued suppressing domestic Islamism, at the same time as Gaddafi called for the wider application of sharia law. Libya also cemented connections with China and North Korea, being visited by Chinese President Jiang Zemin in April 2002. Influenced by the events of the Iraq War, in December 2003, Libya renounced its possession of weapons of mass destruction, decommissioning its chemical and nuclear weapons programs. Relations with the U.S. improved as a result, while British Prime Minister Tony Blair visited Gaddafi in March 2004. The following month, Gaddafi travelled to the headquarters of the European Union (EU) in Brussels, signifying improved relations between Libya and the EU; the latter ended its sanctions in October.
thumb|250px|left|During his 2008 visit to Russia, Gaddafi pitched his Bedouin tent in the grounds of the Moscow Kremlin. Here he is joined by Russian Prime Minister Vladimir Putin and French singer Mireille Mathieu.
In October 2010, the EU paid Libya €50 million to stop African migrants passing into Europe; Gaddafi encouraged the move, saying that it was necessary to prevent the loss of European cultural identity to a new "Black Europe".
Removed from the U.S. list of state sponsors of terrorism in 2006, Gaddafi nevertheless continued his anti-Western rhetoric, and at the Second Africa-South America Summit in Venezuela in September 2009, joined Venezuelan President Hugo Chávez in calling for an "anti-imperialist" front across Africa and Latin America. Gaddafi proposed the establishment of a South Atlantic Treaty Organization to rival NATO. That month he also addressed the United Nations General Assembly in New York for the first time, using it to condemn "Western aggression". In Spring 2010, Gaddafi proclaimed jihad against Switzerland after Swiss police accused two of his family members of criminal activity in the country, resulting in the breakdown of bilateral relations.
Libya's economy witnessed increasing privatization; although rejecting the socialist policies of nationalized industry advocated in The Green Book, government figures asserted that they were forging "people's socialism" rather than capitalism. Gaddafi welcomed these reforms, calling for wide-scale privatization in a March 2003 speech. In 2003, the oil industry was largely sold to private corporations, and by 2004, there was $40 billion of direct foreign investment in Libya, a sixfold rise over 2003. Sectors of Libya's population reacted against these reforms with public demonstrations, and in March 2006, revolutionary hard-liners took control of the GPC cabinet; although scaling back the pace of the changes, they did not halt them. In 2010, plans were announced that would have seen half the Libyan economy privatized over the following decade.
While there was no accompanying political liberalization, with Gaddafi retaining predominant control, in March 2010 the government devolved further powers to the municipal councils. Rising numbers of reformist technocrats attained positions in the country's governance; best known was Gaddafi's son and heir apparent Saif al-Islam Gaddafi, who was openly critical of Libya's human rights record. He led a group that proposed the drafting of a new constitution, although it was never adopted. Involved in encouraging tourism, Saif founded several privately run media channels in 2008, but after criticising the government they were nationalised in 2009. In October 2010, Gaddafi apologized to African leaders for the historical enslavement of Africans by the Arab slave trade.
Libyan Civil War
Origins and development: February–August 2011
thumb|280px|People protesting against Gaddafi in Dublin, Ireland, March 2011
Following the start of the Arab Spring in 2011, Gaddafi spoke out in favour of Tunisian President Zine El Abidine Ben Ali, then threatened by the Tunisian Revolution. He suggested that Tunisia's people would be satisfied if Ben Ali introduced a Jamahiriyah system there. Fearing domestic protest, Libya's government implemented preventative measures by reducing food prices, purging the army leadership of potential defectors and releasing several Islamist prisoners. They proved ineffective, and on 17 February 2011, major protests broke out against Gaddafi's government. Unlike Tunisia or Egypt, Libya was largely religiously homogenous and had no strong Islamist movement, but there was widespread dissatisfaction with the corruption and entrenched systems of patronage, while unemployment had reached around 30%.
Accusing the rebels of being "drugged" and linked to al-Qaeda, Gaddafi proclaimed that he would die a martyr rather than leave Libya. As he announced that the rebels would be "hunted down street by street, house by house and wardrobe by wardrobe", the army opened fire on protests in Benghazi, killing hundreds. Shocked at the government's response, a number of senior politicians resigned or defected to the protesters' side. The uprising spread quickly through Libya's less economically developed eastern half. By February's end, eastern cities like Benghazi, Misrata, al-Bayda and Tobruk were controlled by rebels, and the Benghazi-based National Transitional Council (NTC) had been founded to represent them.
thumb|left|Pro-Gaddafi protests in Tripoli, May 2011
In the conflict's early months it appeared that Gaddafi's government—with its greater firepower—would be victorious. Both sides disregarded the laws of war, committing human rights abuses, including arbitrary arrests, torture, extrajudicial executions and revenge attacks. On 26 February the United Nations Security Council passed Resolution 1970, suspending Libya from the UN Human Rights Council, implementing sanctions and calling for an International Criminal Court (ICC) investigation into the killing of unarmed civilians. In March, the Security Council declared a no-fly zone to protect the civilian population from aerial bombardment, calling on foreign nations to enforce it; it also specifically prohibited foreign occupation. Ignoring this, Qatar sent hundreds of troops to support the dissidents, and along with France and the United Arab Emirates provided the NTC with weaponry and training. NATO announced that it would enforce the no-fly zone. On 30 April a NATO airstrike killed Gaddafi's sixth son and three of his grandsons in Tripoli.
In June, the ICC issued arrest warrants for Gaddafi, his son Saif al-Islam, and his brother-in-law Abdullah Senussi, head of state security, for charges concerning crimes against humanity. Libyan officials rejected the ICC, claiming that it had "no legitimacy whatsoever" and highlighting that "all of its activities are directed at African leaders". That month, Amnesty International published its report, finding that while Gaddafi's forces were responsible for numerous war crimes, many other allegations of mass human rights abuses lacked credible evidence and were likely fabrications by rebel forces that had then been promoted by Western media. In July, over 30 governments recognised the NTC as the legitimate government of Libya; Gaddafi called on his supporters to "Trample on those recognitions, trample on them under your feet ... They are worthless". In August, the Arab League recognised the NTC to be "the legitimate representative of the Libyan state".
Aided by NATO air cover, the rebel militia pushed westward, defeating loyalist armies and securing control of the centre of the country. Gaining the support of Amazigh (Berber) communities of the Nafusa Mountains, who had long been persecuted as non-Arabic speakers under Gaddafi, the NTC armies surrounded Gaddafi loyalists in several key areas of western Libya. In August, the rebels seized Zliten and Tripoli, ending the last vestiges of Gaddafist power.
Capture and death: September–October 2011
Only a few towns in western Libya—such as Bani Walid, Sebha and Sirte—remained Gaddafist strongholds. Retreating to Sirte after Tripoli's fall, Gaddafi announced his willingness to negotiate for a handover to a transitional government, a suggestion rejected by the NTC. Surrounding himself with bodyguards, he continually moved residences to escape NTC shelling, devoting his days to prayer and reading the Qur'an. On 20 October, Gaddafi broke out of Sirte's District 2 in a joint civilian-military convoy, hoping to take refuge in the Jarref Valley. At around 8.30am, NATO bombers attacked, destroying at least 14 vehicles and killing at least 53. The convoy scattered, and Gaddafi and those closest to him fled to a nearby villa, which was shelled by rebel militia from Misrata. Fleeing to a construction site, Gaddafi and his inner cohort hid inside drainage pipes while his bodyguards battled the rebels; in the conflict, Gaddafi suffered head injuries from a grenade blast while defence minister Abu-Bakr Yunis Jabr was killed.
A Misratan militia took Gaddafi prisoner, beating him and causing serious injuries; the events were filmed on a mobile phone. A video appears to show Gaddafi being poked or stabbed in the rear end "with some kind of stick or knife" or possibly a bayonet. Pulled onto the front of a pick-up truck, he fell off as it drove away. His semi-naked, lifeless body was then placed into an ambulance and taken to Misrata; upon arrival, he was found to be dead. Official NTC accounts claimed that Gaddafi was caught in a cross-fire and died from his bullet wounds. Other eye-witness accounts claimed that rebels had fatally shot Gaddafi in the stomach; a rebel identifying himself as Senad el-Sadik el-Ureybi later claimed responsibility. Gaddafi's son Mutassim, who had been travelling in the convoy, was also captured and found dead several hours later, most probably from an extrajudicial execution. Around 140 Gaddafi loyalists were rounded up from the convoy and tied up and abused; the corpses of 66 of them were found at the nearby Mahari Hotel, victims of extrajudicial execution. Libya's chief forensic pathologist, Othman al-Zintani, carried out the autopsies of Gaddafi, his son and Jabr in the days following their deaths; although the pathologist initially told the press that Gaddafi had died from a gunshot wound to the head, the autopsy report was not made public.
On the afternoon of Gaddafi's death, NTC Prime Minister Mahmoud Jibril publicly revealed the news. Gaddafi's corpse was placed in the freezer of a local market alongside the corpses of Yunis Jabr and Mutassim; the bodies were publicly displayed for four days, with Libyans from all over the country coming to view them. In response to international calls, on 24 October Jibril announced that a commission would investigate Gaddafi's death. On 25 October, the NTC announced that Gaddafi had been buried at an unidentified location in the desert; Al Aan TV showed amateur video footage of the funeral. Seeking vengeance for the killing, Gaddafist sympathisers fatally wounded one of those who had captured Gaddafi, Omran Shaaban, near Bani Walid in September 2012.
Political ideology
As a schoolboy, Gaddafi adopted the ideologies of Arab nationalism and Arab socialism, influenced in particular by Nasserism, the thought of Egyptian revolutionary and president Gamal Abdel Nasser, whom Gaddafi adopted as his hero. During the early 1970s, Gaddafi formulated his own particular approach to Arab nationalism and socialism, known as Third International Theory, which has been described as a combination of "utopian socialism, Arab nationalism, and the Third World revolutionary theory that was in vogue at the time". He laid out the principles of this Theory in the three volumes of The Green Book, in which he sought to "explain the structure of the ideal society."
Gaddafi's Arab nationalist views led him to believe that there needed to be unity across the Arab world, combining the Arab nation under a single nation-state. He deemed Arabism and Islam to be inseparable, referring to them as "one and indivisible", and called on the Arab world's Christian minority to convert to Islam. Gaddafi saw his socialist Jamahiriyah as a model for the Arab, Islamic, and non-aligned worlds to follow.
He described his approach to economics as "Islamic socialism". For him, a socialist society could be defined as one in which men controlled their own needs, either through personal ownership or through a collective. The extent to which Libya became socialist under Gaddafi is disputed. Gaddafi biographer Jonathan Bearman suggested that while Libya did undergo "a profound social revolution", he did not think that "a socialist society" was established in Libya. Conversely, Bruce St. John expressed the view that "if socialism is defined as a redistribution of wealth and resources, a socialist revolution clearly occurred in Libya" under Gaddafi's regime.
Gaddafi was staunchly anti-Marxist, and in 1973 declared that "it is the duty of every Muslim to combat" Marxism because it promotes atheism. In his view, ideologies like Marxism and Zionism were alien to the Islamic world and were a threat to the ummah, or global Islamic community. Nevertheless, Blundy and Lycett noted that Gaddafi's socialism had a "curiously Marxist undertone", with political scientist Sami Hajjar arguing that Gaddafi's model of socialism offered a simplification of Karl Marx and Friedrich Engels' theories. While acknowledging the Marxist influence on Gaddafi's thought, Bearman stated that the Libyan leader rejected Marxism's core tenet, that of class struggle as the main engine of social development. Instead of embracing the Marxist idea that a socialist society emerged from class struggle between the proletariat and bourgeoisie, Gaddafi believed that socialism would be achieved through overturning 'un-natural' capitalism and returning society to its "natural equilibrium". In this he sought to replace a capitalist economy with one based on his own romanticised ideas of a traditional, pre-capitalist past. This owed much to the Islamic belief in God's natural law providing order to the universe.
Gaddafi's ideological worldview was moulded by his environment, namely his Islamic faith, his Bedouin upbringing, and his disgust at the actions of European colonialists in Libya. He was driven by a sense of "divine mission", believing himself a conduit of Allah's will, and thought that he must achieve his goals "no matter what the cost". Raised within the Sunni branch of Islam, Gaddafi called for the implementation of sharia within Libya. He desired unity across the Islamic world, and encouraged the propagation of the faith elsewhere. On a 2010 visit to Italy, he paid a modelling agency to find 200 young Italian women for a lecture he gave urging them to convert. His interpretation of Islam has nevertheless been regarded as idiosyncratic, and he clashed with conservative Libyan clerics. Many criticised his attempts to encourage women to enter traditionally male-only sectors of society, such as the armed forces. Gaddafi was keen to improve women's status, though saw the sexes as "separate but equal" and therefore felt women should usually remain in traditional roles. According to Bearman, in Islamic terms Gaddafi was a modernist rather than a fundamentalist, for he subordinated religion to the political system rather than seeking to Islamicise the state as Islamists sought to do.
A fundamental part of Gaddafi's ideology was anti-Zionism. He believed that the state of Israel should not exist, and that any Arab compromise with the Israeli government was a betrayal of the Arab people. In large part due to its support of Israel, Gaddafi despised the United States, considering the country to be imperialist and lambasting it as "the embodiment of evil." He railed against Jews in many of his speeches, and his anti-Semitism was described as "almost Hitlerian" by Blundy and Lycett. From the late 1990s onward, his views appeared to become more moderate. In 2007, he advocated the Isratin single-state solution to the Israeli–Palestinian conflict, stating that "the [Israel-Palestine] solution is to establish a democratic state for the Jews and the Palestinians... This is the fundamental solution, or else the Jews will be annihilated in the future, because the Palestinians have [strategic] depth." Two years later he argued that a single-state solution would "move beyond old conflicts and look to a unified future based on shared culture and respect.""The One-State Solution", The New York Times, 22 January 2009.
Personal and family life
right|thumb|The Green Book centre in Benghazi
Gaddafi was a very private individual, who described himself as a "simple revolutionary" and "pious Muslim" called upon by Allah to continue Nasser's work. According to Vandewalle, Gaddafi was "an austere and devout Muslim", albeit one whose interpretation of Islam was "deeply personal and idiosyncratic". Reporter Mirella Bianco found that his friends considered him particularly loyal and generous, and asserted that he adored children. She was told by Gaddafi's father that even as a child he had been "always serious, even taciturn", a trait he also exhibited in adulthood. His father said that he was courageous, intelligent, pious, and family oriented.
Other sources describe Gaddafi as "extraordinarily vain" and a womaniser. Blundy and Lycett note Gaddafi had a large wardrobe, and sometimes changed his outfit multiple times a day. He saw himself as a fashion icon, stating "Whatever I wear becomes a fad. I wear a certain shirt and suddenly everyone is wearing it."
In the 1970s and 1980s there were reports of his making sexual advances toward female reporters and members of his entourage. After the civil war, more serious charges came to light. Annick Cojean, a journalist for Le Monde, wrote in her book Gaddafi's Harem that Gaddafi had raped, tortured, performed urolagnia on, and imprisoned hundreds or thousands of women, usually very young. Another source—Libyan psychologist Seham Sergewa—reported that several of his female bodyguards claimed to have been raped by Gaddafi and senior officials. After the civil war, Luis Moreno Ocampo, prosecutor for the International Criminal Court, said there was evidence that Gaddafi told soldiers to rape women who had spoken out against his regime. In 2011 Amnesty International questioned this and other claims used to justify NATO's war in Libya.
thumb|left|Gaddafi's son Mutassim with U.S. Secretary of State Hillary Clinton in 2009
According to a Brazilian plastic surgeon, Gaddafi had been his patient in 1995. The U.S. Central Intelligence Agency believed that Gaddafi had suffered from clinical depression, while the Israeli authorities claimed that he had been afflicted by epilepsy and hemorrhoids. He was a fan of Beethoven, and said his favourite novels were Uncle Tom's Cabin, Roots, and Colin Wilson's The Outsider. He was also a football enthusiast.
Following his ascension to power, Gaddafi moved into the Bab al-Azizia barracks, a six-mile long fortified compound located two miles from the center of Tripoli. His home and office at Azizia was a bunker designed by West German engineers, while the rest of his family lived in a large two-story building. Within the compound were also two tennis courts, a soccer field, several gardens, camels, and a Bedouin tent in which he entertained guests. In the 1980s, his lifestyle was considered modest in comparison to those of many other Arab leaders. Gaddafi allegedly worked for years with Swiss banks to launder international banking transactions. In November 2011, The Sunday Times identified property worth £1 billion in the UK that Gaddafi allegedly owned. Gaddafi had an Airbus A340 private jet, which he bought from Prince Al-Waleed bin Talal of Saudi Arabia for $120 million in 2003. Operated by Tripoli-based Afriqiyah Airways and decorated externally in their colours, it had various luxuries including a jacuzzi.
Gaddafi married his first wife, Fatiha al-Nuri, in 1969. She was the daughter of General Khalid, a senior figure in King Idris' administration, and was from a middle-class background. Although they had one son, Muhammad Gaddafi (b. 1970), their relationship was strained, and they divorced in 1970. Gaddafi's second wife was Safia Farkash el-Brasai, a former nurse from the Obeidat tribe born in Bayda. They met in 1969, following his ascension to power, when he was hospitalized with appendicitis; he claimed that it was love at first sight. The couple remained married until his death. Together they had seven biological children: Saif al-Islam Gaddafi (b. 1972), Al-Saadi Gaddafi (b. 1973), Mutassim Gaddafi (1974–2011), Hannibal Muammar Gaddafi (b. 1975), Ayesha Gaddafi (b. 1976), Saif al-Arab Gaddafi (1982–2011), and Khamis Gaddafi (1983–2011). He also adopted two children, Hanna Gaddafi and Milad Gaddafi. The Gaddafi family tree, BBC News, 21 February 2011
Public image
thumb|right|Poster featuring Gaddafi's image in Green Square, Tripoli, in 2007
According to Vandewalle, Gaddafi "dominated [Libya's] political life" during his period in power. A cult of personality devoted to Gaddafi existed in Libya. His face appeared on a wide variety of items, including postage stamps, watches, and school satchels. Quotations from The Green Book appeared in a wide variety of places, from street walls to airports and pens, and were put to pop music for public release. Gaddafi claimed that he disliked this personality cult, but that he tolerated it because Libya's people adored him.
Biographers Blundy and Lycett believed that he was "a populist at heart." Throughout Libya, crowds of supporters would turn up to public events at which he appeared; although the government described these as "spontaneous demonstrations", there are recorded instances of groups being coerced or paid to attend. He was typically late to public events, and would sometimes not show up at all. Although Bianco thought he had a "gift for oratory", he was considered a poor orator by biographers Blundy and Lycett. Biographer Daniel Kawczynski noted that Gaddafi was famed for his "lengthy, wandering" speeches, which typically involved criticising Israel and the U.S.
Gaddafi was notably confrontational in his approach to foreign powers, and generally shunned Western ambassadors and diplomats, believing them to be spies. Gaddafi was preoccupied with his own security, regularly changing where he slept and sometimes grounding all other planes in Libya when he was flying. He made very particular requests when traveling to foreign nations. During his trips to Rome, Paris, Madrid, Moscow, and New York City, he resided in a bulletproof tent, following his Bedouin traditions.
Starting in the 1980s, he travelled with his all-female Amazonian Guard, who were allegedly sworn to a life of celibacy. However, according to psychologist Seham Sergewa, after the civil war several of the guards told her they had been pressured into joining and raped by Gaddafi and senior officials. He hired several Ukrainian nurses to care for his and his family's health, and traveled everywhere with his trusted Ukrainian nurse Halyna Kolotnytska. "WikiLeaks cables: Muammar Gaddafi and the 'voluptuous blonde'". The Guardian. 7 December 2010 Kolotnytska's daughter denied the suggestion that the relationship was anything but professional.
Reception and legacy
thumb|right|Anti-Gaddafist placard in Ireland
According to Jonathan Bearman, Gaddafi "evoked the extremes of passion: supreme adoration from his following, bitter contempt from his opponents". Supporters praised Gaddafi's administration for the creation of an almost classless society through domestic reform. They stress the regime's achievements in combating homelessness and ensuring access to food and safe drinking water. Highlighting that under Gaddafi, all Libyans enjoyed free education to a university level, they point to the dramatic rise in literacy rates after the 1969 revolution. Supporters have also applauded achievements in medical care, praising the universal free healthcare provided under the Gaddafist administration, with diseases like cholera and typhoid being contained and life expectancy raised. Biographers Blundy and Lycett believed that under the first decade of Gaddafi's leadership, life for most Libyans "undoubtedly changed for the better" as material conditions and wealth drastically improved, while Libyan studies specialist Lillian Craig Harris remarked that in the early years of his administration, Libya's "national wealth and international influence soared, and its national standard of living has risen dramatically." Such high standards declined during the 1980s, as a result of economic stagnation. Gaddafi claimed that his Jamahiriya was a "concrete utopia", and that he had been appointed by "popular assent", with some Islamic supporters believing that he exhibited barakah. His opposition to Western governments earned him the respect of many in the Euro-American far right. In 1971, the Soviet Union awarded him the Order of Lenin, although his mistrust of communism prevented him from attending the ceremony in Moscow.
Critics labelled Gaddafi "despotic, cruel, arrogant, vain and stupid", and he became a bogeyman for Western governments, who presented him as the "vicious dictator of an oppressed people". During the Reagan administration, the United States regarded him as "Public Enemy No. 1" and Reagan famously dubbed him the "mad dog of the Middle East". According to critics, the Libyan people lived in a climate of fear under Gaddafi's administration, due to his government's pervasive surveillance of civilians. Gaddafi's Libya was typically described by Western commentators as "a police state". Opponents were critical of Libya's human rights abuses; according to Human Rights Watch (HRW) and others, hundreds of arrested political opponents often failed to receive a fair trial, and were sometimes subjected to torture or extrajudicial execution, most notably in the Abu Salim prison, where HRW estimated that 1,270 prisoners were massacred on 29 June 1996. Dissidents abroad, labelled "stray dogs", were also publicly threatened with death and sometimes killed by government hit squads.
His government's treatment of non-Arab Libyans also came in for criticism from human rights activists, with native Berbers, Italians, Jews, refugees, and foreign workers all facing persecution in Gaddafist Libya. According to journalist Annick Cojean and psychologist Seham Sergewa, Gaddafi and senior officials raped and imprisoned hundreds or thousands of young women and reportedly raped several of his female bodyguards. Gaddafi's government was frequently criticized for not being democratic, with Freedom House consistently giving Libya under Gaddafi the "Not Free" ranking for civil liberties and political rights.
thumb|left|The pre-Gaddafi flag of Libya, readopted by the Libyan rebel forces during the civil war and by the new government after Gaddafi's defeat
International reactions to Gaddafi's death were divided. U.S. President Barack Obama stated that it meant that "the shadow of tyranny over Libya has been lifted," while UK Prime Minister David Cameron stated that he was "proud" of his country's role in overthrowing "this brutal dictator". Contrastingly, former Cuban President Fidel Castro commented that in defying the rebels, Gaddafi would "enter history as one of the great figures of the Arab nations", while Venezuelan President Hugo Chávez described him as "a great fighter, a revolutionary and a martyr." Nelson Mandela expressed sadness at the news, praising Gaddafi for his anti-apartheid stance, remarking that he backed the African National Congress during "the darkest moments of our struggle". Gaddafi was mourned by many as a hero across Sub-Saharan Africa; for instance, a vigil was held by Muslims in Sierra Leone. The Daily Times of Nigeria stated that while undeniably a dictator, Gaddafi was the most benevolent in a region that only knew dictatorship, and that he was "a great man that looked out for his people and made them the envy of all of Africa." The Nigerian newspaper Leadership reported that while many Libyans and Africans would mourn Gaddafi, this would be ignored by Western media and that as such it would take 50 years before historians decided whether he was "martyr or villain."
Following his defeat in the civil war, Gaddafi's system of governance was dismantled and replaced under the interim government of the NTC, which legalised trade unions and freedom of the press. In July 2012, elections were held to form a new General National Congress (GNC), which officially took over governance from the NTC in August. The GNC proceeded to elect Mohammed Magariaf as president of the chamber, and then voted Mustafa A.G. Abushagur as Prime Minister; when Abushagur failed to gain congressional approval, the GNC instead elected Ali Zeidan to the position. In January 2013, the GNC officially renamed the Jamahiriyah as the "State of Libya".
See also
Gaddafi loyalism after the 2011 Libyan Civil War
History of Libya under Muammar Gaddafi
List of longest-ruling non-royal national leaders since 1900
HIV trial in Libya
External links
U.S. Policy Towards Qaddafi from the Dean Peter Krogh Foreign Affairs Digital Archives
Category:1942 births
Category:2011 deaths
Category:20th-century politicians
Category:African pan-Africanists
Category:African revolutionaries
Category:Arabized Berbers
Category:Articles containing video clips
Category:Assassinated heads of government
Category:Assassinated heads of state
Category:Assassinated Libyan people
Category:Chadian–Libyan conflict
Category:Deaths by firearm in Libya
Category:Gaddafi family
Category:Grand Commanders of the Order of the Federal Republic
Category:Grand Crosses of the Order of Good Hope
Category:Heads of state of Libya
Category:International opponents of apartheid in South Africa
Category:Islamic socialism
Category:Leaders who took power by coup
Category:Libyan Arab nationalists
Category:Libyan Arab Socialist Union politicians
Category:Libyan colonels
Category:Libyan political theorists
Category:Libyan rebels
Category:Libyan revolutionaries
Category:Libyan Sunni Muslims
Category:Members of the General People's Committee of Libya
Category:Nasserists
Category:Pan Am Flight 103
Category:People from Sirte
Category:People indicted for crimes against humanity
Category:People of the Libyan Civil War (2011)
Category:Political philosophers
Category:Political writers
Category:Libyan Quranist Muslims
Category:Prime Ministers of Libya
Category:Recipients of the Order of Prince Yaroslav the Wise, 1st class
Category:Recipients of the Order of the Yugoslav Star
Category:Socialist rulers
Category:Rape in Libya
Dissolution of the Soviet Union
thumb|355px|Post-Soviet states
The Soviet Union was dissolved on December 26, 1991, as a result of the declaration no. 142-Н of the Soviet of the Republics of the Supreme Soviet of the Soviet Union. The declaration acknowledged the independence of the former Soviet republics and created the Commonwealth of Independent States (CIS), although five of the signatories ratified it much later or not at all. On the previous day, Soviet President Mikhail Gorbachev, the eighth and final leader of the Soviet Union, resigned, declared his office extinct, and handed over its powers – including control of the Soviet nuclear missile launching codes – to Russian President Boris Yeltsin. That evening at 7:32, the Soviet flag was lowered from the Kremlin for the last time and replaced with the pre-revolutionary Russian flag.
Previously, from August to December, all the individual republics, including Russia itself, had seceded from the union. The week before the union's formal dissolution, 11 republics signed the Alma-Ata Protocol formally establishing the CIS and declaring that the Soviet Union had ceased to exist. The Revolutions of 1989 and the dissolution of the USSR also signaled the end of the Cold War, which left the United States as the world's only superpower.
Several of the former Soviet republics have retained close links with the Russian Federation and formed multilateral organizations such as the Commonwealth of Independent States, Eurasian Economic Community, the Union State, the Eurasian Customs Union, and the Eurasian Economic Union to enhance economic and security cooperation. Some have joined NATO and the European Union.
1985
Moscow: Mikhail Gorbachev, new General Secretary
thumb|Mikhail Gorbachev in 1987
Mikhail Gorbachev was elected General Secretary by the Politburo on March 11, 1985, three hours after predecessor Konstantin Chernenko's death at age 73. Gorbachev, aged 54, was the youngest member of the Politburo. His initial goal as general secretary was to revive the Soviet economy, and he realized that doing so would require reforming underlying political and social structures. The reforms began with personnel changes of senior Brezhnev-era officials who would impede political and economic change. On April 23, 1985, Gorbachev brought two protégés, Yegor Ligachev and Nikolai Ryzhkov, into the Politburo as full members. He kept the "power" ministries happy by promoting KGB Head Viktor Chebrikov from candidate to full member and appointing Minister of Defence Marshal Sergei Sokolov as a Politburo candidate.
Gorbachev's liberalization, however, fostered nationalist movements and ethnic disputes within the Soviet Union. It also led indirectly to the revolutions of 1989, in which Soviet-imposed communist regimes of the Warsaw Pact were peacefully toppled (Romania excepted), which in turn increased pressure on Gorbachev to introduce greater democracy and autonomy for the Soviet Union's constituent republics. Under Gorbachev's leadership, the Communist Party of the Soviet Union in 1989 introduced limited competitive elections to a new central legislature, the Congress of People's Deputies (although the ban on other political parties was not lifted until 1990).
In May 1985, Gorbachev delivered a speech in Leningrad advocating reforms and an anti-alcohol campaign to tackle widespread alcoholism. Prices of vodka, wine, and beer were raised to make these drinks more expensive and to discourage consumers, and rationing was introduced. Unlike most forms of rationing, which are intended to conserve scarce goods, this was done to restrict sales with the overt goal of curtailing drunkenness.Hough, Jerry F. (1997), pp. 124–125 Gorbachev's plan also included billboards promoting sobriety, increased penalties for public drunkenness, and the censoring of drinking scenes from old movies. Some noted that this mirrored Tsar Nicholas II's program during World War I, which was also intended to eradicate drunkenness in order to bolster the war effort, although that program also sought to redirect grain usage to only the most essential purposes, which did not appear to be a goal of Gorbachev's campaign. Gorbachev soon faced the same adverse economic reaction to his prohibition as the last Tsar had. The disincentivization of alcohol consumption was a serious blow to the state budget, according to Alexander Yakovlev, who noted that annual collections of alcohol taxes decreased by 100 billion rubles. Alcohol production migrated to the black market, including moonshining, as some made "bathtub vodka" with homegrown potatoes. Poorer, less educated Russians resorted to drinking unhealthy substitutes such as nail-polish remover, rubbing alcohol or men's cologne, placing an additional burden on Russia's healthcare sector through the resulting poisoning cases. The purpose of these reforms, however, was to prop up the existing centrally planned economy, unlike later reforms, which tended toward market socialism.
On July 1, 1985, Gorbachev promoted Eduard Shevardnadze, First Secretary of the Georgian Communist Party, to full member of the Politburo, and the following day appointed him minister of foreign affairs, replacing longtime Foreign Minister Andrei Gromyko. The latter, disparaged as "Mr Nyet" in the West, had served for 28 years as Minister of Foreign Affairs. Gromyko was relegated to the largely ceremonial position of Chairman of the Presidium of the Supreme Soviet (officially Soviet Head of State), as he was considered an "old thinker." Also on July 1, Gorbachev took the opportunity to dispose of his main rival by removing Grigory Romanov from the Politburo, and brought Boris Yeltsin and Lev Zaikov into the CPSU Central Committee Secretariat.
In the fall of 1985, Gorbachev continued to bring younger and more energetic men into government. On September 27, Nikolai Ryzhkov replaced 79-year-old Nikolai Tikhonov as Chairman of the Council of Ministers, effectively the Soviet prime minister, and on October 14, Nikolai Talyzin replaced Nikolai Baibakov as chairman of the State Planning Committee (GOSPLAN). At the next Central Committee meeting on October 15, Tikhonov retired from the Politburo and Talyzin became a candidate. Finally, on December 23, 1985, Gorbachev appointed Yeltsin First Secretary of the Moscow Communist Party replacing Viktor Grishin.
1986
Sakharov
Gorbachev continued to press for greater liberalization. On December 23, 1986, the most prominent Soviet dissident, Andrei Sakharov, returned to Moscow shortly after receiving a personal telephone call from Gorbachev telling him that after almost seven years his internal exile for defying the authorities was over.
Baltic republics
The Baltic republics, forcibly reincorporated into the Soviet Union in 1944, pressed for independence, beginning with Estonia in November 1988 when the Estonian legislature passed laws resisting the control of the central government.
Latvia's Helsinki-86
thumb|left|upright|Figure of Liberty on the Freedom Monument in Riga, focus of 1986 Latvian demonstrations.
The CTAG (, Human Rights Defense Group) Helsinki-86 was founded in July 1986 in the Latvian port town of Liepāja by three workers: Linards Grantiņš, Raimonds Bitenieks, and Mārtiņš Bariss. Its name refers to the human-rights statements of the Helsinki Accords. Helsinki-86 was the first openly anti-Communist organization in the U.S.S.R., and the first openly organized opposition to the Soviet regime, setting an example for other ethnic minorities' pro-independence movements.
On December 26, 1986, in the early morning hours after a rock concert, 300 working-class Latvian youths gathered in Riga's Cathedral Square and marched down Lenin Avenue toward the Freedom Monument, shouting, "Soviet Russia out! Free Latvia!" Security forces confronted the marchers, and several police vehicles were overturned.
Central Asia
Kazakhstan: Jeltoqsan riots
The "Jeltoqsan" (Kazakh for "December") of 1986 were riots in Alma-Ata, Kazakhstan, sparked by Gorbachev's dismissal of Dinmukhamed Konayev, the First Secretary of the Communist Party of Kazakhstan and an ethnic Kazakh, who was replaced with Gennady Kolbin, an outsider from the Russian SFSR. Demonstrations started in the morning of December 17, 1986, with 200 to 300 students in front of the Central Committee building on Brezhnev Square protesting Konayev's dismissal and replacement by a Russian. Protesters swelled to 1,000 then to 5,000 as other students joined the crowd. The CPK Central Committee ordered troops from the Ministry of Internal Affairs, druzhiniki (volunteers), cadets, policemen, and the KGB to cordon the square and videotape the participants. The situation escalated around 5 p.m., as troops were ordered to disperse the protesters. Clashes between the security forces and the demonstrators continued throughout the night in Almaty.
On the next day, December 18, protests turned into civil unrest as clashes between troops, volunteers, militia units, and Kazakh students escalated into a wide-scale confrontation. The clashes were only brought under control on the third day. The Almaty events were followed by smaller protests and demonstrations in Shymkent, Pavlodar, Karaganda, and Taldykorgan. Reports from Kazakh SSR authorities estimated that the riots drew 3,000 people."Soviet Riots Worse Than First Reported", San Francisco Chronicle. San Francisco, Calif.: February 19, 1987. p. 22 Other estimates are of at least 30,000 to 40,000 protesters, with 5,000 arrested and jailed, and an unknown number of casualties. Jeltoqsan leaders say over 60,000 Kazakhs participated in the protests."Kazakhstan: Jeltoqsan Protest Marked 20 Years Later", RadioFreeEurope/RadioLiberty"Jeltoqsan Movement blames leader of Kazakh Communists", EurasiaNet According to the Kazakh SSR government, there were two deaths during the riots, including a volunteer police worker and a student. Both of them had died due to blows to the head. About 100 others were detained and several others were sentenced to terms in labor camps.San Francisco Chronicle. Retrieved March 27, 2010, from ProQuest Newsstand. Sources cited by the Library of Congress claimed that at least 200 people died or were summarily executed soon thereafter; some accounts estimate casualties at more than 1,000. The writer Mukhtar Shakhanov claimed that a KGB officer testified that 168 protesters were killed, but that figure remains unconfirmed.
1987
Moscow: One-party democracy
At the January 28–30, 1987, Central Committee plenum, Gorbachev suggested a new policy of "Demokratizatsiya" throughout Soviet society. He proposed that future Communist Party elections should offer a choice between multiple candidates, elected by secret ballot. However, the CPSU delegates at the Plenum watered down Gorbachev's proposal, and democratic choice within the Communist Party was never significantly implemented.
Gorbachev also radically expanded the scope of Glasnost, stating that no subject was off-limits for open discussion in the media. Even so, the cautious Soviet intelligentsia took almost a year to begin pushing the boundaries to see if he meant what he said. For the first time, the Communist Party leader had appealed over the heads of Central Committee members for the people's support in exchange for expansion of liberties. The tactic proved successful: Within two years political reform could no longer be sidetracked by Party "conservatives." An unintended consequence was that having saved reform, Gorbachev's move ultimately killed the very system it was designed to save.Leon Aron, Boris Yeltsin A Revolutionary Life. Harper Collins, 2000. page 187
On February 7, 1987, dozens of political prisoners were freed in the first group release since Khrushchev's "thaw" in the mid-1950s. On May 6, 1987, Pamyat, a Russian nationalist group, held an unsanctioned demonstration in Moscow. The authorities did not break up the demonstration and even kept traffic out of the demonstrators' way while they marched to an impromptu meeting with Boris Yeltsin, head of the Moscow Communist Party and at the time one of Gorbachev's closest allies.
On July 25, 1987, 300 Crimean Tatars staged a noisy demonstration near the Kremlin Wall for several hours, calling for the right to return to their homeland, from which they were deported in 1944; police and soldiers merely looked on.
On September 10, 1987, after a lecture from hardliner Yegor Ligachev at the Politburo for allowing these two unsanctioned demonstrations in Moscow, Boris Yeltsin wrote a letter of resignation to Gorbachev, who had been holidaying on the Black Sea.O'Clery, Conor. Moscow December 25, 1991: The Last Day of the Soviet Union. Transworld Ireland (2011). ISBN 978-1-84827-112-8, p. 71. Gorbachev was stunned – no one had ever voluntarily resigned from the Politburo. At the October 27, 1987, plenary meeting of the Central Committee, Yeltsin, frustrated that Gorbachev had not addressed any of the issues outlined in his resignation letter, criticized the slow pace of reform, servility to the general secretary, and opposition from Ligachev that had led to his (Yeltsin's) resignation.Conor O'Clery, Moscow December 25, 1991: The Last Day of the Soviet Union. Transworld Ireland (2011). ISBN 978-1-84827-112-8, p. 74 No one had ever addressed the Party leader so brazenly in front of the Central Committee since Leon Trotsky in the 1920s. In his reply, Gorbachev accused Yeltsin of "political immaturity" and "absolute irresponsibility." No one backed Yeltsin.
Nevertheless, news of Yeltsin's insubordination and "secret speech" spread, and soon samizdat versions began to circulate. This marked the beginning of Yeltsin's rebranding as a rebel and rise in popularity as an anti-establishment figure. The following four years of political struggle between Yeltsin and Gorbachev played a large role in the dissolution of the USSR. On November 11, 1987, Yeltsin was fired from the post of First Secretary of the Moscow Communist Party.
Baltic republics: Molotov–Ribbentrop protests
On August 23, 1987, the 48th anniversary of the secret protocols of the 1939 Molotov–Ribbentrop Pact between Adolf Hitler and Joseph Stalin that ultimately turned the then-independent Baltic states over to the Soviet Union, thousands of demonstrators marked the occasion in the three Baltic capitals to sing independence songs and listen to speeches commemorating Stalin's victims. The gatherings were sharply denounced in the official press and closely watched by the police, but were not interrupted.
Latvia leads
On June 14, 1987, about 5,000 people gathered again at the Freedom Monument in Riga, and laid flowers to commemorate the anniversary of Stalin's mass deportation of Latvians in 1941. This was the first large demonstration in the Baltic republics to commemorate the anniversary of an event contrary to official Soviet history. The authorities did not crack down on demonstrators, which encouraged more and larger demonstrations throughout the Baltic States. The next major anniversary after the August 23 Molotov–Ribbentrop Pact demonstration was November 18, the anniversary of Latvia's declaration of independence in 1918. On November 18, 1987, hundreds of police and civilian militiamen cordoned off the central square to prevent any demonstration at the Freedom Monument, but thousands lined the streets of Riga in silent protest regardless.
Estonia’s first protests
In spring 1987, a protest movement arose against new phosphate mines in Estonia. Signatures were collected in Tartu, and students assembled in the university's main hall to express lack of confidence in the government. At a demonstration on May 1, 1987, young people showed up with banners and slogans despite an official ban. On August 15, 1987, former political prisoners formed the MRP-AEG group (Estonians for the Public Disclosure of the Molotov-Ribbentrop Pact), which was headed by Tiit Madisson. In September 1987, the Edasi newspaper published a proposal by Edgar Savisaar, Siim Kallas, Tiit Made, and Mikk Titma calling for Estonia's transition to autonomy. Initially geared toward economic independence, then toward a certain amount of political autonomy, the project, Isemajandav Eesti ("A Self-Managing Estonia"), became known by its Estonian acronym, IME, which means "miracle". On October 21, a demonstration dedicated to those who gave their lives in the 1918–1920 Estonian War of Independence took place in Võru, which culminated in a conflict with the militia. For the first time in years, the blue, black, and white national tricolor was publicly displayed.
The Caucasus
Armenia: Environmental concerns and Nagorno-Karabakh
thumb|right|Environmental concerns over the Metsamor nuclear power plant drove initial demonstrations in Yerevan.
On October 17, 1987, about 3,000 Armenians demonstrated in Yerevan complaining about the condition of Lake Sevan, the Nairit chemicals plant, and the Metsamor Nuclear Power Plant, and air pollution in Yerevan. Police tried to prevent the protest but took no action to stop it once the march was underway. The demonstration was led by Armenian writers such as Silva Kaputikian, Zori Balayan, and Maro Margarian and leaders from the National Survival organization. The march originated at the Opera Plaza after speakers, mainly intellectuals, addressed the crowd.
The following day 1,000 Armenians participated in another demonstration calling for Armenian national rights in Karabakh. The demonstrators demanded the annexation of Nakhchivan and Nagorno-Karabakh to Armenia, and carried placards to that effect. The police tried to physically prevent the march and, after a few incidents, dispersed the demonstrators. Violence would break out in Nagorno-Karabakh the following year.
1988
Moscow loses control
In 1988 Gorbachev started to lose control of two small but troublesome regions of the Soviet Union, as the Baltic republics were captured by their popular fronts, and the Caucasus descended into violence and civil war.
On July 1, 1988, the fourth and last day of a bruising 19th Party Conference, Gorbachev won the backing of the tired delegates for his last-minute proposal to create a new supreme legislative body called the Congress of People's Deputies. Frustrated by the old guard's resistance, Gorbachev embarked on a set of constitutional changes to try to separate party and state, and thereby isolate his conservative Party opponents. Detailed proposals for the new Congress of People's Deputies were published on October 2, 1988, and to enable the creation of the new legislature the Supreme Soviet, during its November 29 – December 1, 1988, session, implemented amendments to the 1977 Soviet Constitution, enacted a law on electoral reform, and set the date of the election for March 26, 1989.
On November 29, 1988, the Soviet Union ceased to jam all foreign radio stations, allowing Soviet citizens for the first time to have unrestricted access to news sources beyond Communist Party control.http://www.radiojamming.puslapiai.it/article_en.htm
Baltic Republics
In 1986 and 1987 Latvia had been in the vanguard of the Baltic states in pressing for reform. In 1988 Estonia took over the lead role with the foundation of the Soviet Union's first popular front and starting to influence state policy.
Estonian Popular Front
The Estonian Popular Front was founded in April 1988. On June 16, 1988 Gorbachev replaced Karl Vaino, the "old guard" leader of the Communist Party of Estonia, with the comparatively liberal Vaino Väljas, the Soviet ambassador to Nicaragua. In late June 1988, Väljas bowed to pressure from the Estonian Popular Front and legalized the flying of the old blue-black-white flag of Estonia, and agreed to a new state language law that made Estonian the official language of the Republic.
On October 2, the Popular Front formally launched its political platform at a two-day congress. Väljas attended, gambling that the front could help Estonia become a model of economic and political revival, while moderating separatist and other radical tendencies. On November 16, 1988, the Supreme Soviet of the Estonian SSR adopted a declaration of national sovereignty under which Estonian laws would take precedence over those of the Soviet Union.Website of Estonian Embassy in London (National Holidays) Estonia's parliament also laid claim to the republic's natural resources including land, inland waters, forests, mineral deposits, and to the means of industrial production, agriculture, construction, state banks, transportation, and municipal services within the territory of Estonia's borders.Walker, Edward (2003). Dissolution. Rowman & Littlefield. p. 63. ISBN 0-7425-2453-1.
Latvian Popular Front
The Latvian Popular Front was founded in June 1988. On October 4, Gorbachev replaced Boris Pugo, the "old guard" leader of the Communist Party of Latvia, with the more liberal Jānis Vagris. In October 1988 Vagris bowed to pressure from the Latvian Popular Front and legalized flying the former carmine red-and-white flag of independent Latvia, and on October 6 he passed a law making Latvian the country's official language.
Lithuania’s Sąjūdis
The Popular Front of Lithuania, called Sąjūdis ("Movement"), was founded in May 1988. On October 19, 1988, Gorbachev replaced Ringaudas Songaila, the "old guard" leader of the Communist Party of Lithuania, with the relatively liberal Algirdas Mykolas Brazauskas. In October 1988 Brazauskas bowed to pressure from Sąjūdis and legalized the flying of the historic yellow-green-red flag of independent Lithuania, and in November 1988 passed a law making Lithuanian the country's official language.
Rebellion in the Caucasus
Azerbaijan: Violence
On February 20, 1988, after a week of growing demonstrations in Stepanakert, capital of the Nagorno-Karabakh Autonomous Oblast (the Armenian majority area within Azerbaijan Soviet Socialist Republic), the Regional Soviet voted to secede and join with the Soviet Socialist Republic of Armenia.Pages 10–12 Black Garden de Waal, Thomas. 2003. NYU. ISBN 0-8147-1945-7 This local vote in a small, remote part of the Soviet Union made headlines around the world; it was an unprecedented defiance of republican and national authorities. On February 22, 1988, in what became known as the "Askeran clash", two Azerbaijanis were killed by Karabakh police. These deaths, announced on state radio, led to the Sumgait Pogrom. Between February 26 and March 1, the city of Sumgait (Azerbaijan) saw violent anti-Armenian rioting during which 32 people were killed. The authorities totally lost control and occupied the city with paratroopers and tanks; nearly all of the 14,000 Armenian residents of Sumgait fled.Black Garden de Waal, Thomas. 2003. NYU. ISBN 0-8147-1945-7, p. 40
Gorbachev refused to make any changes to the status of Nagorno Karabakh, which remained part of Azerbaijan. He instead sacked the Communist Party Leaders in both Republics – on May 21, 1988, Kamran Baghirov was replaced by Abdulrahman Vezirov as First Secretary of the Azerbaijan Communist Party. From July 23 to September 1988, a group of Azerbaijani intellectuals began working for a new organization called the Popular Front of Azerbaijan, loosely based on the Estonian Popular Front.Page 82 Black Garden de Waal, Thomas. 2003. NYU. ISBN 0-8147-1945-7 On September 17, when gun battles broke out between the Armenians and Azerbaijanis near Stepanakert, two soldiers were killed and more than two dozen injured. This led to almost tit-for-tat ethnic polarization in Nagorno-Karabakh's two main towns: The Azerbaijani minority was expelled from Stepanakert, and the Armenian minority was expelled from Shusha.Page 69 Black Garden de Waal, Thomas. 2003. NYU. ISBN 0-8147-1945-7 On November 17, 1988, in response to the exodus of tens of thousands of Azerbaijanis from Armenia, a series of mass demonstrations began in Baku's Lenin Square, lasting 18 days and attracting half a million demonstrators. On December 5, 1988, the Soviet militia finally moved in, cleared the square by force, and imposed a curfew that lasted ten months.Page 83 Black Garden de Waal, Thomas. 2003. NYU. ISBN 0-8147-1945-7
Armenia: Uprising
The rebellion of fellow Armenians in Nagorno-Karabakh had an immediate effect in Armenia itself. Daily demonstrations, which began in the Armenian capital Yerevan on February 18, initially attracted few people, but each day the Nagorno-Karabakh issue became increasingly prominent and numbers swelled. On February 20, a 30,000-strong crowd demonstrated in Theater Square; by February 22 there were 100,000, the next day 300,000, and a transport strike was declared; by February 25, there were close to 1 million demonstrators – about a quarter of Armenia's population.Black Garden de Waal, Thomas. 2003. NYU. ISBN 0-8147-1945-7, p. 23 This was the first of the large, peaceful public demonstrations that would become a feature of communism's overthrow in Prague, Berlin, and, ultimately, Moscow. Leading Armenian intellectuals and nationalists, including future first President of independent Armenia Levon Ter-Petrossian, formed the eleven-member Karabakh Committee to lead and organize the new movement.
Gorbachev again refused to make any changes to the status of Nagorno Karabakh, which remained part of Azerbaijan. Instead he sacked both Republics' Communist Party Leaders: On May 21, 1988, Karen Demirchian was replaced by Suren Harutyunyan as First Secretary of the Communist Party of Armenia. However, Harutyunyan quickly decided to run before the nationalist wind and on May 28, allowed Armenians to unfurl the red-blue-gold First Armenian Republic flag for the first time in almost 70 years.Pages 60–61 Black Garden de Waal, Thomas. 2003. NYU. ISBN 0-8147-1945-7 On June 15, 1988, the Armenian Supreme Soviet adopted a resolution formally approving the idea of Nagorno Karabakh joining Armenia. Armenia, formerly one of the most loyal Republics, had suddenly turned into the leading rebel republic. On July 5, 1988, when a contingent of troops was sent in to remove demonstrators by force from Yerevan's Zvartnots International Airport, shots were fired and one student protester was killed. In September, further large demonstrations in Yerevan led to the deployment of armored vehicles. In the autumn of 1988 almost all the 200,000 Azerbaijani minority in Armenia was expelled by Armenian Nationalists, with over 100 killed in the processPages 62–63 Black Garden de Waal, Thomas. 2003. NYU. ISBN 0-8147-1945-7 – this, after the Sumgait pogrom earlier that year carried out by Azerbaijanis against ethnic Armenians and subsequent expulsion of all Armenians from Azerbaijan. On November 25, 1988, a military commandant took control of Yerevan as the Soviet government moved to prevent further ethnic violence.
On December 7, 1988, the Spitak earthquake struck, killing an estimated 25,000 to 50,000 people. When Gorbachev rushed back from a visit to the United States, he was so angered to be confronted by protesters calling for Nagorno-Karabakh to be made part of the Armenian Republic – during a natural disaster – that on December 11, 1988, he ordered the entire Karabakh Committee to be arrested.
Georgia: First demonstrations
In November 1988 in Tbilisi, capital of Soviet Georgia, many demonstrators camped out in front of the republic's legislature calling for Georgia's independence and in support of Estonia's declaration of sovereignty.
The Western republics
Democratic Movement of Moldova
Beginning in February 1988, the Democratic Movement of Moldova (formerly Moldavia) organized public meetings, demonstrations, and song festivals, which gradually grew in size and intensity. In the streets, the center of public manifestations was the Stephen the Great Monument in Chişinău, and the adjacent park harboring Aleea Clasicilor (the "Alley of the Classics of Literature"). On January 15, 1988, in a tribute to Mihai Eminescu at his bust on the Aleea Clasicilor, Anatol Şalaru submitted a proposal to continue the meetings. In the public discourse, the movement called for national awakening, freedom of speech, revival of Moldavian traditions, and for attainment of official status for the Romanian language and return to the Latin alphabet. The transition from "movement" (an informal association) to "front" (a formal association) was seen as a natural "upgrade" once a movement gained momentum with the public, and the Soviet authorities no longer dared to crack down on it.
Demonstrations in Lviv, Ukraine
On April 26, 1988, about 500 people participated in a march organized by the Ukrainian Cultural Club on Kiev's Khreschatyk Street to mark the second anniversary of the Chernobyl nuclear disaster, carrying placards with slogans like "Openness and Democracy to the End." Between May and June 1988, Ukrainian Catholics in western Ukraine celebrated the Millennium of Christianity in Kievan Rus' in secret by holding services in the forests of Buniv, Kalush, Hoshiv, and Zarvanytsia. On June 5, 1988, as the official celebrations of the Millennium were held in Moscow, the Ukrainian Cultural Club hosted its own observances in Kiev at the monument to St. Volodymyr the Great, the grand prince of Kievan Rus'.
On June 16, 1988, 6,000 to 8,000 people gathered in Lviv to hear speakers declare no confidence in the local list of delegates to the 19th Communist Party conference, to begin on June 29. On June 21, a rally in Lviv attracted 50,000 people who had heard about a revised delegate list. Authorities attempted to disperse the rally in front of Druzhba Stadium. On July 7, 10,000 to 20,000 people witnessed the launch of the Democratic Front to Promote Perestroika. On July 17, a group of 10,000 gathered in the village Zarvanytsia for Millennium services celebrated by Ukrainian Greek-Catholic Bishop Pavlo Vasylyk. The militia tried to disperse attendees, but it turned out to be the largest gathering of Ukrainian Catholics since Stalin outlawed the Church in 1946. On August 4, which came to be known as "Bloody Thursday," local authorities violently suppressed a demonstration organized by the Democratic Front to Promote Perestroika. Forty-one people were detained, fined, or sentenced to 15 days of administrative arrest. On September 1, local authorities forcibly dispersed 5,000 students at a public meeting held without official permission at Ivan Franko State University.
On November 13, 1988, approximately 10,000 people attended an officially sanctioned meeting organized by the cultural heritage organization Spadschyna, the Kyiv University student club Hromada, and the environmental groups Zelenyi Svit ("Green World") and Noosfera, to focus on ecological issues. From November 14–18, 15 Ukrainian activists were among the 100 human-, national- and religious-rights advocates invited to discuss human rights with Soviet officials and a visiting delegation of the U.S. Commission on Security and Cooperation in Europe (also known as the Helsinki Commission). On December 10, hundreds gathered in Kiev to observe International Human Rights Day at a rally organized by the Democratic Union. The unauthorized gathering resulted in the detention of local activists.
Kurapaty, Belarus
The Partyja BPF (Belarusian Popular Front) was established in 1988 as a political party and cultural movement for democracy and independence, in the manner of the Baltic republics' popular fronts. The discovery of mass graves in Kurapaty outside Minsk by historian Zianon Pazniak, the Belarusian Popular Front's first leader, gave additional momentum to the pro-democracy and pro-independence movement in Belarus; it was claimed that the NKVD had performed secret killings in Kurapaty. Initially the Front had significant visibility because its numerous public actions almost always ended in clashes with the police and the KGB.
1989
Moscow: Limited democratization
Spring 1989 saw the people of the Soviet Union exercising a democratic choice, albeit limited, for the first time since 1917, when they elected the new Congress of People's Deputies. Just as important was the uncensored live TV coverage of the legislature's deliberations, where people witnessed the previously feared Communist leadership being questioned and held accountable. This example fueled a limited experiment with democracy in Poland, which quickly led to the toppling of the Communist government in Warsaw that summer – which in turn sparked uprisings that overthrew communism in the other five Warsaw Pact countries before the end of 1989, the year the Berlin Wall fell. These events showed that the people of Eastern Europe and the Soviet Union did not support Gorbachev's drive to modernize Communism; rather, they preferred to abandon it altogether.
This was also the year that CNN became the first non-Soviet broadcaster allowed to beam its TV news programs to Moscow. Officially, CNN was available only to foreign guests in the Savoy Hotel, but Muscovites quickly learned how to pick up signals on their home televisions. That had a major impact on how Russians saw events in their country, and made censorship almost impossible.Pages 188–189. Conor O'Clery. Moscow December 25, 1991: The Last Day of the Soviet Union. Transworld Ireland (2011)
Congress of People’s Deputies of the Soviet Union
thumb|left|Andrei Sakharov, formerly exiled to Gorky, was elected to the Congress of People's Deputies in March 1989.
The month-long nomination period for candidates for the Congress of People's Deputies of the USSR lasted until January 24, 1989. For the next month, selection among the 7,531 district nominees took place at meetings organized by constituency-level electoral commissions. On March 7, a final list of 5,074 candidates was published; about 85% were Party members.
In the two weeks prior to the 1,500 district polls, elections to fill 750 reserved seats of public organizations, contested by 880 candidates, were held. Of these seats, 100 were allocated to the CPSU, 100 to the All-Union Central Council of Trade Unions, 75 to the Communist Youth Union (Komsomol), 75 to the Committee of Soviet Women, 75 to the War and Labour Veterans' Organization, and 325 to other organizations such as the Academy of Sciences. The selection process was done in April.
In the March 26 general elections, voter participation was an impressive 89.8%, and 1,958 (including 1,225 district seats) of the 2,250 CPD seats were filled. In district races, run-off elections were held in 76 constituencies on April 2 and 9, and fresh elections were organized on April 20 and from May 14 to 23 in the 199 remaining constituencies where the required absolute majority was not attained. While most CPSU-endorsed candidates were elected, more than 300 lost to independent candidates such as Yeltsin, physicist Andrei Sakharov and lawyer Anatoly Sobchak.
In the first session of the new Congress of People's Deputies, from May 25 to June 9, hardliners retained control but reformers used the legislature as a platform for debate and criticism – which was broadcast live and uncensored. This transfixed the population; nothing like this freewheeling debate had ever been witnessed in the U.S.S.R. On May 29, Yeltsin managed to secure a seat on the Supreme Soviet, and in the summer he formed the first opposition, the Inter-Regional Deputies Group, composed of Russian nationalists and liberals. As members of the Soviet Union's final legislature, those elected in 1989 played a vital part in the reforms and the eventual breakup of the Soviet Union over the next two years.
On May 30, 1989, Gorbachev proposed that nationwide local elections, scheduled for November 1989, be postponed until early 1990 because there were still no laws governing the conduct of such elections. This was seen by some as a concession to local Party officials, who feared they would be swept from power in a wave of anti-establishment sentiment.
On October 25, 1989, the Supreme Soviet voted to eliminate special seats for the Communist Party and other official organizations in national and local elections, responding to sharp popular criticism that such reserved slots were undemocratic. After vigorous debate, the 542-member Supreme Soviet passed the measure 254-85 (with 36 abstentions). The decision required a constitutional amendment, ratified by the full congress, which met December 12–25. It also passed measures that would allow direct elections for presidents of each of the 15 constituent republics. Gorbachev strongly opposed such a move during debate but was defeated.
The vote expanded the power of republics in local elections, enabling them to decide for themselves how to organize voting. Latvia, Lithuania, and Estonia had already proposed laws for direct presidential elections. Local elections in all the republics had already been scheduled to take place between December and March 1990.
Loss of satellite states
thumb|The Eastern Bloc
The six Warsaw Pact countries of Eastern Europe, while nominally independent, were widely recognized in the international community as the Soviet satellite states. All had been occupied by the Soviet Red Army in 1945, had Soviet-style socialist states imposed upon them, and had very restricted freedom of action in either domestic or international affairs. Any moves towards real independence were suppressed by military force – in the Hungarian Revolution of 1956 and the Prague Spring in 1968. Gorbachev abandoned the oppressive and expensive Brezhnev Doctrine, which mandated intervention in the Warsaw Pact states, in favor of non-intervention in the internal affairs of allies – jokingly termed the Sinatra Doctrine in a reference to the Frank Sinatra song "My Way".
Baltic "Chain of Freedom"
thumb|left|"Baltic Way" 1989 demonstration in Šiauliai, Lithuania. The coffins are decorated with national flags of the three Baltic Republics and are placed symbolically beneath Soviet and Nazi flags.
The Baltic Way or Baltic Chain (also Chain of Freedom) was a peaceful political demonstration on August 23, 1989. An estimated 2 million people joined hands to form a human chain extending across Estonia, Latvia and Lithuania, which had been forcibly reincorporated into the Soviet Union in 1944. The colossal demonstration marked the 50th anniversary of the Molotov–Ribbentrop Pact that divided Eastern Europe into spheres of influence and led to the occupation of the Baltic states in 1940.
In December 1989, the Congress of People's Deputies accepted—and Gorbachev signed—the report by the Yakovlev Commission condemning the secret protocols of the Molotov–Ribbentrop pact.Senn (1995), p. 78
Lithuania’s Communist Party splits
In the March 1989 elections to the Congress of People's Deputies, 36 of the 42 deputies from Lithuania were candidates from the independent national movement Sąjūdis. This was the greatest victory for any national organization within the USSR and was a devastating revelation to the Lithuanian Communist Party of its growing unpopularity.
On December 7, 1989, the Communist Party of Lithuania under the leadership of Algirdas Brazauskas, split from the Communist Party of the Soviet Union and abandoned its claim to have a constitutional "leading role" in politics. A smaller loyalist faction of the Communist Party, headed by hardliner Mykolas Burokevičius, was established and remained affiliated with the CPSU. However, Lithuania’s governing Communist Party was formally independent from Moscow's control – a first for Soviet Republics and a political earthquake that prompted Gorbachev to arrange a visit to Lithuania the following month in a futile attempt to bring the local party back under control.
Caucasus
Azerbaijan’s blockade
On July 16, 1989, the Popular Front of Azerbaijan held its first congress and elected Abulfaz Elchibey, who would become President, as its Chairman.Page 86 Black Garden de Waal, Thomas. 2003. NYU. ISBN 0-8147-1945-7 On August 19, 600,000 protesters jammed Baku’s Lenin Square (now Azadliq Square) to demand the release of political prisoners. In the second half of 1989, weapons were handed out in Nagorno-Karabakh. When Karabakhis got hold of small arms to replace hunting rifles and crossbows, casualties began to mount; bridges were blown up, roads were blockaded, and hostages were taken.Black Garden de Waal, Thomas. 2003. NYU. ISBN 0-8147-1945-7, p. 71
In a new and effective tactic, the Popular Front launched a rail blockade of Armenia, which caused petrol and food shortages because 85 percent of Armenia's freight came from Azerbaijan.Black Garden de Waal, Thomas. 2003. NYU. ISBN 0-8147-1945-7, p. 87 Under pressure from the Popular Front the Communist authorities in Azerbaijan started making concessions. On September 25, they passed a sovereignty law that gave precedence to Azerbaijani law, and on October 4, the Popular Front was permitted to register as a legal organization as long as it lifted the blockade. Transport communications between Azerbaijan and Armenia never fully recovered. Tensions continued to escalate and on December 29, Popular Front activists seized local party offices in Jalilabad, wounding dozens.
Armenia’s Karabakh Committee released
On May 31, 1989, the 11 members of the Karabakh Committee, who had been imprisoned without trial in Moscow’s Matrosskaya Tishina prison, were released, and returned home to a hero's welcome. Soon after his release, Levon Ter-Petrossian, an academic, was elected chairman of the anti-communist opposition Pan-Armenian National Movement, and later stated that it was in 1989 that he first began considering full independence.Page 72 Black Garden de Waal, Thomas. 2003. NYU. ISBN 0-8147-1945-7
Massacre in Tbilisi, Georgia
thumb|left|Photos of victims (mostly young women) of an April 1989 massacre in Tbilisi, Georgia.
On April 7, 1989, Soviet troops and armored personnel carriers were sent to Tbilisi after more than 100,000 people protested in front of Communist Party headquarters with banners calling for Georgia to secede from the Soviet Union and for Abkhazia to be fully integrated into Georgia. On April 9, 1989, troops attacked the demonstrators; some 20 people were killed and more than 200 wounded. This event radicalized Georgian politics, prompting many to conclude that independence was preferable to continued Soviet rule. On April 14, Gorbachev removed Jumber Patiashvili as First Secretary of the Georgian Communist Party and replaced him with former Georgian KGB chief Givi Gumbaridze.
On July 16, 1989, in Abkhazia's capital Sukhumi, a protest against the opening of a Georgian university branch in the town led to violence that quickly degenerated into a large-scale inter-ethnic confrontation in which 18 died and hundreds were injured before Soviet troops restored order. This riot marked the start of the Georgian-Abkhaz conflict.
The Western republics
Popular Front of Moldova
In the March 26, 1989, elections to the Congress of People's Deputies, 15 of the 46 Moldavian deputies sent to Moscow were supporters of the Nationalist/Democratic movement. The Popular Front of Moldova founding congress took place two months later, on May 20, 1989. During its second congress (June 30 – July 1, 1989), Ion Hadârcă was elected its president.
A series of demonstrations that became known as the Grand National Assembly was the Front's first major achievement. Such mass demonstrations, including one attended by 300,000 people on August 27,Esther B. Fein, "Baltic Nationalists Voice Defiance But Say They Won't Be Provoked", in the New York Times, August 28, 1989 convinced the Moldavian Supreme Soviet on August 31 to adopt the language law making Moldovan the official language, and replacing the Cyrillic alphabet with Latin characters.King, p.140
Ukraine’s Rukh
In Ukraine, Lviv and Kiev celebrated Ukrainian Independence Day on January 22, 1989. Thousands gathered in Lviv for an unauthorized moleben (religious service) in front of St. George's Cathedral. In Kiev, 60 activists met in a Kiev apartment to commemorate the proclamation of the Ukrainian People's Republic in 1918. On February 11–12, 1989, the Ukrainian Language Society held its founding congress. On February 15, 1989, the formation of the Initiative Committee for the Renewal of the Ukrainian Autocephalous Orthodox Church was announced. The program and statutes of the movement were proposed by the Writers Association of Ukraine and were published in the journal Literaturna Ukraina on February 16, 1989. The organization heralded Ukrainian dissidents such as Vyacheslav Chornovil.
In late February, large public rallies took place in Kiev to protest the election laws, on the eve of the March 26 elections to the USSR Congress of People's Deputies, and to call for the resignation of the first secretary of the Communist Party of Ukraine, Volodymyr Shcherbytsky, lampooned as "the mastodon of stagnation." The demonstrations coincided with a visit to Ukraine by Soviet President Gorbachev. On February 26, 1989, between 20,000 and 30,000 people participated in an unsanctioned ecumenical memorial service in Lviv, marking the anniversary of the death of 19th Century Ukrainian artist and nationalist Taras Shevchenko.
On March 4, 1989, the Memorial Society, committed to honoring the victims of Stalinism and cleansing society of Soviet practices, was founded in Kiev. A public rally was held the next day. On March 12, a pre-election meeting organized in Lviv by the Ukrainian Helsinki Union and the Marian Society Myloserdia (Compassion) was violently dispersed, and nearly 300 people were detained. On March 26, elections were held to the union Congress of People's Deputies; by-elections were held on April 9, May 14, and May 21. Among the 225 Ukrainian deputies, most were conservatives, though a handful of progressives were elected as well.
From April 20–23, 1989, pre-election meetings were held in Lviv for four consecutive days, drawing crowds of up to 25,000. The action included a one-hour warning strike at eight local factories and institutions. It was the first labor strike in Lviv since 1944. On May 3, a pre-election rally attracted 30,000 in Lviv. On May 7, the Memorial Society organized a mass meeting at Bykivnia, site of a mass grave of Ukrainian and Polish victims of Stalinist terror. After a march from Kiev to the site, a memorial service was staged.
From mid-May to September 1989, Ukrainian Greek-Catholic hunger strikers staged protests on Moscow's Arbat to call attention to the plight of their Church. They were especially active during the July session of the World Council of Churches held in Moscow. The protest ended with the arrests of the group on September 18. On May 27, 1989, the founding conference of the Lviv regional Memorial Society was held. On June 18, 1989, an estimated 100,000 faithful participated in public religious services in Ivano-Frankivsk in western Ukraine, responding to Cardinal Myroslav Lubachivsky's call for an international day of prayer.
On August 19, 1989, the Russian Orthodox Parish of Saints Peter and Paul announced it would be switching to the Ukrainian Autocephalous Orthodox Church. On September 2, 1989, tens of thousands across Ukraine protested a draft election law that reserved special seats for the Communist Party and for other official organizations: 50,000 in Lviv, 40,000 in Kiev, 10,000 in Zhytomyr, 5,000 each in Dniprodzerzhynsk and Chervonohrad, and 2,000 in Kharkiv. From September 8–10, 1989, writer Ivan Drach was elected to head Rukh, the People's Movement of Ukraine, at its founding congress in Kiev. On September 17, between 150,000 and 200,000 people marched in Lviv, demanding the legalization of the Ukrainian Greek Catholic Church. On September 21, 1989, exhumation of a mass grave began in Demianiv Laz, a nature preserve south of Ivano-Frankivsk. On September 28, First Secretary of the Communist Party of Ukraine Volodymyr Shcherbytsky, a holdover from the Brezhnev era, was replaced by Vladimir Ivashko.
On October 1, 1989, a peaceful demonstration of 10,000 to 15,000 people was violently dispersed by the militia in front of Lviv's Druzhba Stadium, where a concert celebrating the Soviet "reunification" of Ukrainian lands was being held. On October 10, Ivano-Frankivsk was the site of a pre-election protest attended by 30,000 people. On October 15, several thousand people gathered in Chervonohrad, Chernivtsi, Rivne, and Zhytomyr; 500 in Dnipropetrovsk; and 30,000 in Lviv to protest the election law. On October 20, faithful and clergy of the Ukrainian Autocephalous Orthodox Church participated in a synod in Lviv, the first since its forced liquidation in the 1930s.
On October 24, the union Supreme Soviet passed a law eliminating special seats for Communist Party and other official organizations' representatives. On October 26, twenty factories in Lviv held strikes and meetings to protest the police brutality of October 1 and the authorities' unwillingness to prosecute those responsible. From October 26–28, the Zelenyi Svit (Friends of the Earth – Ukraine) environmental association held its founding congress, and on October 27 the Ukrainian Supreme Soviet passed a law eliminating the special status of party and other official organizations.
On October 28, 1989, the Ukrainian Supreme Soviet decreed that effective January 1, 1990, Ukrainian would be the official language of Ukraine, while Russian would be used for communication between ethnic groups. On the same day, the Congregation of the Church of the Transfiguration in Lviv left the Russian Orthodox Church and proclaimed itself the Ukrainian Greek Catholic Church. The following day, thousands attended a memorial service at Demianiv Laz, and a temporary marker was placed to indicate that a monument to the "victims of the repressions of 1939–1941" soon would be erected.
In mid-November, the Shevchenko Ukrainian Language Society was officially registered. On November 19, 1989, a public gathering in Kiev attracted thousands of mourners, friends and family to the reburial in Ukraine of three inmates of the infamous Gulag Camp No. 36 in Perm in the Ural Mountains: human-rights activists Vasyl Stus, Oleksiy Tykhy, and Yuri Lytvyn. Their remains were reinterred in Baikove Cemetery. On November 26, 1989, a day of prayer and fasting was proclaimed by Cardinal Myroslav Lubachivsky; thousands of faithful in western Ukraine participated in religious services on the eve of a meeting between Pope John Paul II and Soviet President Gorbachev. On November 28, 1989, the Ukrainian SSR's Council for Religious Affairs issued a decree allowing Ukrainian Catholic congregations to register as legal organizations. The decree was proclaimed on December 1, coinciding with a meeting at the Vatican between the pope and the Soviet president.
On December 10, 1989, the first officially sanctioned observance of International Human Rights Day was held in Lviv. On December 17, an estimated 30,000 attended a public meeting organized in Kiev by Rukh in memory of Nobel laureate Andrei Sakharov, who died on December 14. On December 26, the Supreme Soviet of Ukrainian SSR adopted a law designating Christmas, Easter, and the Feast of the Holy Trinity official holidays.
In May 1989, a Soviet dissident, Mustafa Dzhemilev, was elected to lead the newly founded Crimean Tatar National Movement. He also led the campaign for return of Crimean Tatars to their homeland in Crimea after 45 years of exile.
Belarus: Kurapaty
thumb|Meeting in Kurapaty, Byelorussia, 1989
On January 24, 1989, the Soviet authorities in Byelorussia agreed to the demand of the democratic opposition to build a monument to thousands of people shot by Stalin-era police in the Kuropaty Forest near Minsk in the 1930s.
On September 30, 1989, thousands of Byelorussians, denouncing local leaders, marched through Minsk to demand additional cleanup of the 1986 Chernobyl disaster site in Ukraine. Up to 15,000 protesters wearing armbands bearing radioactivity symbols and carrying the banned red-and-white Byelorussian national flag filed through torrential rain in defiance of a ban by local authorities. Later, they gathered in the city center near the government's headquarters, where speakers demanded resignation of Yefrem Sokolov, the republic's Communist Party leader, and called for the evacuation of half a million people from the contaminated zones.
Central Asian republics
Fergana, Uzbekistan
Thousands of Soviet troops were sent to the Fergana Valley, southeast of the Uzbek capital Tashkent, to re-establish order after clashes in which local Uzbeks hunted down members of the Meskhetian minority in several days of rioting between June 4–11, 1989; about 100 people were killed. On June 23, 1989, Gorbachev removed Rafiq Nishonov as First Secretary of the Communist Party of the Uzbek SSR and replaced him with Karimov, who went on to lead Uzbekistan as a Soviet Republic and subsequently as an independent state.
upright|thumb|Nursultan Nazarbayev became leader of the Kazakh SSR in 1989 and later led Kazakhstan to independence.
Zhanaozen, Kazakhstan
In Kazakhstan on June 19, 1989, young men carrying guns, firebombs, iron bars and stones rioted in Zhanaozen, causing a number of deaths. The youths tried to seize a police station and a water-supply station. They brought public transportation to a halt and shut down various shops and industries. By June 25, the rioting had spread to five other towns near the Caspian Sea. A mob of about 150 people armed with sticks, stones and metal rods attacked the police station in Mangishlak, about 90 miles from Zhanaozen, before they were dispersed by government troops flown in by helicopters. Mobs of young people also rampaged through Yeraliev, Shepke, Fort-Shevchenko and Kulsary, where they poured flammable liquid on trains housing temporary workers and set them on fire.
On June 22, 1989, Gorbachev removed Gennady Kolbin (the ethnic Russian whose appointment caused riots in December 1986) as First Secretary of the Communist Party of Kazakhstan for his poor handling of the June events, and replaced him with Nazarbayev, an ethnic Kazakh who went on to lead Kazakhstan as a Soviet Republic and subsequently as an independent state for decades.
1990
Moscow loses six republics
On February 7, 1990, the Central Committee of the CPSU accepted Gorbachev’s recommendation that the party give up its monopoly on political power. In 1990, all fifteen constituent republics of the USSR held their first competitive elections, with reformers and ethnic nationalists winning many seats. The CPSU lost the elections in six republics:
In Lithuania, to Sąjūdis, on February 24 (run-off elections on March 4, 7, 8, and 10).
In Moldova, to the Popular Front of Moldova, on February 25.
In Estonia, to the Estonian Popular Front, on March 18.
In Latvia, to the Latvian Popular Front, on March 18 (run-off elections on March 25, April 1, and April 29).
In Armenia, to the Pan-Armenian National Movement, on May 20 (run-off elections on June 3 and July 15).
In Georgia, to Round Table-Free Georgia, on October 28 (run-off election on November 11).
The constituent republics began to declare their national sovereignty and began a "war of laws" with the Moscow central government; they rejected union-wide legislation that conflicted with local laws, asserted control over their local economy and refused to pay taxes. This conflict caused economic dislocation as supply lines were disrupted, and caused the Soviet economy to decline further.Acton, Edward, (1995) Russia, The Tsarist and Soviet Legacy, Longmann Group Ltd (1995) ISBN 0-582-08922-0
Rivalry between USSR and RSFSR
On March 4, 1990, the Russian Soviet Federative Socialist Republic held relatively free elections for the Congress of People's Deputies of Russia. Boris Yeltsin was elected, representing Sverdlovsk, garnering 72 percent of the vote.Leon Aron, Boris Yeltsin A Revolutionary Life. Harper Collins, 2000. page 739–740. On May 29, 1990, Yeltsin was elected chair of the Presidium of the Supreme Soviet of the RSFSR, despite the fact that Gorbachev asked Russian deputies not to vote for him.
Yeltsin was supported by democratic and conservative members of the Supreme Soviet, who sought power in the developing political situation. A new power struggle emerged between the RSFSR and the Soviet Union. On June 12, 1990, the Congress of People's Deputies of the RSFSR adopted a declaration of sovereignty. On July 12, 1990, Yeltsin resigned from the Communist Party in a dramatic speech at the 28th Congress.
thumb|100px|Lithuania’s Vytautas Landsbergis
Baltic republics
Lithuania
Gorbachev’s visit to the Lithuanian capital Vilnius on January 11–13, 1990, provoked a pro-independence rally attended by an estimated 250,000 people.
On March 11, the newly elected parliament of the Lithuanian SSR elected Vytautas Landsbergis, the leader of Sąjūdis, as its chairman and proclaimed the Act of the Re-Establishment of the State of Lithuania, making Lithuania the first Soviet Republic to break away from the USSR. Moscow reacted with an economic blockade and kept troops in Lithuania, ostensibly "to secure the rights of ethnic Russians".Nina Bandelj, From Communists to Foreign Capitalists: The Social Foundations of Foreign Direct Investment in Postsocialist Europe, Princeton University Press, 2008, ISBN 978-0-691-12912-9, p. 41
thumb|100px|Estonia’s Edgar Savisaar
Estonia
On March 25, 1990, the Estonian Communist Party voted to split from the CPSU after a six-month transition.
On March 30, 1990, the Estonian Supreme Council declared the Soviet occupation of Estonia since World War II to be illegal and began reestablishing Estonia as an independent state.
On April 3, 1990, Edgar Savisaar of the Popular Front of Estonia was elected Chairman of the Council of Ministers (the equivalent of being Estonia's Prime Minister).
thumb|100px|Latvia’s Ivars Godmanis
Latvia
Latvia declared the restoration of independence on May 4, 1990, with the declaration stipulating a transitional period to complete independence. The Declaration stated that although Latvia had de facto lost its independence in World War II, the country had de jure remained a sovereign country because the annexation had been unconstitutional and against the will of the Latvian people.
The declaration also stated that Latvia would base its relationship with the Soviet Union on the basis of the Latvian–Soviet Peace Treaty of 1920, in which the Soviet Union recognized Latvia's independence as inviolable "for all future time". May 4 is a national holiday in Latvia.
On May 7, 1990, Ivars Godmanis of the Latvian Popular Front was elected Chairman of the Council of Ministers (the equivalent of being Latvia's Prime Minister).
Caucasus
Azerbaijan’s Black January
During the first week of January 1990, in the Azerbaijani exclave of Nakhchivan, the Popular Front led crowds in the storming and destruction of the frontier fences and watchtowers along the border with Iran, and thousands of Soviet Azerbaijanis crossed the border to meet their ethnic cousins in Iranian Azerbaijan. It was the first time the Soviet Union had lost control of an external border.
180px|thumb|left|Azerbaijani stamp with photos of Black January
Ethnic tensions had escalated between the Armenians and Azerbaijanis in spring and summer 1988.Black Garden de Waal, Thomas. 2003. NYU. ISBN 0-8147-1945-7, p. 90 On January 9, 1990, after the Armenian parliament voted to include Nagorno-Karabakh within its budget, renewed fighting broke out, hostages were taken, and four Soviet soldiers were killed.Page 89 Black Garden de Waal, Thomas. 2003. NYU. ISBN 0-8147-1945-7 On January 11, Popular Front radicals stormed party buildings and effectively overthrew the communist powers in the southern town of Lenkoran. Gorbachev resolved to regain control of Azerbaijan; the events that ensued are known as "Black January." Late on January 19, 1990, after blowing up the central television station and cutting the phone and radio lines, 26,000 Soviet troops entered the Azerbaijani capital Baku, smashing barricades, attacking protesters, and firing into crowds. On that night and during subsequent confrontations (which lasted until February), more than 130 people died – the majority of whom were civilians. More than 700 civilians were wounded, hundreds were detained, but only a few were actually tried for alleged criminal offenses.
Civil liberties suffered. Soviet Defence Minister Dmitry Yazov stated that the use of force in Baku was intended to prevent the de facto takeover of the Azerbaijani government by the non-communist opposition, to prevent their victory in upcoming free elections (scheduled for March 1990), to destroy them as a political force, and to ensure that the Communist government remained in power. This marked the first time the Soviet Army took one of its own cities by force.Page 93 Black Garden de Waal, Thomas. 2003. NYU. ISBN 0-8147-1945-7
The army had gained control of Baku, but by January 20 it essentially lost Azerbaijan. Nearly the entire population of Baku turned out for the mass funerals of "martyrs" buried in the Alley of Martyrs. Thousands of Communist Party members publicly burned their party cards. First Secretary Vezirov decamped to Moscow and Ayaz Mutalibov was appointed his successor in a free vote of party officials. The ethnic Russian Viktor Polyanichko remained second secretary and the power behind the throne.Black Garden de Waal, Thomas. 2003. NYU. ISBN 0-8147-1945-7, p. 94
Following the hardliners' takeover, the September 30, 1990 elections (runoffs on October 14) were characterized by intimidation; several Popular Front candidates were jailed, two were murdered, and unabashed ballot stuffing took place even in the presence of Western observers."Conflict, cleavage, and change in Central Asia and the Caucasus" Karen Dawisha and Bruce Parrott (eds.), Cambridge University Press. 1997 ISBN 0-521-59731-5, p. 124 The election results reflected the threatening environment; out of the 350 members, 280 were Communists, with only 45 opposition candidates from the Popular Front and other non-communist groups, who together formed a Democratic Bloc ("Dembloc"). In May 1990 Mutalibov was elected Chairman of the Supreme Soviet unopposed."Conflict, cleavage, and change in Central Asia and the Caucasus", Karen Dawisha and Bruce Parrott (eds.), Cambridge University Press. 1997 ISBN 0-521-59731-5, p. 125
The Western republics
Ukraine
thumb|Viacheslav Chornovil, a prominent Ukrainian dissident and a lead figure of Rukh.
On January 21, 1990, Rukh organized a human chain between Kiev, Lviv, and Ivano-Frankivsk. Hundreds of thousands joined hands to commemorate the proclamation of Ukrainian independence in 1918 and the reunification of Ukrainian lands one year later (1919 Unification Act). On January 23, 1990, the Ukrainian Greek-Catholic Church held its first synod since its liquidation by the Soviets in 1946 (an act which the gathering declared invalid). On February 9, 1990, the Ukrainian Ministry of Justice officially registered Rukh. However, the registration came too late for Rukh to stand its own candidates for the parliamentary and local elections on March 4. At the 1990 elections of people's deputies to the Supreme Council (Verkhovna Rada), candidates from the Democratic Bloc won landslide victories in western Ukrainian oblasts. A majority of the seats required run-off elections. On March 18, Democratic candidates scored further victories in the run-offs. The Democratic Bloc gained about 90 out of 450 seats in the new parliament.
On April 6, 1990, the Lviv City Council voted to return St. George Cathedral to the Ukrainian Greek Catholic Church. The Russian Orthodox Church refused to yield. On April 29–30, 1990, the Ukrainian Helsinki Union disbanded to form the Ukrainian Republican Party. On May 15 the new parliament convened. The bloc of conservative communists held 239 seats; the Democratic Bloc, which had evolved into the National Council, had 125 deputies. On June 4, 1990, two candidates remained in the protracted race for parliament chair. The leader of the Communist Party of Ukraine (CPU), Volodymyr Ivashko, was elected with 60 percent of the vote as more than 100 opposition deputies boycotted the election. On June 5–6, 1990, Metropolitan Mstyslav of the U.S.-based Ukrainian Orthodox Church was elected patriarch of the Ukrainian Autocephalous Orthodox Church (UAOC) during that Church's first synod. The UAOC declared its full independence from the Moscow Patriarchate of the Russian Orthodox Church, which in March had granted autonomy to the Ukrainian Orthodox church headed by Metropolitan Filaret.
left|thumb|Leonid Kravchuk became Ukraine's leader in 1990.
On June 22, 1990, Volodymyr Ivashko withdrew his candidacy for leader of the Communist Party of Ukraine in view of his new position in parliament. Stanislav Hurenko was elected first secretary of the CPU. On July 11, Ivashko resigned from his post as chairman of the Ukrainian Parliament after he was elected deputy general secretary of the Communist Party of the Soviet Union. The Parliament accepted the resignation a week later, on July 18. On July 16 Parliament overwhelmingly approved the Declaration on State Sovereignty of Ukraine – with a vote of 355 in favour and four against. The people's deputies voted 339 to 5 to proclaim July 16 a Ukrainian national holiday.
On July 23, 1990, Leonid Kravchuk was elected to replace Ivashko as parliament chairman. On July 30, Parliament adopted a resolution on military service ordering Ukrainian soldiers "in regions of national conflict such as Armenia and Azerbaijan" to return to Ukrainian territory. On August 1, Parliament voted overwhelmingly to shut down the Chernobyl Nuclear Power Plant. On August 3, it adopted a law on the economic sovereignty of the Ukrainian republic. On August 19, the first Ukrainian Catholic liturgy in 44 years was celebrated at St. George Cathedral. On September 5–7, the International Symposium on the Great Famine of 1932–1933 was held in Kiev. On September 8, the first "Youth for Christ" rally since 1933 was held in Lviv, with 40,000 participants. On September 28–30, the Green Party of Ukraine held its founding congress. On September 30, nearly 100,000 people marched in Kiev to protest against the new union treaty proposed by Gorbachev.
On October 1, 1990, parliament reconvened amid mass protests calling for the resignations of Kravchuk and of Prime Minister Vitaliy Masol, a leftover from the previous régime. Students erected a tent city on October Revolution Square, where they continued the protest.
On October 17 Masol resigned, and on October 20, Patriarch Mstyslav I of Kiev and all Ukraine arrived at Saint Sophia’s Cathedral, ending a 46-year banishment from his homeland. On October 23, 1990, Parliament voted to delete Article 6 of the Ukrainian Constitution, which referred to the "leading role" of the Communist Party.
On October 25–28, 1990, Rukh held its second congress and declared that its principal goal was the "renewal of independent statehood for Ukraine". On October 28 UAOC faithful, supported by Ukrainian Catholics, demonstrated near St. Sophia’s Cathedral as newly elected Russian Orthodox Church Patriarch Aleksei and Metropolitan Filaret celebrated liturgy at the shrine. On November 1, the leaders of the Ukrainian Greek Catholic Church and of the Ukrainian Autocephalous Orthodox Church, respectively, Metropolitan Volodymyr Sterniuk and Patriarch Mstyslav, met in Lviv during anniversary commemorations of the 1918 proclamation of the Western Ukrainian National Republic.
On November 18, 1990, the Ukrainian Autocephalous Orthodox Church enthroned Mstyslav as Patriarch of Kiev and all Ukraine during ceremonies at Saint Sophia's Cathedral. Also on November 18, Canada announced that its consul-general to Kiev would be Ukrainian-Canadian Nestor Gayowsky. On November 19, the United States announced that its consul to Kiev would be Ukrainian-American John Stepanchuk. On November 19, the chairmen of the Ukrainian and Russian parliaments, respectively, Kravchuk and Yeltsin, signed a 10-year bilateral pact. In early December 1990 the Party of Democratic Rebirth of Ukraine was founded; on December 15, the Democratic Party of Ukraine was founded.
Central Asian republics
Tajikistan: Dushanbe riots
thumb|right|200px|upright|Tajik nationalist protesters squared off against the Soviet Army in Dushanbe.
On February 12–14, 1990, anti-government riots took place in Tajikistan's capital, Dushanbe, as tensions rose between nationalist Tajiks and ethnic Armenian refugees, after the Sumgait pogrom and anti-Armenian riots in Azerbaijan in 1988. During these riots, demonstrations sponsored by the nationalist Rastokhez movement turned violent. The protesters demanded radical economic and political reforms, and rioters torched government buildings; shops and other businesses were attacked and looted. Twenty-six people were killed and 565 injured.
Kirghizia: Osh massacre
In June 1990, Osh and its environs experienced bloody ethnic clashes between ethnic Kirghiz nationalist group Osh Aymaghi and Uzbek nationalist group Adolat over the land of a former collective farm. There were about 1,200 casualties, including over 300 dead and 462 seriously injured. The riots broke out over the division of land resources in and around the city.
1991
Moscow’s crisis
On January 14, 1991, Nikolai Ryzhkov resigned from his post as Chairman of the Council of Ministers, or premier of the Soviet Union, and was succeeded by Valentin Pavlov in the newly established post of Prime Minister of the Soviet Union.
On March 17, 1991, in a Union-wide referendum 76.4 percent of voters endorsed retention of a reformed Soviet Union.1991: March Referendum SovietHistory.org The Baltic republics, Armenia, Georgia, and Moldova boycotted the referendum, as did Checheno-Ingushetia (an autonomous republic within Russia that had a strong desire for independence and by now referred to itself as Ichkeria).Charles King, The Ghost of Freedom: History of the Caucasus In each of the other nine republics, a majority of the voters supported the retention of a reformed Soviet Union.
Russia’s President Boris Yeltsin
thumb|Boris Yeltsin, Russia's first democratically elected President
On June 12, 1991, Boris Yeltsin won 57 percent of the popular vote in the democratic elections for the newly created post of President of the Russian SFSR, defeating Gorbachev's preferred candidate, Nikolai Ryzhkov, who won 16 percent of the vote. In his election campaign, Yeltsin criticized the "dictatorship of the center," but did not yet suggest that he would introduce a market economy.
Baltic republics
Lithuania
On January 13, 1991, Soviet troops, along with the KGB Spetsnaz Alpha Group, stormed the Vilnius TV Tower in Lithuania to suppress the independence movement. Fourteen unarmed civilians were killed and hundreds more injured. On the night of July 31, 1991, Russian OMON from Riga, the Soviet military headquarters in the Baltics, assaulted the Lithuanian border post in Medininkai and killed seven Lithuanian servicemen. This event further weakened the Soviet Union's position internationally and domestically, and stiffened Lithuanian resistance.
Latvia
thumb|left|Barricade erected in Riga to prevent the Soviet Army from reaching the Latvian Parliament, July 1991.
The bloody attacks in Lithuania prompted Latvians to organize defensive barricades (the events are still today known as "The Barricades") blocking access to strategically important buildings and bridges in Riga. Soviet attacks in the ensuing days resulted in six deaths and several injuries; one person died later.
Estonia
Estonia officially restored its independence during the coup (see below) in the dark hours of August 20, 1991, at 11:03 pm Tallinn time. Many Estonian volunteers surrounded the Tallinn TV Tower, preparing to cut off the communication channels after Soviet troops seized it, and refused to be intimidated. When Edgar Savisaar confronted the Soviet troops for ten minutes, they finally retreated from the TV tower, their attempt to break the Estonian resistance having failed.
August Coup
thumb|right|Tanks in Red Square during the 1991 coup attempt.
Faced with growing separatism, Gorbachev sought to restructure the Soviet Union into a less centralized state. On August 20, 1991, the Russian SFSR was scheduled to sign a New Union Treaty that would have converted the Soviet Union into a federation of independent republics with a common president, foreign policy and military. It was strongly supported by the Central Asian republics, which needed the economic advantages of a common market to prosper. However, it would have meant some degree of continued Communist Party control over economic and social life.
More radical reformists were increasingly convinced that a rapid transition to a market economy was required, even if the eventual outcome meant the disintegration of the Soviet Union into several independent states. Independence also accorded with Yeltsin's desires as president of the Russian Federation, as well as those of regional and local authorities to get rid of Moscow’s pervasive control. In contrast to the reformers' lukewarm response to the treaty, the conservatives, "patriots," and Russian nationalists of the USSR – still strong within the CPSU and the military – were opposed to weakening the Soviet state and its centralized power structure.
left|thumb|Russian President Boris Yeltsin speaks atop a tank outside the White House in defiance of the August 1991 coup.
On August 19, 1991, Gorbachev's vice president, Gennady Yanayev, Prime Minister Valentin Pavlov, Defense Minister Dmitry Yazov, KGB chief Vladimir Kryuchkov and other senior officials acted to prevent the union treaty from being signed by forming the "State Committee on the State of Emergency," which put Gorbachev – on holiday in Foros, Crimea – under house arrest and cut off his communications. The coup leaders issued an emergency decree suspending political activity and banning most newspapers.
Coup organizers expected some popular support but found that public sympathy in large cities and in the republics was largely against them, manifested by public demonstrations, especially in Moscow. Russian SFSR President Yeltsin condemned the coup and garnered popular support.
Thousands of Muscovites came out to defend the White House (the Russian Federation's parliament and Yeltsin's office), the symbolic seat of Russian sovereignty at the time. The organizers tried but ultimately failed to arrest Yeltsin, who rallied opposition to the coup with speech-making atop a tank. The special forces dispatched by the coup leaders took up positions near the White House, but members refused to storm the barricaded building. The coup leaders also neglected to jam foreign news broadcasts, so many Muscovites watched it unfold live on CNN. Even the isolated Gorbachev was able to stay abreast of developments by tuning into BBC World Service on a small transistor radio.http://www.asc.upenn.edu/gerbner/Asset.aspx?assetID=883
After three days, on August 21, 1991, the coup collapsed. The organizers were detained and Gorbachev returned as president, albeit with his power much depleted.
The fall: August–December 1991
thumb|Signing of the agreement to establish the Commonwealth of Independent States (CIS), December 8, 1991.
On August 24, 1991, Gorbachev dissolved the Central Committee of the CPSU, resigned as the party's general secretary, and dissolved all party units in the government. Five days later, the Supreme Soviet indefinitely suspended all CPSU activity on Soviet territory, effectively ending Communist rule in the Soviet Union and dissolving the only remaining unifying force in the country.
The Soviet Union collapsed with dramatic speed in the last quarter of 1991. Between August and December, 10 republics declared their independence, largely out of fear of another coup. By the end of September, Gorbachev no longer had the authority to influence events outside of Moscow. He was challenged even there by Yeltsin, who had begun taking over what remained of the Soviet government, including the Kremlin.
On September 17, 1991, General Assembly resolution numbers 46/4, 46/5, and 46/6 admitted Estonia, Latvia, and Lithuania to the United Nations, conforming to Security Council resolution numbers 709, 710, and 711 passed on September 12 without a vote.
The final round of the Soviet Union's collapse began with a Ukrainian popular referendum on December 1, 1991, in which 90 percent of voters opted for independence. The secession of Ukraine, the second-most powerful republic, ended any realistic chance of Gorbachev keeping the Soviet Union together even on a limited scale. The leaders of the three principal Slavic republics, Russia, Ukraine, and Belarus (formerly Byelorussia), agreed to discuss possible alternatives to the union.
On December 8, the leaders of Russia, Ukraine, and Belarus secretly met in Belavezhskaya Pushcha, in western Belarus, and signed the Belavezha Accords, which proclaimed the Soviet Union had ceased to exist and announced formation of the Commonwealth of Independent States (CIS) as a looser association to take its place. They also invited other republics to join the CIS. Gorbachev called it an unconstitutional coup. However, by this time there was no longer any reasonable doubt that, as the preamble of the Accords put it, "the USSR, as a subject of international law and a geopolitical reality, is ceasing its existence."
On December 12, the Supreme Soviet of the Russian SFSR formally ratified the Belavezha Accords and renounced the 1922 Union Treaty. It also recalled the Russian deputies from the Supreme Soviet of the USSR. The legality of this action was questionable, since Soviet law did not allow a republic to unilaterally recall its deputies. On paper, the Russian SFSR had the constitutional right to "freely secede from the Soviet Union" (art. 69 of the RSFSR Constitution, art. 72 of the USSR Constitution), but according to USSR laws 1409-I (enacted on April 3, 1990) and 1457-I (enacted on April 26, 1990) this could be done only by a referendum with two-thirds of all registered voters supporting it; no such referendum on secession from the USSR was held in the Russian SFSR. However, no one in either Russia or the Kremlin objected. Any objections from the latter would have likely had no effect, since the Soviet government had effectively been rendered impotent long before December. In effect, the largest and most powerful republic had seceded from the Union. Later that day, Gorbachev hinted for the first time that he was considering stepping down.Francis X. Clines, "Gorbachev is Ready to Resign as Post-Soviet Plan Advances", New York Times, December 13, 1991.
On December 17, 1991, along with 28 European countries, the European Community, and four non-European countries, the three Baltic Republics and nine of the twelve remaining Soviet republics signed the European Energy Charter in The Hague as sovereign states.
thumb|left|Five double-headed Russian eagles (below) replacing the former state emblem of the Soviet Union and the "СССР" letters (above) on the façade of the Grand Kremlin Palace after the dissolution of the USSR.
Doubts remained over whether the Belavezha Accords had legally dissolved the Soviet Union, since they were signed by only three republics. However, on December 21, 1991, representatives of 11 of the 12 remaining republics – all except Georgia – signed the Alma-Ata Protocol, which confirmed the dissolution of the Union and formally established the CIS. They also "accepted" Gorbachev's resignation. While Gorbachev hadn't made any formal plans to leave the scene yet, he did tell CBS News that he would resign as soon as he saw that the CIS was indeed a reality.Francis X. Clines, "11 Soviet States Form Commonwealth Without Clearly Defining Its Powers", New York Times, December 22, 1991.
In a nationally televised speech early in the morning of December 25, 1991, Gorbachev resigned as president of the USSR – or, as he put it, "I hereby discontinue my activities at the post of President of the Union of Soviet Socialist Republics." He declared the office extinct, and all of its powers (such as control of the nuclear arsenal) were ceded to Yeltsin. A week earlier, Gorbachev had met with Yeltsin and accepted the fait accompli of the Soviet Union's dissolution. On the same day, the Supreme Soviet of the Russian SFSR adopted a statute to change Russia's legal name from "Russian Soviet Federative Socialist Republic" to "Russian Federation," showing that it was now a sovereign state.
On the night of December 25, at 7:32 p.m. Moscow time, after Gorbachev left the Kremlin, the Soviet flag was lowered for the last time, and the Russian tricolor was raised in its place, symbolically marking the end of the Soviet Union. On that same day, the President of the United States, George H.W. Bush, gave a brief televised speech officially recognizing the independence of the 11 remaining republics.
On December 26, the upper chamber of the Union's Supreme Soviet voted both itself and the Soviet Union out of existence (the lower chamber, the Council of the Union, had been unable to work since December 12, when the recall of the Russian deputies left it without a quorum). The following day Yeltsin moved into Gorbachev's former office, though the Russian authorities had taken over the suite two days earlier. By the end of 1991, the few remaining Soviet institutions that had not been taken over by Russia ceased operation, and individual republics assumed the central government's role.
The Alma-Ata Protocol also addressed other issues, including UN membership. Notably, Russia was authorized to assume the Soviet Union's UN membership, including its permanent seat on the Security Council. The Soviet Ambassador to the UN delivered a letter signed by Russian President Yeltsin to the UN Secretary-General dated December 24, 1991, informing him that by virtue of the Alma-Ata Protocol, Russia was the successor state to the USSR. After being circulated among the other UN member states, with no objection raised, the statement was declared accepted on the last day of the year, December 31, 1991.
Chronology of declarations of restored states
thumb|right|300px|Animated map showing independent states, and territorial changes to the Soviet Union, in chronological order.
Before the coup
Lithuania – March 11, 1990
Estonia (transitional) – March 30, 1990
Latvia (transitional) – May 4, 1990
During the coup
Estonia (effective) – August 20, 1991
Latvia (effective) – August 21, 1991
Chronology of declarations of newly independent states
States with limited recognition are shown in italics.
Before the coup
Armenia – August 23, 1990
Abkhazia – August 25, 1990
Transnistria – September 2, 1990
Georgia – April 9, 1991
During the coup
thumb|Zviazda, a state newspaper of the Belarusian SSR, issue from August 25, 1991. The headline reads, Belarus is independent!
Gagauzia – August 19, 1991
After the coup
Ukraine – August 24, 1991
Belarus/Byelorussia – August 25, 1991
Moldova – August 27, 1991
Azerbaijan – August 30, 1991
Kyrgyzstan – August 31, 1991
Uzbekistan – September 1, 1991
Nagorno-Karabakh Republic – September 2, 1991
Tajikistan – September 9, 1991
Turkmenistan – October 27, 1991
Chechen Republic of Ichkeria – November 1, 1991
– November 28, 1991
Russia – December 12, 1991 (the Supreme Soviet of the Russian SFSR formally ratified the Belavezha Accords, renounced the 1922 Union Treaty, and recalled Russian deputies from the Supreme Soviet of the USSR).
Kazakhstan – December 16, 1991
Legacy
According to a 2014 poll, 57 percent of citizens of Russia regretted the collapse of the Soviet Union, while 30 percent said they did not. Elderly people tended to be more nostalgic than younger Russians. In a similar poll held in February 2005, 50 percent of respondents in Ukraine stated that they regretted the disintegration of the Soviet Union."Russians, Ukrainians Evoke Soviet Union", Angus Reid Global Monitor (01/02/05)
On January 25, 2016, Russian President Vladimir Putin blamed Lenin and his support for the individual republics' right to political secession for the breakup of the Soviet Union.
The breakdown of economic ties that followed the collapse of the Soviet Union led to a severe economic crisis and catastrophic fall in living standards in post-Soviet states and the former Eastern Bloc,"Child poverty soars in eastern Europe", BBC News, October 11, 2000 which was even worse than the Great Depression. "What Can Transition Economies Learn from the First Ten Years? A New World Bank Report", Transition Newsletter, World Bank, K-A.kg"Who Lost Russia?", New York Times, October 8, 2000 Even before Russia's financial crisis in 1998, Russia's GDP was half of what it had been in the early 1990s.
United Nations membership
In a letter dated December 24, 1991, Boris Yeltsin, the President of the Russian Federation, informed the United Nations Secretary-General that the membership of the Soviet Union in the Security Council and all other UN organs was being continued by the Russian Federation with the support of the 11 member countries of the Commonwealth of Independent States.
However, the Belorussian Soviet Socialist Republic and the Ukrainian Soviet Socialist Republic had already joined the UN as original members on October 24, 1945, together with the Soviet Union. After declaring independence, the Ukrainian Soviet Socialist Republic changed its name to Ukraine on August 24, 1991, and on September 19, 1991, the Belorussian Soviet Socialist Republic informed the UN that it had changed its name to the Republic of Belarus.
The other twelve independent states established from the former Soviet Republics were all admitted to the UN:
September 17, 1991: Estonia, Latvia, and Lithuania
March 2, 1992: Armenia, Azerbaijan, Kazakhstan, Kyrgyzstan, Moldova, Tajikistan, Turkmenistan, and Uzbekistan
July 31, 1992: Georgia
Explanations of Soviet dissolution in historiography
Historiography on Soviet dissolution can be roughly classified in two groups: intentionalist accounts and structuralist accounts.
Intentionalist accounts contend that Soviet collapse was not inevitable, and resulted from the policies and decisions of specific individuals (usually Gorbachev and Yeltsin). One characteristic example of intentionalist writing is historian Archie Brown's The Gorbachev Factor, which argues that Gorbachev was the main force in Soviet politics at least in the period 1985–1988; even later, he largely spearheaded the political reforms and developments, as opposed to 'being led by events'. This was especially true of the policies of perestroika and glasnost, market initiatives, and his foreign policy stance, a view political scientist George Breslauer has seconded, labelling Gorbachev a "man of the events". In a slightly different vein, David Kotz and Fred Weir have contended that Soviet elites were responsible for spurring on both nationalism and capitalism, from which they could personally benefit (this is also demonstrated by their continued presence in the higher economic and political echelons of post-Soviet republics).
Structuralist accounts, by contrast, take a more deterministic view, in which Soviet dissolution was an outcome of deeply rooted structural issues, which planted a 'time-bomb'. For example, Edward Walker has argued that while minority nationalities were denied power at the Union level, confronted by a culturally destabilizing form of economic modernization, and subjected to a certain amount of Russification, they were at the same time strengthened by several policies pursued by the Soviet regime (such as indigenization of leadership, support for local languages, etc.) – which over time created conscious nations. Furthermore, the basic legitimating myth of the Soviet Union's federative system – that it was a voluntary and mutual union of allied peoples – eased the task of secession/independence.
See also
Belavezha Accords
Breakup of Yugoslavia
Dissolution of Czechoslovakia
German reunification
History of the Soviet Union (1982–91)
History of Russia (1992–present)
Predictions of Soviet collapse
Union of Sovereign States
References
Further reading
Aron, Leon. Boris Yeltsin: A Revolutionary Life. Harper Collins (2000). ISBN 0-00-653041-9
Aron, Leon. "The 'Mystery' of Soviet Collapse." Journal of Democracy 17.2 (2006): 21–35.
Beissinger, Mark. "Nationalism and the Collapse of Soviet Communism" Contemporary European History 18.3 (2009): 331–347.
Brown, Archie. The Gorbachev Factor. Oxford University Press (1997). ISBN 978-0-19288-052-9.
Cohen, Stephen. "Was the Soviet System Reformable?" Slavic Review 63.3 (2004): 459–488.
Crawshaw, Steve. Goodbye to the USSR: The Collapse of Soviet Power. Bloomsbury (1992). ISBN 0-7475-1561-1
Dallin, Alexander. "Causes of the Collapse of the USSR." Post-Soviet Affairs 8.4 (1992).
Dawisha, Karen & Parrott, Bruce (Editors). "Conflict, cleavage, and change in Central Asia and the Caucasus". Cambridge University Press (1997). ISBN 0-521-59731-5
de Waal, Thomas. Black Garden. NYU (2003). ISBN 0-8147-1945-7
Gorbachev, Mikhail. Memoirs. Doubleday (1995). ISBN 0-385-40668-1
Gvosdev, Nikolas K., ed. The Strange Death of Soviet Communism: A Post-Script. Transaction Publishers (2008). ISBN 978-1-41280-698-5
Kotz, David, and Fred Weir. “The Collapse of the Soviet Union was a Revolution from Above.” In The Rise and Fall of the Soviet Union, edited by Laurie Stoff, 155–164. Thomson Gale (2006).
Mayer, Tom. "The Collapse of Soviet Communism: A Class Dynamics Interpretation." Social Forces 80.3 (2002): 759–811.
O'Clery, Conor. Moscow December 25, 1991: The Last Day of the Soviet Union. Transworld Ireland (2011). ISBN 978-1-84827-112-8
Segrillo, Angelo. The Decline of the Soviet Union: A Hypothesis on Industrial Paradigms, Technological Revolutions and the Roots of Perestroika. LEA Working Paper Series, no. 2, December 2016.
Plokhy, Serhii. The Last Empire: The Final Days of the Soviet Union. Oneworld (2014). ISBN 978-1-78074-646-3
Strayer, Robert. Why Did the Soviet Union Collapse? Understanding Historical Change. M. E. Sharpe (1998). ISBN 978-0-76560-004-2
Suny, Ronald. Revenge of the Past: Nationalism, Revolution, and the Collapse of the Soviet Union. Stanford University Press (1993). ISBN 978-0-80472-247-6
Walker, Edward W. Dissolution: Sovereignty and the Breakup of the Soviet Union. Rowman & Littlefield Publishers (2003). ISBN 978-0-74252-453-8
External links
Photographs of the fall of the USSR by photojournalist Alain-Pierre Hovasse, a first-hand witness of these events.
Guide to the James Hershberg poster collection, Special Collections Research Center, The Estelle and Melvin Gelman Library, The George Washington University. This collection contains posters documenting the changing social and political culture in the former Soviet Union and Europe (particularly Eastern Europe) during the collapse of Communism in Eastern Europe and the breakup of the Soviet Union. A significant portion of the posters in this collection were used in a 1999 exhibit at Gelman Library titled "Goodbye Comrade: An Exhibition of Images from the Revolution of '89 and the Collapse of Communism."
Lowering of the Soviet flag in December 25, 1991
U.S. Response to the End of the USSR from the Dean Peter Krogh Foreign Affairs Digital Archives
.
Category:1985 in the Soviet Union
Category:1986 in the Soviet Union
Category:1987 in the Soviet Union
Category:1988 in the Soviet Union
Category:1989 in the Soviet Union
Category:1990 in the Soviet Union
Category:1991 in the Soviet Union
Category:1991 in politics
Category:1991 in Asia
Category:1991 in Europe
Category:1991 in Russia
Category:1991 in the United States
Category:Soviet Union
High-definition television
High-definition television (HDTV) is a television system providing an image resolution that is substantially higher than that of standard-definition television.
HDTV may be transmitted in various formats:
1080p: 1920×1080p: 2,073,600 pixels (~2.07 megapixels) per frame
1080i: 1920×1080i: 1,036,800 pixels (~1.04 MP) per field or 2,073,600 pixels (~2.07 MP) per frame
Some countries also use a non-standard CEA resolution, such as 1440×1080i: 777,600 pixels (~0.78 MP) per field or 1,555,200 pixels (~1.56 MP) per frame
720p: 1280×720p: 921,600 pixels (~0.92 MP) per frame
The letter "p" here stands for progressive scan, while "i" indicates interlaced.
When transmitted at two megapixels per frame, HDTV provides about five times as many pixels as SD (standard-definition television). The larger number of pixels yields a more detailed picture, and progressive-scan HDTV formats can also refresh the full image up to 60 times per second, compared with roughly 30 interlaced frames per second for SD, producing a clearer and more stable picture.
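As a back-of-the-envelope check of the pixel comparison, the following sketch (in Python, for illustration only) computes pixels per frame; the SD frame sizes used, 720×576 and 720×480, are common digital sampling rasters assumed here for the comparison.

```python
# Illustrative arithmetic only: compares HDTV and SD pixel counts per frame.
# The SD frame sizes below (720x576 for 625-line systems, 720x480 for
# 525-line systems) are common digital sampling rasters, used here as
# assumptions for the comparison.

def pixels(width, height):
    """Return the number of pixels in one full frame."""
    return width * height

hd_1080 = pixels(1920, 1080)   # 2,073,600 (~2.07 MP)
hd_720 = pixels(1280, 720)     # 921,600 (~0.92 MP)
sd_576 = pixels(720, 576)      # 414,720
sd_480 = pixels(720, 480)      # 345,600

print(f"1080-line HD vs 576-line SD: {hd_1080 / sd_576:.1f}x the pixels")  # ~5.0x
print(f"1080-line HD vs 480-line SD: {hd_1080 / sd_480:.1f}x the pixels")  # ~6.0x
```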
History
The term high definition once described a series of television systems originating in August 1936; however, these systems were high definition only in comparison with earlier mechanical systems offering as few as 30 lines of resolution. The competition between companies and nations to create true "HDTV" spanned the entire 20th century, as each new system surpassed the definition of the last. At the beginning of the 21st century, the race has continued with 4K, 5K and current 8K systems.
The British high-definition TV service started trials in August 1936 and a regular service on 2 November 1936, using both the (mechanical) Baird 240-line sequential scan (later to be inaccurately rechristened 'progressive') and the (electronic) Marconi-EMI 405-line interlaced systems. The Baird system was discontinued in February 1937. In 1938 France followed with its own 441-line system, variants of which were also used by a number of other countries. The US NTSC 525-line system joined in 1941. In 1949 France introduced an even higher-resolution standard at 819 lines, a system that would count as high definition even by today's standards, but it was monochrome only, and the technical limitations of the time prevented it from achieving the definition of which it should have been capable. All of these systems used interlacing and a 4:3 aspect ratio except the 240-line system, which was progressive (actually described at the time by the technically correct term "sequential"), and the 405-line system, which started as 5:4 and later changed to 4:3. The 405-line system adopted the (at that time) revolutionary idea of interlaced scanning to overcome the flicker problem of the 240-line system's 25 Hz frame rate. The 240-line system could have doubled its frame rate, but this would have meant that the transmitted signal would have doubled in bandwidth, an unacceptable option as the video baseband bandwidth was required to be not more than 3 MHz.
Color broadcasts retained these line standards: the US NTSC color system, introduced in 1953, was compatible with the earlier monochrome system and therefore kept its 525 lines of resolution. European standards did not follow until the 1960s, when the PAL and SECAM color systems were added to the monochrome 625-line broadcasts.
The Nippon Hōsō Kyōkai (NHK, the Japan Broadcasting Corporation) began conducting research to "unlock the fundamental mechanism of video and sound interactions with the five human senses" in 1964, after the Tokyo Olympics. NHK set out to create an HDTV system that scored much higher in subjective tests than NTSC, which had previously been dubbed "HDTV". This new system, NHK Color, created in 1972, used 1125 lines, a 5:3 aspect ratio and a 60 Hz refresh rate. The Society of Motion Picture and Television Engineers (SMPTE), headed by Charles Ginsburg, became the testing and study authority for HDTV technology in the international arena. SMPTE would test HDTV systems from different companies from every conceivable perspective, but the problem of combining the different formats plagued the technology for many years.
There were four major HDTV systems tested by SMPTE in the late 1970s, and in 1979 an SMPTE study group released A Study of High Definition Television Systems:
EIA monochrome: 4:3 aspect ratio, 1023 lines, 60 Hz
NHK color: 5:3 aspect ratio, 1125 lines, 60 Hz
NHK monochrome: 4:3 aspect ratio, 2125 lines, n/a Hz
BBC colour: 8:3 aspect ratio, 1501 lines, n/a Hz
Since the formal adoption of digital video broadcasting's (DVB) widescreen HDTV transmission modes in the mid- to late 2000s, the 525-line NTSC (and PAL-M) systems, as well as the European 625-line PAL and SECAM systems, have been regarded as standard-definition television systems.
Analog systems
Early HDTV broadcasting used analog technology, but today it is transmitted digitally and uses video compression.
In 1949, France started its transmissions with an 819-line system (with 737 active lines). The system was monochrome only, and was used only on VHF for the first French TV channel. It was discontinued in 1983.
In 1958, the Soviet Union developed Transformator (Трансформатор, meaning Transformer), the first high-resolution television system, capable of producing an image composed of 1,125 lines of resolution and aimed at providing teleconferencing for military command. It was a research project and the system was never deployed by either the military or consumer broadcasting.
In 1979, the Japanese state broadcaster NHK first developed consumer high-definition television with a 5:3 display aspect ratio. The system, known as Hi-Vision or MUSE (after the Multiple sub-Nyquist sampling encoding used to encode the signal), required about twice the bandwidth of the existing NTSC system but provided about four times the resolution (1080i/1125 lines). Satellite test broadcasts started in 1989, regular testing started in 1991, and regular broadcasting of BS-9ch commenced on November 25, 1994, featuring commercial and NHK programming.
In 1981, the MUSE system was demonstrated for the first time in the United States, using the same 5:3 aspect ratio as the Japanese system. Upon visiting a demonstration of MUSE in Washington, US President Ronald Reagan was impressed and officially declared it "a matter of national interest" to introduce HDTV to the US.James Sudalnik and Victoria Kuhl, "High definition television"
Several systems were proposed as the new standard for the US, including the Japanese MUSE system, but all were rejected by the FCC because of their higher bandwidth requirements. At this time, the number of television channels was growing rapidly and bandwidth was already a problem. A new standard had to be more efficient, needing less bandwidth for HDTV than the existing NTSC.
Demise of analog HD systems
The limited standardization of analog HDTV in the 1990s did not lead to global HDTV adoption, because the technical and economic constraints of the time did not permit broadcasters to give HDTV more bandwidth than normal television.
Early HDTV commercial experiments, such as NHK's MUSE, required over four times the bandwidth of a standard-definition broadcast. Despite efforts made to reduce analog HDTV to about twice the bandwidth of SDTV, these television formats were still distributable only by satellite.
In addition, recording and reproducing an HDTV signal was a significant technical challenge in the early years of HDTV (Sony HDVS). Japan remained the only country with successful public broadcasting of analog HDTV, with seven broadcasters sharing a single channel.
Rise of digital compression
Since 1972, the International Telecommunication Union's radio telecommunications sector (ITU-R) had been working on creating a global recommendation for analog HDTV. These recommendations, however, did not fit in the broadcasting bands which could reach home users. The standardization of MPEG-1 in 1993 also led to the acceptance of recommendation ITU-R BT.709. In anticipation of these standards the Digital Video Broadcasting (DVB) organisation was formed, an alliance of broadcasters, consumer electronics manufacturers and regulatory bodies. The DVB develops and agrees upon specifications which are formally standardised by ETSI.
DVB created first the standard for DVB-S digital satellite TV, DVB-C digital cable TV and DVB-T digital terrestrial TV. These broadcasting systems can be used for both SDTV and HDTV. In the US the Grand Alliance proposed ATSC as the new standard for SDTV and HDTV. Both ATSC and DVB were based on the MPEG-2 standard, although DVB systems may also be used to transmit video using the newer and more efficient H.264/MPEG-4 AVC compression standards. Common for all DVB standards is the use of highly efficient modulation techniques for further reducing bandwidth, and foremost for reducing receiver-hardware and antenna requirements.
In 1983, the International Telecommunication Union's radio telecommunications sector (ITU-R) set up a working party (IWP11/6) with the aim of setting a single international HDTV standard. One of the thornier issues concerned a suitable frame/field refresh rate, the world already having split into two camps, 25/50 Hz and 30/60 Hz, largely due to the differences in mains frequency. The IWP11/6 working party considered many views and throughout the 1980s served to encourage development in a number of video digital processing areas, not least conversion between the two main frame/field rates using motion vectors, which led to further developments in other areas. While a comprehensive HDTV standard was not in the end established, agreement on the aspect ratio was achieved.
Initially the existing 5:3 aspect ratio had been the main candidate but, due to the influence of widescreen cinema, the aspect ratio 16:9 (1.78) eventually emerged as being a reasonable compromise between 5:3 (1.67) and the common 1.85 widescreen cinema format. An aspect ratio of 16:9 was duly agreed upon at the first meeting of the IWP11/6 working party at the BBC's Research and Development establishment in Kingswood Warren. The resulting ITU-R Recommendation ITU-R BT.709-2 ("Rec. 709") includes the 16:9 aspect ratio, a specified colorimetry, and the scan modes 1080i (1,080 actively interlaced lines of resolution) and 1080p (1,080 progressively scanned lines). The British Freeview HD trials used MBAFF, which contains both progressive and interlaced content in the same encoding.
It also includes the alternative 1440×1152 HDMAC scan format. (According to some reports, a mooted 750-line (720p) format (720 progressively scanned lines) was viewed by some at the ITU as an enhanced television format rather than a true HDTV format, and so was not included, although 1920×1080i and 1280×720p systems for a range of frame and field rates were defined by several US SMPTE standards.)
Inaugural HDTV broadcast in the United States
HDTV technology was introduced in the United States in the late 1980s and made official in 1993 by the Digital HDTV Grand Alliance, a group of television, electronic equipment and communications companies consisting of AT&T Bell Labs, General Instrument, Philips, Sarnoff, Thomson, Zenith and the Massachusetts Institute of Technology. Field testing of HDTV at 199 sites in the United States was completed on August 14, 1994. The first public HDTV broadcast in the United States occurred on July 23, 1996, when the Raleigh, North Carolina television station WRAL-HD began broadcasting from the existing tower of WRAL-TV southeast of Raleigh, winning a race to be first with the HD Model Station in Washington, D.C., which began broadcasting on July 31, 1996 with the callsign WHD-TV, based out of the facilities of NBC owned and operated station WRC-TV. The American Advanced Television Systems Committee (ATSC) HDTV system had its public launch on October 29, 1998, during the live coverage of astronaut John Glenn's return mission to space on board the Space Shuttle Discovery. The signal was transmitted coast-to-coast and was seen by the public in science centers and other public theaters specially equipped to receive and display the broadcast.
European HDTV broadcasts
The first HDTV transmissions in Europe, albeit not direct-to-home, began in 1990, when the Italian broadcaster RAI used the HD-MAC and MUSE HDTV technologies to broadcast the 1990 FIFA World Cup. The matches were shown in 8 cinemas in Italy and 2 in Spain. The connection with Spain was made via the Olympus satellite link from Rome to Barcelona and then with a fiber optic connection from Barcelona to Madrid.Le Mini Serie - Italia '90 - The First Step of Digital HDTV part I Le Mini Serie - Italia '90 - The First Step of Digital HDTV part II After some HDTV transmissions in Europe the standard was abandoned in the mid-1990s.
The first regular broadcasts started on January 1, 2004 when the Belgian company Euro1080 launched the HD1 channel with the traditional Vienna New Year's Concert. Test transmissions had been active since the IBC exhibition in September 2003, but the New Year's Day broadcast marked the official launch of the HD1 channel, and the official start of direct-to-home HDTV in Europe.
Euro1080, a division of the former and now bankrupt Belgian TV services company Alfacam, broadcast HDTV channels to break the pan-European stalemate of "no HD broadcasts mean no HD TVs bought means no HD broadcasts ..." and kick-start HDTV interest in Europe.Bains, Geoff. "Take The High Road" What Video & Widescreen TV (April, 2004) 22–24 The HD1 channel was initially free-to-air and mainly comprised sporting, dramatic, musical and other cultural events broadcast with a multi-lingual soundtrack on a rolling schedule of 4 or 5 hours per day.
These first European HDTV broadcasts used the 1080i format with MPEG-2 compression on a DVB-S signal from SES's Astra 1H satellite. Euro1080 transmissions later changed to MPEG-4/AVC compression on a DVB-S2 signal in line with subsequent broadcast channels in Europe.
Despite delays in some countries,HDTV in Germany: Lack of Innovation Management Leads to Market Failure, diffusion of HDTV in Germany from the DIW Berlin the number of European HD channels and viewers has risen steadily since the first HDTV broadcasts, with SES's annual Satellite Monitor market survey for 2010 reporting more than 200 commercial channels broadcasting in HD from Astra satellites, 185 million HD-capable TVs sold in Europe (60 million in 2010 alone), and 20 million households (27% of all European digital satellite TV homes) watching HD satellite broadcasts (16 million via Astra satellites).http://www.ses-astra.com/business/en/support/market-research/index.php
In December 2009 the United Kingdom became the first European country to deploy high definition content using the new DVB-T2 transmission standard, as specified in the Digital TV Group (DTG) D-book, on digital terrestrial television.
The Freeview HD service currently contains 13 HD channels and was rolled out region by region across the UK in accordance with the digital switchover process, finally being completed in October 2012. However, Freeview HD is not the first HDTV service over digital terrestrial television in Europe; Italy's Rai HD channel started broadcasting in 1080i on April 24, 2008, using the DVB-T transmission standard.
In October 2008 France deployed five high-definition channels using the DVB-T transmission standard on digital terrestrial distribution.
Notation
HDTV broadcast systems are identified with three major parameters:
Frame size in pixels is defined as number of horizontal pixels × number of vertical pixels, for example 1280 × 720 or 1920 × 1080. Often the number of horizontal pixels is implied from context and is omitted, as in the case of 720p and 1080p.
Scanning system is identified with the letter p for progressive scanning or i for interlaced scanning.
Frame rate is identified as the number of video frames per second. For interlaced systems, the number of frames per second should be specified, but it is not uncommon to see the field rate incorrectly used instead.
If all three parameters are used, they are specified in the following form: [frame size][scanning system][frame or field rate] or [frame size]/[frame or field rate][scanning system]. Often, frame size or frame rate can be dropped if its value is implied from context. In this case, the remaining numeric parameter is specified first, followed by the scanning system.
For example, 1920×1080p25 identifies progressive scanning format with 25 frames per second, each frame being 1,920 pixels wide and 1,080 pixels high. The 1080i25 or 1080i50 notation identifies interlaced scanning format with 25 frames (50 fields) per second, each frame being 1,920 pixels wide and 1,080 pixels high. The 1080i30 or 1080i60 notation identifies interlaced scanning format with 30 frames (60 fields) per second, each frame being 1,920 pixels wide and 1,080 pixels high. The 720p60 notation identifies progressive scanning format with 60 frames per second, each frame being 720 pixels high; 1,280 pixels horizontally are implied.
50 Hz systems support three scanning rates: 50i, 25p and 50p. 60 Hz systems support a much wider set of frame rates: 59.94i, 60i, 23.976p, 24p, 29.97p, 30p, 59.94p and 60p. In the days of standard definition television, the fractional rates were often rounded up to whole numbers, e.g. 23.976p was often called 24p, or 59.94i was often called 60i. 60 Hz high definition television supports both fractional and slightly different integer rates, therefore strict usage of notation is required to avoid ambiguity. Nevertheless, 29.97p/59.94i is almost universally called 60i, likewise 23.976p is called 24p.
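The fractional rates above are the integer rates scaled by 1000/1001, a factor inherited from NTSC color timing. A short illustrative check in Python:

```python
from fractions import Fraction

# The "fractional" frame rates are the nominal rates scaled by 1000/1001,
# a legacy of NTSC color timing. A quick check of the exact values:
for nominal in (24, 30, 60):
    exact = Fraction(nominal * 1000, 1001)
    print(f"{nominal} Hz nominal -> {float(exact):.3f} Hz ({exact})")
# 24 -> 23.976, 30 -> 29.970, 60 -> 59.940
```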
For the commercial naming of a product, the frame rate is often dropped and is implied from context (e.g., a 1080i television set). A frame rate can also be specified without a resolution. For example, 24p means 24 progressive scan frames per second, and 50i means 25 interlaced frames per second.
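As an illustration of this naming convention, the following sketch parses format labels such as those above. It is not an industry library; the rule that a bare number below 100 (as in 24p or 50i) names a rate rather than a frame height is a simplifying assumption.

```python
import re

# Illustrative sketch of the naming convention described above. Parses
# strings such as "1080i25", "720p60", "1920x1080p25" or "24p" into a small
# dict of width, height, scan and rate; any field not present is None.
PATTERN = re.compile(
    r"^(?:(?P<width>\d+)[x×])?"      # optional horizontal size, e.g. "1920x"
    r"(?P<first>\d+(?:\.\d+)?)"      # vertical size or bare frame rate
    r"(?P<scan>[pi])"                # p = progressive, i = interlaced
    r"(?P<rate>\d+(?:\.\d+)?)?$"     # optional frame/field rate
)

def parse_format(label):
    m = PATTERN.match(label.strip().lower())
    if not m:
        raise ValueError(f"unrecognised format: {label!r}")
    width = int(m.group("width")) if m.group("width") else None
    first = float(m.group("first"))
    rate = float(m.group("rate")) if m.group("rate") else None
    # Simplifying assumption: a bare "24p" or "50i" names a rate, not a height.
    if rate is None and first < 100:
        return {"width": width, "height": None, "scan": m.group("scan"), "rate": first}
    return {"width": width, "height": int(first), "scan": m.group("scan"), "rate": rate}

print(parse_format("1080i25"))       # height 1080, interlaced, 25 frames (50 fields)/s
print(parse_format("1920x1080p25"))  # explicit frame size
print(parse_format("24p"))           # rate-only label
```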
There is no single standard for HDTV color support. Colors are typically broadcast using a (10-bit per channel) YUV color space but, depending on the underlying image-generating technology of the receiver, are then converted to an RGB color space using standardized algorithms. When transmitted directly over the Internet, the colors are typically pre-converted to 8-bit RGB channels for additional storage savings, on the assumption that the video will only be viewed on an (sRGB) computer screen. As an added benefit to the original broadcasters, the losses of the pre-conversion essentially make these files unsuitable for professional TV re-broadcasting.
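As an illustration of the kind of standardized conversion mentioned above, the sketch below converts one Y'CbCr sample to non-linear R'G'B' using the Rec. 709 luma coefficients. It assumes normalized, full-range values; real broadcast signals are quantized (commonly 8- or 10-bit, limited range), so a practical decoder would first rescale the integer code values.

```python
# A minimal sketch of a luma/chroma to RGB conversion, using the Rec. 709
# luma coefficients (Kr = 0.2126, Kb = 0.0722). Assumes normalized,
# full-range Y'CbCr (Y' in [0, 1], Cb/Cr in [-0.5, 0.5]).

KR, KB = 0.2126, 0.0722
KG = 1.0 - KR - KB  # 0.7152

def ycbcr_to_rgb(y, cb, cr):
    """Convert one normalized Y'CbCr sample to non-linear R'G'B'."""
    r = y + 2.0 * (1.0 - KR) * cr
    b = y + 2.0 * (1.0 - KB) * cb
    g = (y - KR * r - KB * b) / KG
    # Clamp to the displayable range.
    return tuple(min(1.0, max(0.0, c)) for c in (r, g, b))

print(ycbcr_to_rgb(1.0, 0.0, 0.0))  # white -> (1.0, 1.0, 1.0)
print(ycbcr_to_rgb(0.5, 0.0, 0.0))  # mid grey -> (0.5, 0.5, 0.5)
```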
Most HDTV systems support resolutions and frame rates defined either in ATSC table 3 or in the EBU specification. The most common are noted below.
Display resolutions
For each supported video format, the native display resolutions commonly used to present it are listed below, together with the pixel count (actual, and as advertised in megapixels), the image and pixel aspect ratios, and a description.
For 720p (1280×720) video:
1024×768 (XGA): 786,432 pixels (0.8 MP advertised); 4:3 image aspect ratio, 1:1 pixel aspect ratio. Typically a PC resolution (XGA); also a native resolution on many entry-level plasma displays with non-square pixels.
1280×720: 921,600 pixels (0.9 MP); 16:9 image aspect ratio, 1:1 pixel aspect ratio. Standard HDTV resolution and a typical PC resolution (WXGA), frequently used by high-end video projectors; also used for 750-line video, as defined in SMPTE 296M, ATSC A/53, ITU-R BT.1543.
1366×768 (WXGA): 1,049,088 pixels (1.0 MP); 683:384 (approx. 16:9) image aspect ratio, 1:1 pixel aspect ratio. A typical PC resolution (WXGA); also used by many HD ready TV displays based on LCD technology.
For 1080p/1080i (1920×1080) video:
1920×1080: 2,073,600 pixels (2.1 MP); 16:9 image aspect ratio, 1:1 pixel aspect ratio. Standard HDTV resolution, used by Full HD and HD ready 1080p TV displays such as high-end LCD, plasma and rear-projection TVs, and a typical PC resolution (lower than WUXGA); also used for 1125-line video, as defined in SMPTE 274M, ATSC A/53, ITU-R BT.709.
A second set of screen resolutions is used in production and recording:
720p (1280×720): screen resolution 1248×702 (clean aperture); 876,096 pixels (0.9 MP); 16:9 image aspect ratio, 1:1 pixel aspect ratio. Used for 750-line video with faster artifact/overscan compensation, as defined in SMPTE 296M.
1080p (1920×1080): screen resolution 1888×1062 (clean aperture); 2,005,056 pixels (2.0 MP); 16:9 image aspect ratio, 1:1 pixel aspect ratio. Used for 1125-line video with faster artifact/overscan compensation, as defined in SMPTE 274M.
1080i (1920×1080): screen resolution 1440×1080 (HDCAM/HDV); 1,555,200 pixels (1.6 MP); 16:9 image aspect ratio, 4:3 pixel aspect ratio. Used for anamorphic 1125-line video in the HDCAM and HDV formats introduced by Sony and defined (also as a luminance subsampling matrix) in SMPTE D11.
At a minimum, HDTV has twice the linear resolution of standard-definition television (SDTV), thus showing greater detail than either analog television or regular DVD. The technical standards for broadcasting HDTV also handle the 16:9 aspect ratio images without using letterboxing or anamorphic stretching, thus increasing the effective image resolution.
A very high resolution source may require more bandwidth than available in order to be transmitted without loss of fidelity. The lossy compression that is used in all digital HDTV storage and transmission systems will distort the received picture, when compared to the uncompressed source.
Standard frame or field rates
ATSC and DVB define the following frame rates for use with the various broadcast standards:http://www.etsi.org/deliver/etsi_ts/101100_101199/101154/01.11.01_60/ts_101154v011101p.pdf#page=19
23.976 Hz (film-looking frame rate compatible with NTSC clock speed standards)
24 Hz (international film and ATSC high-definition material)
25 Hz (PAL film, DVB standard-definition and high-definition material)
29.97 Hz (NTSC film and standard-definition material)
30 Hz (NTSC film, ATSC high-definition material)
50 Hz (DVB high-definition material)
59.94 Hz (ATSC high-definition material)
60 Hz (ATSC high-definition material)
The optimum format for a broadcast depends upon the type of videographic recording medium used and the image's characteristics. For best fidelity to the source the transmitted field ratio, lines, and frame rate should match those of the source.
PAL, SECAM and NTSC frame rates technically apply only to analogue standard definition television, not to digital or high definition broadcasts. However, with the roll out of digital broadcasting, and later HDTV broadcasting, countries retained their heritage systems. HDTV in former PAL and SECAM countries operates at a frame rate of 25/50 Hz, while HDTV in former NTSC countries operates at 30/60 Hz.
Types of media
Standard 35mm photographic film used for cinema projection has a much higher image resolution than HDTV systems, and is exposed and projected at a rate of 24 frames per second (frame/s). To be shown on standard television, in PAL-system countries, cinema film is scanned at the TV rate of 25 frame/s, causing a speedup of 4.1 percent, which is generally considered acceptable. In NTSC-system countries, the TV scan rate of 30 frame/s would cause a perceptible speedup if the same were attempted, and the necessary correction is performed by a technique called 3:2 Pulldown: Over each successive pair of film frames, one is held for three video fields (1/20 of a second) and the next is held for two video fields (1/30 of a second), giving a total time for the two frames of 1/12 of a second and thus achieving the correct average film frame rate.
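The 3:2 pulldown cadence described above can be sketched as a simple mapping of film frames onto video fields; the frame labels A to D below are placeholders for illustration.

```python
# A small sketch of the 3:2 pulldown cadence: film frames alternately occupy
# three and two video fields, so four film frames fill ten fields
# (i.e. five interlaced video frames at ~30 frame/s).

def pulldown_32(film_frames):
    """Yield one entry per video field, naming the film frame it shows."""
    fields_per_frame = [3, 2]  # alternate 3 fields, then 2 fields
    for index, frame in enumerate(film_frames):
        for _ in range(fields_per_frame[index % 2]):
            yield frame

fields = list(pulldown_32(["A", "B", "C", "D"]))
print(fields)        # ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']
print(len(fields))   # 10 fields = 5 video frames for 4 film frames
```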
Non-cinematic HDTV video recordings intended for broadcast are typically recorded either in 720p or 1080i format as determined by the broadcaster. 720p is commonly used for Internet distribution of high-definition video, because most computer monitors operate in progressive-scan mode. 720p also imposes less strenuous storage and decoding requirements compared to both 1080i and 1080p. 1080p/24, 1080i/30, 1080i/25, and 720p/30 are most often used on Blu-ray Disc.
Modern systems
In the US, residents in the line of sight of television station broadcast antennas can receive free, over-the-air programming with a television set that has an ATSC tuner (most sets sold since 2009 have one). This is achieved with a TV aerial, just as it has been since the 1940s, except that the major network signals are now broadcast in high definition (ABC, Fox, and Ion Television broadcast at 720p resolution; CBS, My Network TV, NBC, PBS, and The CW at 1080i). Because their digital signals use the broadcast channel more efficiently, many broadcasters are adding multiple channels to their signals. Laws about antennas were updated before the change to digital terrestrial broadcasts; the new laws prohibit homeowners' associations and city governments from banning the installation of antennas.
Additionally, cable-ready TV sets can display HD content without using an external box. They have a QAM tuner built-in and/or a card slot for inserting a CableCARD.
High-definition image sources include terrestrial broadcast, direct broadcast satellite, digital cable, IPTV (including GoogleTV, Roku boxes and AppleTV or built into "Smart Televisions"), Blu-ray video disc (BD), and internet downloads.
Sony's PlayStation 3 has extensive HD compatibility because of its built-in Blu-ray disc based player, as does Microsoft's Xbox 360 with the addition of Netflix and Windows Media Center HTPC streaming capabilities. On November 18, 2012, Nintendo released a high-definition gaming platform, the Wii U, which includes TV remote control features in addition to IPTV streaming features like Netflix. The HD capabilities of the consoles have influenced some developers to port games from past consoles onto the PS3, Xbox 360 and Wii U, often with remastered or upscaled graphics.
Recording and compression
HDTV can be recorded to D-VHS (Digital-VHS or Data-VHS), to W-VHS (analog only), to an HDTV-capable digital video recorder (for example DirecTV's high-definition digital video recorder, Sky HD's set-top box, Dish Network's VIP 622 or VIP 722 high-definition digital video recorder receivers, or TiVo's Series 3 or HD recorders), or to an HDTV-ready HTPC. Some cable boxes can receive or record two or more broadcasts at a time in HDTV format, and HDTV programming (some included in the monthly cable service subscription price, some for an additional fee) can be played back with the cable company's on-demand feature.
The massive amount of data storage required to archive uncompressed streams meant that inexpensive uncompressed storage options were not available to the consumer. In 2008, the Hauppauge 1212 Personal Video Recorder was introduced. This device accepts HD content through component video inputs and stores the content in MPEG-2 format in a .ts file or in a Blu-ray compatible format .m2ts file on the hard drive or DVD burner of a computer connected to the PVR through a USB 2.0 interface. More recent systems are able to record a broadcast high definition program in its 'as broadcast' format or transcode to a format more compatible with Blu-ray.
Analog tape recorders with bandwidth capable of recording analog HD signals, such as W-VHS recorders, are no longer produced for the consumer market and are both expensive and scarce in the secondary market.
In the United States, as part of the FCC's plug and play agreement, cable companies are required to provide customers who rent HD set-top boxes with a set-top box that has "functional" FireWire (IEEE 1394), on request. None of the direct broadcast satellite providers have offered this feature on any of their supported boxes, but some cable TV companies have; satellite boxes are not included in the FCC mandate. This content is protected by encryption known as 5C. This encryption can prevent duplication of content or simply limit the number of copies permitted, thus effectively denying most if not all fair use of the content.
See also
Display motion blur
Glossary of video terms
High Efficiency Video Coding
List of digital television deployments by country
Optimum HDTV viewing distance
Ultra-high-definition television
References
Further reading
Joel Brinkley (1997), Defining Vision: The Battle for the Future of Television, New York: Harcourt Brace.
High Definition Television: The Creation, Development and Implementation of HDTV Technology by Philip J. Cianci (McFarland & Company, 2012)
Technology, Television, and Competition (New York: Cambridge University Press, 2004)
External links
History
The Italian HDTV experience from 1980s to 2006 - in Italian - C.R.I.T./RAI
The HDTV Archive Project
European adoption
Images formats for HDTV, article from the EBU Technical Review.
High Definition for Europe – a progressive approach, article from the EBU Technical Review.
High Definition (HD) Image Formats for Television Production, technical report from the EBU
Category:1936 introductions
Category:1990 introductions
Category:ATSC
Category:Consumer electronics
Category:Digital television
Category:Film and video technology
Category:History of television
Category:Television terminology
Alloy
thumb|Wire rope made from steel, which is a metal alloy whose major component is iron, with carbon content between 0.02% and 2.14% by mass.
An alloy is a mixture of metals or a mixture of a metal and another element. Alloys are defined by a metallic bonding character.Callister, W. D. "Materials Science and Engineering: An Introduction" 2007, 7th edition, John Wiley and Sons, Inc. New York, Section 4.3 and Chapter 9. An alloy may be a solid solution of metal elements (a single phase) or a mixture of metallic phases (two or more solutions). Intermetallic compounds are alloys with a defined stoichiometry and crystal structure. Zintl phases are also sometimes considered alloys depending on bond types (see also: Van Arkel-Ketelaar triangle for information on classifying bonding in binary compounds).
Alloys are used in a wide variety of applications. In some cases, a combination of metals may reduce the overall cost of the material while preserving important properties. In other cases, the combination of metals imparts synergistic properties to the constituent metal elements such as corrosion resistance or mechanical strength. Examples of alloys are steel, solder, brass, pewter, duralumin, bronze and amalgams.
The alloy constituents are usually measured by mass. Alloys are usually classified as substitutional or interstitial alloys, depending on the atomic arrangement that forms the alloy. They can be further classified as homogeneous (consisting of a single phase), or heterogeneous (consisting of two or more phases) or intermetallic.
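As an illustration of compositions measured by mass, the sketch below converts mass fractions into atom (mole) fractions. The 70/30 copper-zinc brass and the 0.8%-carbon steel are example figures chosen for illustration, not taken from the text, and the atomic masses are rounded standard values.

```python
# Illustrative sketch: converting an alloy composition quoted by mass into
# atom (mole) fractions. The compositions below are example figures only,
# and the atomic masses are rounded standard values.

ATOMIC_MASS = {"Cu": 63.55, "Zn": 65.38, "Fe": 55.845, "C": 12.011}

def atom_fractions(mass_fractions):
    """mass_fractions: dict of element -> mass fraction (should sum to 1)."""
    moles = {el: w / ATOMIC_MASS[el] for el, w in mass_fractions.items()}
    total = sum(moles.values())
    return {el: n / total for el, n in moles.items()}

# A brass measured as 70% copper and 30% zinc by mass:
print(atom_fractions({"Cu": 0.70, "Zn": 0.30}))   # ~70.6% Cu, ~29.4% Zn by atom count
# A plain carbon steel with 0.8% carbon by mass is ~3.6% carbon by atom count:
print(atom_fractions({"Fe": 0.992, "C": 0.008}))
```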
Introduction
thumb|Liquid bronze, being poured into molds during casting.
thumb|A brass lamp.
An alloy is a mixture of chemical elements, which forms an impure substance (admixture) that retains the characteristics of a metal. An alloy is distinct from an impure metal in that, with an alloy, the added elements are well controlled to produce desirable properties, while impure metals such as wrought iron are less controlled, but are often considered useful. Alloys are made by mixing two or more elements, at least one of which is a metal. This is usually called the primary metal or the base metal, and the name of this metal may also be the name of the alloy. The other constituents may or may not be metals but, when mixed with the molten base, they will be soluble and dissolve into the mixture.
The mechanical properties of alloys will often be quite different from those of its individual constituents. A metal that is normally very soft (malleable), such as aluminium, can be altered by alloying it with another soft metal, such as copper. Although both metals are very soft and ductile, the resulting aluminium alloy will have much greater strength. Adding a small amount of non-metallic carbon to iron trades its great ductility for the greater strength of an alloy called steel. Due to its very-high strength, but still substantial toughness, and its ability to be greatly altered by heat treatment, steel is one of the most useful and common alloys in modern use. By adding chromium to steel, its resistance to corrosion can be enhanced, creating stainless steel, while adding silicon will alter its electrical characteristics, producing silicon steel.
Although the elements of an alloy usually must be soluble in the liquid state, they may not always be soluble in the solid state. If the metals remain soluble when solid, the alloy forms a solid solution, becoming a homogeneous structure consisting of identical crystals, called a phase. If as the mixture cools the constituents become insoluble, they may separate to form two or more different types of crystals, creating a heterogeneous microstructure of different phases, some with more of one constituent than the other phase has. However, in other alloys, the insoluble elements may not separate until after crystallization occurs. These alloys are called intermetallic alloys because, if cooled very quickly, they first crystallize as a homogeneous phase, but they are supersaturated and unstable with the secondary constituents. As time passes, the atoms of these supersaturated alloys separate from the crystal lattice, becoming more stable, and form intermetallic (within the crystal lattice) phases that serve to reinforce the crystals internally.
Some alloys, such as electrum which is an alloy consisting of silver and gold, occur naturally. Meteorites are sometimes made of naturally occurring alloys of iron and nickel, but are not native to the Earth. One of the first alloys made by humans was bronze, which is a mixture of the metals tin and copper. Bronze was an extremely useful alloy to the ancients, because it is much stronger and harder than either of its components. Steel was another common alloy. However, in ancient times, it could only be created as an accidental byproduct from the heating of iron ore in fires (smelting) during the manufacture of iron. Other ancient alloys include pewter, brass and pig iron. In the modern age, steel can be created in many forms. Carbon steel can be made by varying only the carbon content, producing soft alloys like mild steel or hard alloys like spring steel. Alloy steels can be made by adding other elements, such as chromium, molybdenum, vanadium or nickel, resulting in alloys such as high-speed steel or tool steel. Small amounts of manganese are usually alloyed with most modern steels because of its ability to remove unwanted impurities, like phosphorus, sulfur and oxygen, which can have detrimental effects on the alloy. However, most alloys were not created until the 1900s, such as various aluminium, titanium, nickel, and magnesium alloys. Some modern superalloys, such as incoloy, inconel, and hastelloy, may consist of a multitude of different elements.
Terminology
thumb|A gate valve, made from Inconel.
The term alloy is used to describe a mixture of atoms in which the primary constituent is a metal. The primary metal is called the base, the matrix, or the solvent. The secondary constituents are often called solutes. If there is a mixture of only two types of atoms (not counting impurities) such as a copper-nickel alloy, then it is called a binary alloy. If there are three types of atoms forming the mixture, such as iron, nickel and chromium, then it is called a ternary alloy. An alloy with four constituents is a quaternary alloy, while a five-part alloy is termed a quinary alloy. Because the percentage of each constituent can be varied, with any mixture the entire range of possible variations is called a system. In this respect, all of the various forms of an alloy containing only two constituents, like iron and carbon, is called a binary system, while all of the alloy combinations possible with a ternary alloy, such as alloys of iron, carbon and chromium, is called a ternary system.Bauccio, Michael (1993) ASM metals reference book. ASM International. ISBN 0-87170-478-1.
Although an alloy is technically an impure metal, when referring to alloys, the term "impurities" usually denotes those elements which are not desired. Such impurities are introduced from the base metals and alloying elements, but are removed during processing. For instance, sulfur is a common impurity in steel. Sulfur combines readily with iron to form iron sulfide, which is very brittle, creating weak spots in the steel. Lithium, sodium and calcium are common impurities in aluminium alloys, which can have adverse effects on the structural integrity of castings. Conversely, otherwise pure-metals that simply contain unwanted impurities are often called "impure metals" and are not usually referred to as alloys. Oxygen, present in the air, readily combines with most metals to form metal oxides; especially at higher temperatures encountered during alloying. Great care is often taken during the alloying process to remove excess impurities, using fluxes, chemical additives, or other methods of extractive metallurgy.Davis, Joseph R. (1993) ASM Specialty Handbook: Aluminum and Aluminum Alloys. ASM International. p. 211. ISBN 978-0-87170-496-2.
In practice, some alloys are used so predominantly with respect to their base metals that the name of the primary constituent is also used as the name of the alloy. For example, 14 karat gold is an alloy of gold with other elements. Similarly, the silver used in jewelry and the aluminium used as a structural building material are also alloys.
The term "alloy" is sometimes used in everyday speech as a synonym for a particular alloy. For example, automobile wheels made of an aluminium alloy are commonly referred to as simply "alloy wheels", although in point of fact steels and most other metals in practical use are also alloys. Steel is such a common alloy that many items made from it, like wheels, barrels, or girders, are simply referred to by the name of the item, assuming it is made of steel. When made from other materials, they are typically specified as such, (i.e.: "bronze wheel", "plastic barrel", or "wood girder").
Theory
Alloying a metal is done by combining it with one or more other elements that often enhance its properties. For example, the combination of carbon with iron produces steel, which is stronger than iron, its primary element. The electrical and thermal conductivity of alloys is usually lower than that of the pure metals. The physical properties, such as density, reactivity, Young's modulus of an alloy may not differ greatly from those of its base element, but engineering properties such as tensile strength,Mills, Adelbert Phillo (1922) Materials of Construction: Their Manufacture and Properties, John Wiley & sons, inc, originally published by the University of Wisconsin, Madison ductility, and shear strength may be substantially different from those of the constituent materials. This is sometimes a result of the sizes of the atoms in the alloy, because larger atoms exert a compressive force on neighboring atoms, and smaller atoms exert a tensile force on their neighbors, helping the alloy resist deformation. Sometimes alloys may exhibit marked differences in behavior even when small amounts of one element are present. For example, impurities in semiconducting ferromagnetic alloys lead to different properties, as first predicted by White, Hogan, Suhl, Tian Abrie and Nakamura.
Some alloys are made by melting and mixing two or more metals. Bronze, an alloy of copper and tin, was the first alloy discovered, during the prehistoric period now known as the bronze age. It was harder than pure copper and originally used to make tools and weapons, but was later superseded by metals and alloys with better properties. In later times bronze has been used for ornaments, bells, statues, and bearings. Brass is an alloy made from copper and zinc.
Unlike pure metals, most alloys do not have a single melting point, but a melting range during which the material is a mixture of solid and liquid phases (a slush). The temperature at which melting begins is called the solidus, and the temperature when melting is just complete is called the liquidus. For many alloys there is a particular alloy proportion (in some cases more than one), called either a eutectic mixture or a peritectic composition, which gives the alloy a unique and low melting point, and no liquid/solid slush transition.
Heat-treatable alloys
thumb|left|Allotropes of iron, (alpha iron and gamma iron) showing the differences in atomic arrangement.
thumb|Photomicrographs of steel. Top photo: Annealed (slowly cooled) steel forms a heterogeneous, lamellar microstructure called pearlite, consisting of the phases cementite (light) and ferrite (dark). Bottom photo: Quenched (quickly cooled) steel forms a single phase called martensite, in which the carbon remains trapped within the crystals, creating internal stresses.
Alloying elements are added to a base metal, to induce hardness, toughness, ductility, or other desired properties. Most metals and alloys can be work hardened by creating defects in their crystal structure. These defects are created during plastic deformation by hammering, bending, extruding, etcetera, and are permanent unless the metal is recrystallized. Otherwise, some alloys can also have their properties altered by heat treatment. Nearly all metals can be softened by annealing, which recrystallizes the alloy and repairs the defects, but not as many can be hardened by controlled heating and cooling. Many alloys of aluminium, copper, magnesium, titanium, and nickel can be strengthened to some degree by some method of heat treatment, but few respond to this to the same degree as does steel.
The base metal iron of the iron-carbon alloy known as steel undergoes a change in the arrangement (allotropy) of the atoms of its crystal matrix at a certain temperature (the exact temperature depending on carbon content). This allows the smaller carbon atoms to enter the interstices of the iron crystal. When this diffusion happens, the carbon atoms are said to be in solution in the iron, forming a particular single, homogeneous, crystalline phase called austenite. If the steel is cooled slowly, the carbon can diffuse out of the iron and it will gradually revert to its low temperature allotrope. During slow cooling, the carbon atoms will no longer be as soluble with the iron, and will be forced to precipitate out of solution, nucleating into a more concentrated form of iron carbide (Fe3C) in the spaces between the pure iron crystals. The steel then becomes heterogeneous, as it is formed of two phases, the iron-carbon phase called cementite (or carbide), and pure iron ferrite. Such a heat treatment produces a steel that is rather soft. If the steel is cooled quickly, however, the carbon atoms will not have time to diffuse and precipitate out as carbide, but will be trapped within the iron crystals. When rapidly cooled, a diffusionless (martensite) transformation occurs, in which the carbon atoms become trapped in solution. This causes the iron crystals to deform as the crystal structure tries to change to its low temperature state, leaving those crystals very hard but much less ductile (brittle).
While the high strength of steel results when diffusion and precipitation is prevented (forming martensite), most heat-treatable alloys are precipitation-hardening alloys that depend on the diffusion of alloying elements to achieve their strength. When heated to form a solution and then cooled quickly, these alloys become much softer than normal, during the diffusionless transformation, but then harden as they age. The solutes in these alloys will precipitate over time, forming intermetallic phases, which are difficult to discern from the base metal. Unlike steel, in which the solid solution separates into different crystal phases (carbide and ferrite), precipitation hardening alloys form different phases within the same crystal. These intermetallic alloys appear homogeneous in crystal structure, but tend to behave heterogeneously, becoming hard and somewhat brittle.
Substitutional and interstitial alloys
thumb|Different atomic mechanisms of alloy formation, showing pure metal, substitutional, interstitial, and a combination of the two.
When a molten metal is mixed with another substance, there are two mechanisms that can cause an alloy to form, called atom exchange and the interstitial mechanism. The relative size of each element in the mix plays a primary role in determining which mechanism will occur. When the atoms are relatively similar in size, the atom exchange method usually happens, where some of the atoms composing the metallic crystals are substituted with atoms of the other constituent. This is called a substitutional alloy. Examples of substitutional alloys include bronze and brass, in which some of the copper atoms are substituted with either tin or zinc atoms respectively. In the case of the interstitial mechanism, one atom is usually much smaller than the other and can not successfully substitute for the other type of atom in the crystals of the base metal. Instead, the smaller atoms become trapped in the spaces between the atoms of the crystal matrix, called the interstices. This is referred to as an interstitial alloy. Steel is an example of an interstitial alloy, because the very small carbon atoms fit into interstices of the iron matrix. Stainless steel is an example of a combination of interstitial and substitutional alloys, because the carbon atoms fit into the interstices, but some of the iron atoms are substituted by nickel and chromium atoms.Dossett, Jon L. and Boyer, Howard E. (2006) Practical heat treating. ASM International. pp. 1–14. ISBN 1-61503-110-3.
History and examples
Meteoric iron
thumb|A meteorite and a hatchet that was forged from meteoric iron.
The use of alloys by humans started with the use of meteoric iron, a naturally occurring alloy of nickel and iron. It is the main constituent of iron meteorites which occasionally fall down on Earth from outer space. As no metallurgic processes were used to separate iron from nickel, the alloy was used as it was. Meteoric iron could be forged from a red heat to make objects such as tools, weapons, and nails. In many cultures it was shaped by cold hammering into knives and arrowheads. They were often used as anvils. Meteoric iron was very rare and valuable, and difficult for ancient people to work.Buchwald, pp. 13–22
Bronze and brass
right|thumb| Bronze axe 1100 BC
thumb|Bronze doorknocker
Iron is usually found as iron ore on Earth, except for one deposit of native iron in Greenland, which was used by the Inuit people.Buchwald, pp. 35–37 Native copper, however, was found worldwide, along with silver, gold and platinum, which were also used to make tools, jewelry, and other objects since Neolithic times. Copper was the hardest of these metals, and the most widely distributed. It became one of the most important metals to the ancients. Eventually, humans learned to smelt metals such as copper and tin from ore, and, around 2500 BC, began alloying the two metals to form bronze, which is much harder than its ingredients. Tin was rare, however, being found mostly in Great Britain. In the Middle East, people began alloying copper with zinc to form brass.Buchwald, pp. 39–41 Ancient civilizations took into account the mixture and the various properties it produced, such as hardness, toughness and melting point, under various conditions of temperature and work hardening, developing much of the information contained in modern alloy phase diagrams. Arrowheads from the Chinese Qin dynasty (around 200 BC) were often constructed with a hard bronze-head, but a softer bronze-tang, combining the alloys to prevent both dulling and breaking during use.Emperor's Ghost Army. pbs.org. November 2014
Amalgams
Mercury has been smelted from cinnabar for thousands of years. Mercury dissolves many metals, such as gold, silver, and tin, to form amalgams (an alloy in a soft paste, or liquid form at ambient temperature). Amalgams have been used since 200 BC in China for plating objects with precious metals, called gilding, such as armor and mirrors. The ancient Romans often used mercury-tin amalgams for gilding their armor. The amalgam was applied as a paste and then heated until the mercury vaporized, leaving the gold, silver, or tin behind.Rapp, George (2009) Archaeomineralogy. Springer. p. 180. ISBN 3-540-78593-0 Mercury was often used in mining, to extract precious metals like gold and silver from their ores.Miskimin, Harry A. (1977) The economy of later Renaissance Europe, 1460–1600. Cambridge University Press. p. 31. ISBN 0-521-29208-5.
Precious-metal alloys
thumb|Electrum, a natural alloy of silver and gold, was often used for making coins.
Many ancient civilizations alloyed metals for purely aesthetic purposes. In ancient Egypt and Mycenae, gold was often alloyed with copper to produce red-gold, or iron to produce a bright burgundy-gold. Gold was often found alloyed with silver or other metals to produce various types of colored gold. These metals were also used to strengthen each other, for more practical purposes. Copper was often added to silver to make sterling silver, increasing its strength for use in dishes, silverware, and other practical items. Quite often, precious metals were alloyed with less valuable substances as a means to deceive buyers.Nicholson, Paul T. and Shaw, Ian (2000) Ancient Egyptian materials and technology. Cambridge University Press. pp. 164–167. ISBN 0-521-45257-0. Around 250 BC, Archimedes was commissioned by the king to find a way to check the purity of the gold in a crown, leading to the famous bath-house shouting of "Eureka!" upon the discovery of Archimedes' principle.Kay, Melvyn (2008) Practical Hydraulics. Taylor and Francis. p. 45. ISBN 0-415-35115-4.
Pewter
The term pewter covers a variety of alloys consisting primarily of tin. As a pure metal, tin was much too soft to be used for any practical purpose. However, in the Bronze Age, tin was a rare metal and, in many parts of Europe and the Mediterranean, was often valued higher than gold. To make jewelry, forks and spoons, or other objects from tin, it was usually alloyed with other metals to increase its strength and hardness. These metals were typically lead, antimony, bismuth or copper. These solutes sometimes were added individually in varying amounts, or added together, making a wide variety of things, ranging from practical items, like dishes, surgical tools, candlesticks or funnels, to decorative items such as earrings and hair clips.
The earliest examples of pewter come from ancient Egypt, around 1450 BC. The use of pewter was widespread across Europe, from France to Norway and Britain (where most of the ancient tin was mined) to the Near East.Hull, Charles (1992) Pewter. Shire Publications. pp. 3–4; ISBN 0-7478-0152-5 The alloy was also used in China and the Far East, arriving in Japan around 800 AD, where it was used for making objects like ceremonial vessels, tea canisters, or chalices used in shinto shrines.Brinkley, Frank (1904) Japan and China: Japan, its history, arts, and literature. Oxford University. p. 317
Steel and pig iron
The first known smelting of iron began in Anatolia, around 1800 BC. Called the bloomery process, it produced very soft but ductile wrought iron. By 800 BC, iron-making technology had spread to Europe, arriving in Japan around 700 AD. Pig iron, a very hard but brittle alloy of iron and carbon, was being produced in China as early as 1200 BC, but did not arrive in Europe until the Middle Ages. Pig iron has a lower melting point than iron, and was used for making cast-iron. However, these metals found little practical use until the introduction of crucible steel around 300 BC. These steels were of poor quality, and the introduction of pattern welding, around the 1st century AD, sought to balance the extreme properties of the alloys by laminating them, to create a tougher metal. Around 700 AD, the Japanese began folding bloomery-steel and cast-iron in alternating layers to increase the strength of their swords, using clay fluxes to remove slag and impurities. This method of Japanese swordsmithing produced one of the purest steel-alloys of the early Middle Ages.Smith, Cyril (1960) History of metallography. MIT Press. pp. 2–4. ISBN 0-262-69120-5.
While the use of iron started to become more widespread around 1200 BC, mainly because of interruptions in the trade routes for tin, the metal was much softer than bronze. However, very small amounts of steel (an alloy of iron and around 1% carbon) were always a byproduct of the bloomery process. The ability to modify the hardness of steel by heat treatment had been known since 1100 BC, and the rare material was valued for the manufacture of tools and weapons. Because the ancients could not produce temperatures high enough to melt iron fully, the production of steel in decent quantities did not occur until the introduction of blister steel during the Middle Ages. This method introduced carbon by heating wrought iron in charcoal for long periods of time, but the penetration of carbon was not very deep, so the alloy was not homogeneous. In 1740, Benjamin Huntsman began melting blister steel in a crucible to even out the carbon content, creating the first process for the mass production of tool steel. Huntsman's process was used for manufacturing tool steel until the early 1900s.
With the introduction of the blast furnace to Europe in the Middle Ages, pig iron was able to be produced in much higher volumes than wrought iron. Because pig iron could be melted, people began to develop processes of reducing the carbon in the liquid pig iron to create steel. Puddling was introduced during the 1700s, where molten pig iron was stirred while exposed to the air, to remove the carbon by oxidation. In 1858, Sir Henry Bessemer developed a process of steel-making by blowing hot air through liquid pig iron to reduce the carbon content. The Bessemer process was able to produce the first large scale manufacture of steel. Once the Bessemer process began to gain widespread use, other alloys of steel began to follow. Mangalloy, an alloy of steel and manganese exhibiting extreme hardness and toughness, was one of the first alloy steels, and was created by Robert Hadfield in 1882.
Precipitation-hardening alloys
In 1906, precipitation hardening alloys were discovered by Alfred Wilm. Precipitation hardening alloys, such as certain alloys of aluminium, titanium, and copper, are heat-treatable alloys that soften when quenched (cooled quickly), and then harden over time. After quenching a ternary alloy of aluminium, copper, and magnesium, Wilm discovered that the alloy increased in hardness when left to age at room temperature. Although an explanation for the phenomenon was not provided until 1919, duralumin was one of the first "age hardening" alloys to be used, and was soon followed by many others. Because they often exhibit a combination of high strength and low weight, these alloys became widely used in many forms of industry, including the construction of modern aircraft.Jacobs, M. H. Precipitation Hardening. University of Birmingham. TALAT Lecture 1204. slideshare.net
See also
CALPHAD
Ideal mixture
List of alloys
Arsenal F.C.
Arsenal Football Club is an English professional football club based in Highbury, London, that plays in the Premier League, the top flight of English football. The club has won 12 FA Cups, a joint record, 13 League titles, two League Cups, 14 FA Community Shields, one UEFA Cup Winners' Cup and one Inter-Cities Fairs Cup.
Arsenal was the first club from the South of England to join The Football League, in 1893. They entered the First Division in 1904, and have since accumulated the second-most points in the competition. Relegated only once, in 1913, they hold the longest continuous streak in the top division. In the 1930s, Arsenal won five League Championships and two FA Cups, and another FA Cup and two Championships after the war. In 1970–71, they won their first League and FA Cup Double. Between 1989 and 2005, they won five League titles and five FA Cups, including two more Doubles. They completed the 20th century with the highest average league position.
Herbert Chapman won Arsenal's first national trophies, but died prematurely. He helped introduce the WM formation, floodlights, and shirt numbers, and added the white sleeves and brighter red to Arsenal's kit. Arsène Wenger has been the longest-serving manager and has won the most trophies. His teams set several English records: the longest win streak, the longest unbeaten run, and the only unbeaten 38-match league season.
In 1886, Woolwich munitions workers founded the club as Dial Square. In 1913, the club crossed the city to Arsenal Stadium in Highbury. They became Tottenham Hotspur's nearest club, commencing the North London derby. In 2006, they moved down the road to the Emirates Stadium. Arsenal earned €435.5m in 2014–15, with the Emirates Stadium generating the highest matchday revenue in world football. Based on social media activity from 2014–15, Arsenal's fanbase is the fifth largest in the world. In 2016, Forbes estimated the club was the second most valuable in England, worth $2.0 billion.
History
1886–1919: Changing names
[Image: Dial Square: the workplace of Arsenal's founding fathers, and the club's original eponym]
[Image: Royal Arsenal squad in 1888. Original captain, David Danskin, sits on the right of the bench.]
On 1 December 1886, munitions workers in Woolwich, now South East London, formed Arsenal as Dial Square, with David Danskin as their first captain. (Woolwich and Plumstead were officially part of Kent until the creation of the County of London in 1889. Bernard Joy, in Forward, Arsenal!, claims Danskin was captain from the founding; he was made official captain the next month.) Named after the heart of the Royal Arsenal complex, they took the name of the whole complex a month later. Royal Arsenal F.C.'s first home was Plumstead Common, though they spent most of their time in South East London playing on the other side of Plumstead, at the Manor Ground. Royal Arsenal won Arsenal's first trophies in 1890 and 1891, and these were the only football association trophies Arsenal won during their time in South East London.
Royal Arsenal renamed themselves for a second time upon becoming a limited liability company in 1893. They registered their new name, Woolwich Arsenal, with The Football League when the club joined the League later that year. Woolwich Arsenal was the first southern member of The Football League, starting out in the Second Division and winning promotion to the First Division in 1904. Falling attendances, due to financial difficulties among the munitions workers and the arrival of more accessible football clubs elsewhere in the city, brought the club close to bankruptcy by 1910. Businessmen Henry Norris and William Hall took the club over, and sought to move them elsewhere.
In 1913, soon after relegation back to the Second Division, Woolwich Arsenal moved to the new Arsenal Stadium in Highbury, North London. This saw their third change of name: the following year, they reduced Woolwich Arsenal to simply The Arsenal. In 1919, The Football League voted to promote The Arsenal, instead of relegated local rivals Tottenham Hotspur, into the newly enlarged First Division, despite the club finishing only sixth in the Second Division's last pre-war season of 1914–15. Some books have speculated that the club won this election to the First Division by dubious means: it has been alleged that Arsenal's promotion, on historical grounds rather than merit, was thanks to underhand actions by Norris, who was chairman of the club at the time (see History of Arsenal F.C. (1886–1966) for more details). This speculation ranges from political machinations to outright bribery, but no evidence of any wrongdoing has ever been found. A brief account is given in Arsenal 125 Years in the Making: The Official Illustrated History 1886–2011, and a more detailed one in Rebels for the Cause: The Alternative History of Arsenal Football Club. Later that year, The Arsenal started dropping "The" in official documents, gradually shifting its name for the final time towards Arsenal, as it is generally known today.
1919–1953: The Bank of England Club
[Image: Chart showing Arsenal's league positions since admission to The Football League in 1893]
With a new home and First Division football, attendances were more than double those at the Manor Ground, and Arsenal's budget grew rapidly. Their location and record-breaking salary offer lured star Huddersfield Town manager Herbert Chapman in 1925. Over the next five years, Chapman built a new Arsenal. He appointed enduring new trainer Tom Whittaker, implemented Charlie Buchan's new twist on the nascent WM formation, captured young players like Cliff Bastin and Eddie Hapgood, and lavished Highbury's income on stars like David Jack and Alex James. With record-breaking spending and gate receipts, Arsenal quickly became known as the Bank of England club.
[Image: Highbury's Art Deco east facade]
Transformed, Chapman's Arsenal claimed their first national trophy, the FA Cup, in 1930. Two League Championships followed, in 1930–31 and 1932–33. Chapman also presided over multiple off-the-pitch changes: white sleeves and shirt numbers were added to the kit (the new shirts are exhibited in The Arsenal Shirt: Iconic Match Worn Shirts from the History of the Gunners; the use of shirt numbers was initially trialled by Chelsea F.C.); a Tube station was named after the club; and the first of two opulent, Art Deco stands was completed, with some of the first floodlights in English football. Suddenly, in the middle of the 1933–34 season, Chapman died of pneumonia. His work was left to Joe Shaw and George Allison, who saw out a hat-trick with the 1933–34 and 1934–35 titles, and then won the 1936 FA Cup and 1937–38 title.
World War II meant The Football League was suspended for seven years, but Arsenal returned to win it in the second post-war season, 1947–48. This was Tom Whittaker's first season as manager, after his promotion to succeed Allison, and with it the club equalled the record number of league championships. They won a third FA Cup in 1950, and then won a record-breaking seventh championship in 1952–53. However, the war had taken its toll on Arsenal. The club had had more players killed than any other top flight club, and debt from reconstructing the North Bank Stand drained Arsenal's resources.
1953–1986: The long sleep, Mee and Neill
Arsenal were not to win the League or the FA Cup for another 18 years. The '53 Champions squad was ageing, and the club failed to attract strong enough replacements. Although Arsenal were competitive during these years, their fortunes had waned; the club spent most of the 1950s and 1960s in mid-table mediocrity. Even former England captain Billy Wright could not bring the club any success as manager, in a stint between 1962 and 1966.
[Image: Bertie Mee, 1972]
Arsenal tentatively appointed club physiotherapist Bertie Mee as acting manager in 1966. With new assistant Don Howe and new players such as Bob McNab and George Graham, Mee led Arsenal to their first League Cup finals, in 1967–68 and 1968–69. The next season saw a breakthrough: Arsenal's first competitive European trophy, the 1969–70 Inter-Cities Fairs Cup. The season after brought an even greater triumph: Arsenal's first League and FA Cup double, and a new record number of league championships. This marked a premature high point of the decade; the Double-winning side was soon broken up and the rest of the decade was characterised by a series of near misses, starting with Arsenal finishing as FA Cup runners-up in 1972, and First Division runners-up in 1972–73.
Former player Terry Neill succeeded Mee in 1976. At the age of 34, he became the youngest Arsenal manager to date. With new signings like Malcolm Macdonald and Pat Jennings, and a crop of talent in the side such as Liam Brady and Frank Stapleton, the club reached a trio of FA Cup finals (1978, 1979 and 1980), and lost the 1980 European Cup Winners' Cup Final on penalties. The club's only trophy during this time was a last-minute 3–2 victory over Manchester United in the 1979 FA Cup Final, widely regarded as a classic. (A 2005 poll of English football fans rated the 1979 FA Cup Final the 15th greatest game of all time.)
1986–present: Graham to Wenger
[Image: Tony Adams statue and the Emirates Stadium]
One of Bertie Mee's double winners, George Graham, returned as manager in 1986. Arsenal won their first League Cup in 1987, Graham's first season in charge. By 1988, new signings Nigel Winterburn, Lee Dixon and Steve Bould had joined the club to complete the "famous Back Four" led by existing player Tony Adams. (Martin Keown was the 'fifth' member of the Back Four, but did not play for the club between 1986 and 1993.) They immediately won the 1988–89 Football League title, snatched with a last-minute goal in the final game of the season against fellow title challengers Liverpool. Graham's Arsenal won another title in 1990–91, losing only one match, won the FA Cup and League Cup double in 1993, and the European Cup Winners' Cup in 1994. However, Graham's reputation was tarnished when he was found to have taken kickbacks from agent Rune Hauge for signing certain players, and he was dismissed in 1995. (Graham was banned for a year by the Football Association for his involvement in the scandal after he admitted he had received an "unsolicited gift" from Hauge.) His permanent replacement, Bruce Rioch, lasted for only one season, leaving the club after a dispute with the board of directors.
[Image: Arsenal's players and fans celebrate their 2004 League title win with an open-top bus parade.]
The club metamorphosed during the long tenure of manager Arsène Wenger, appointed in 1996. New, attacking football, an overhaul of dietary and fitness practices, and efficiency with money have defined his reign. (For details of the changes, see "Chapter 1: What's in a name?" in Arsene Wenger: The Inside Story of Arsenal Under Wenger. Regression analyses on wage bills, on transfer spending, and on both, as well as a bootstrapping approach for the period 2004–09, all indicate strong league performance across the Wenger period, given Arsenal's footballing outlays.) Accumulating key players from Wenger's homeland, such as Patrick Vieira and Thierry Henry, Arsenal won a second League and Cup double in 1997–98 and a third in 2001–02. In addition, the club reached the final of the 1999–2000 UEFA Cup, were victorious in the 2003 and 2005 FA Cups, and won the Premier League in 2003–04 without losing a single match, an achievement which earned the side the nickname "The Invincibles". This latter feat came within a run of 49 league matches unbeaten from 7 May 2003 to 24 October 2004, a national record.
Arsenal finished in either first or second place in the league in eight of Wenger's first nine seasons at the club, although on no occasion were they able to retain the title. The club had never progressed beyond the quarter-finals of the Champions League until 2005–06; in that season they became the first club from London in the competition's fifty-year history to reach the final, in which they were beaten 2–1 by Barcelona. In July 2006, they moved into the Emirates Stadium, after 93 years at Highbury. Arsenal reached the final of the 2007 and 2011 League Cups, losing 2–1 to Chelsea and Birmingham City respectively.
The club did not win a major trophy after the 2005 FA Cup until 17 May 2014, when Arsenal beat Hull City in the 2014 FA Cup Final, coming back from a 2–0 deficit to win the match 3–2. This qualified them for the 2014 FA Community Shield, where they played Premier League champions Manchester City and recorded a resounding 3–0 win, their second trophy in three months. Nine months after their Community Shield triumph, Arsenal appeared in the FA Cup final for the second year in a row, beating Aston Villa 4–0 and becoming the most successful club in the tournament's history with 12 titles. On 2 August 2015, Arsenal beat Chelsea 1–0 at Wembley Stadium to retain the Community Shield and earn their 14th Community Shield title.
Crest
[Image: Arsenal's first crest, from 1888]
Unveiled in 1888, Royal Arsenal's first crest featured three cannons viewed from above, pointing northwards, similar to the coat of arms of the Metropolitan Borough of Woolwich (nowadays transferred to the coat of arms of the Royal Borough of Greenwich). These can sometimes be mistaken for chimneys, but the presence of a carved lion's head and a cascabel on each are clear indicators that they are cannons. This was dropped after the move to Highbury in 1913, only to be reinstated in 1922, when the club adopted a crest featuring a single cannon, pointing eastwards, with the club's nickname, The Gunners, inscribed alongside it; this crest only lasted until 1925, when the cannon was reversed to point westward and its barrel slimmed down.
In 1949, the club unveiled a modernised crest featuring the same style of cannon below the club's name, set in blackletter, and above the coat of arms of the Metropolitan Borough of Islington and a scroll inscribed with the club's newly adopted Latin motto, Victoria Concordia Crescit - "victory comes from harmony" – coined by the club's programme editor Harry Homer. For the first time, the crest was rendered in colour, which varied slightly over the crest's lifespan, finally becoming red, gold and green. Because of the numerous revisions of the crest, Arsenal were unable to copyright it. Although the club had managed to register the crest as a trademark, and had fought (and eventually won) a long legal battle with a local street trader who sold "unofficial" Arsenal merchandise, Arsenal eventually sought a more comprehensive legal protection. Therefore, in 2002 they introduced a new crest featuring more modern curved lines and a simplified style, which was copyrightable. The cannon once again faces east and the club's name is written in a sans-serif typeface above the cannon. Green was replaced by dark blue. The new crest was criticised by some supporters; the Arsenal Independent Supporters' Association claimed that the club had ignored much of Arsenal's history and tradition with such a radical modern design, and that fans had not been properly consulted on the issue.
[Image: Arsenal's crest used until 2002]
Until the 1960s, a badge was worn on the playing shirt only for high-profile matches such as FA Cup finals, usually in the form of a monogram of the club's initials in red on a white background.
The monogram theme was developed into an Art Deco-style badge on which the letters A and C framed a football rather than the letter F, the whole set within a hexagonal border. This early example of a corporate logo, introduced as part of Herbert Chapman's rebranding of the club in the 1930s, was used not only on Cup Final shirts but as a design feature throughout Highbury Stadium, including above the main entrance and inlaid in the floors. From 1967, a white cannon was regularly worn on the shirts, until replaced by the club crest, sometimes with the addition of the nickname "The Gunners", in the 1990s.
In the 2011–12 season, Arsenal celebrated their 125th anniversary. The celebrations included a modified version of the current crest worn on their jerseys for the season. The crest was all white, surrounded by 15 oak leaves to the right and 15 laurel leaves to the left. The oak leaves represent the 15 founding members of the club who met at the Royal Oak pub. The 15 laurel leaves represent the design detail on the six pence pieces paid by the founding fathers to establish the club; the laurel leaves also represent strength. To complete the crest, 1886 and 2011 are shown on either side of the motto "Forward" at the bottom of the crest.
Colours
For much of Arsenal's history, their home colours have been bright red shirts with white sleeves and white shorts, though this has not always been the case. The choice of red is in recognition of a charitable donation from Nottingham Forest, soon after Arsenal's foundation in 1886. Two of Dial Square's founding members, Fred Beardsley and Morris Bates, were former Forest players who had moved to Woolwich for work. As they put together the first team in the area, no kit could be found, so Beardsley and Bates wrote home for help and received a set of kit and a ball. The shirt was redcurrant, a dark shade of red, and was worn with white shorts and socks with blue and white hoops.
[Image: Arsenal's trademark yellow and blue away strip, depicted here in a Champions League game against Ludogorets Razgrad in 2016.]
In 1933, Herbert Chapman, wanting his players to be more distinctly dressed, updated the kit, adding white sleeves and changing the shade to a brighter pillar box red. Two possibilities have been suggested for the origin of the white sleeves. One story reports that Chapman noticed a supporter in the stands wearing a red sleeveless sweater over a white shirt; another was that he was inspired by a similar outfit worn by the cartoonist Tom Webster, with whom Chapman played golf.
Regardless of which story is true, the red and white shirts have come to define Arsenal and the team have worn the combination ever since, aside from two seasons. The first was 1966–67, when Arsenal wore all-red shirts; this proved unpopular and the white sleeves returned the following season. The second was 2005–06, the last season that Arsenal played at Highbury, when the team wore commemorative redcurrant shirts similar to those worn in 1913, their first season in the stadium; the club reverted to their normal colours at the start of the next season. In the 2008–09 season, Arsenal replaced the traditional all-white sleeves with red sleeves with a broad white stripe.
Arsenal's home colours have been the inspiration for at least three other clubs. In 1909, Sparta Prague adopted a dark red kit like the one Arsenal wore at the time; in 1938, Hibernian adopted the design of the Arsenal shirt sleeves in their own green and white strip. In 1920, Sporting Clube de Braga's manager returned from a game at Highbury and changed his team's green kit to a duplicate of Arsenal's red with white sleeves and shorts, giving rise to the team's nickname of Os Arsenalistas. These teams still wear those designs to this day.
For many years Arsenal's away colours were white shirts and either black or white shorts. In the 1969–70 season, Arsenal introduced an away kit of yellow shirts with blue shorts. This kit was worn in the 1971 FA Cup Final as Arsenal beat Liverpool to secure the double for the first time in their history. Arsenal reached the FA Cup final again the following year wearing the red and white home strip and were beaten by Leeds United. Arsenal then competed in three consecutive FA Cup finals between 1978 and 1980 wearing their "lucky" yellow and blue strip, which remained the club's away strip until the release of a green and navy away kit in 1982–83. The following season, Arsenal returned to the yellow and blue scheme, albeit with a darker shade of blue than before.
When Nike took over from Adidas as Arsenal's kit provider in 1994, Arsenal's away colours were again changed to two-tone blue shirts and shorts. Since the advent of the lucrative replica kit market, the away kits have been changed regularly, with Arsenal usually releasing both away and third choice kits. During this period the designs have been either all blue designs, or variations on the traditional yellow and blue, such as the metallic gold and navy strip used in the 2001–02 season, the yellow and dark grey used from 2005 to 2007, and the yellow and maroon of 2010 to 2013.
As of 2009, the away kit is changed every season, and the outgoing away kit becomes the third-choice kit if a new home kit is being introduced in the same year.
Kit manufacturers and shirt sponsors
Arsenal's shirts have been made by manufacturers including Bukta (from the 1930s until the early 1970s), Umbro (from the 1970s until 1986), Adidas (1986–1994), Nike (1994–2014), and Puma (from 2014). Like those of most other major football clubs, Arsenal's shirts have featured sponsors' logos since the 1980s; sponsors include JVC (1982–1999), Sega (1999–2002), O2 (2002–2006), and Emirates (from 2006).
Stadiums
[Image: Manor Ground, Woolwich Arsenal vs. Everton F.C.]
For most of their time in south-east London, Arsenal played at the Manor Ground in Plumstead, apart from a three-year period at the nearby Invicta Ground between 1890 and 1893. The Manor Ground was initially just a field, until the club installed stands and terracing for their first Football League match in September 1893. They played their home games there for the next twenty years (with two exceptions in the 1894–95 season), until the move to north London in 1913.
Widely referred to as Highbury, Arsenal Stadium was the club's home from September 1913 until May 2006. The original stadium was designed by the renowned football architect Archibald Leitch, and had a design common to many football grounds in the UK at the time, with a single covered stand and three open-air banks of terracing. The entire stadium was given a massive overhaul in the 1930s: new Art Deco West and East stands were constructed, opening in 1932 and 1936 respectively, and a roof was added to the North Bank terrace, which was bombed during the Second World War and not restored until 1954.
Highbury could hold more than 60,000 spectators at its peak, and had a capacity of 57,000 until the early 1990s. The Taylor Report and Premier League regulations obliged Arsenal to convert Highbury to an all-seater stadium in time for the 1993–94 season, thus reducing the capacity to 38,419 seated spectators. This capacity had to be reduced further during Champions League matches to accommodate additional advertising boards, so much so that for two seasons, from 1998 to 2000, Arsenal played Champions League home matches at Wembley, which could house more than 70,000 spectators.
[Image: The North Bank Stand, Arsenal Stadium, Highbury]
Expansion of Highbury was restricted because the East Stand had been designated as a Grade II listed building and the other three stands were close to residential properties. These limitations prevented the club from maximising matchday revenue during the 1990s and first decade of the 21st century, putting them in danger of being left behind in the football boom of that time.
After considering various options, in 2000 Arsenal proposed building a new 60,361-capacity stadium at Ashburton Grove, since named the Emirates Stadium, about 500 metres south-west of Highbury. The project was initially delayed by red tape and rising costs, and construction was completed in July 2006, in time for the start of the 2006–07 season. The stadium was named after its sponsor, the airline Emirates, with whom the club signed the largest sponsorship deal in English football history, worth around £100 million. Some fans referred to the ground as Ashburton Grove, or the Grove, as they did not agree with corporate sponsorship of stadium names. The stadium will be officially known as Emirates Stadium until at least 2028, and the airline will be the club's shirt sponsor until the end of the 2018–19 season. From the start of the 2010–11 season, the stands of the stadium have been officially known as the North Bank, East Stand, West Stand and Clock End. (Emirates Stadium stands to be renamed, Arsenal FC, 19 July 2010.)
Arsenal's players train at the Shenley Training Centre in Hertfordshire, a purpose-built facility which opened in 1999. Before that the club used facilities on a nearby site owned by the University College of London Students' Union. Until 1961 they had trained at Highbury. Arsenal's Academy under-18 teams play their home matches at Shenley, while the reserves play their games at Meadow Park, which is also the home of Boreham Wood F.C.
Supporters
[Image: Arsenal against rivals Tottenham, known as the North London derby, in November 2010]
Arsenal fans often refer to themselves as "Gooners", the name derived from the team's nickname, "The Gunners". The fanbase is large and generally loyal, and virtually all home matches sell out; in 2007–08 Arsenal had the second-highest average League attendance for an English club (60,070, which was 99.5% of available capacity), and, as of 2015, the third-highest all-time average attendance. (Some pre-war attendance figures used by this source were estimates and may not be entirely accurate.) Arsenal have the seventh-highest average attendance among European football clubs, behind only Borussia Dortmund, FC Barcelona, Manchester United, Real Madrid, Bayern Munich, and Schalke. The club's location, adjoining wealthy areas such as Canonbury and Barnsbury, mixed areas such as Islington, Holloway, Highbury, and the adjacent London Borough of Camden, and largely working-class areas such as Finsbury Park and Stoke Newington, has meant that Arsenal's supporters have come from a variety of social classes.
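As a quick sanity check of the attendance figure above, the stated 99.5% utilisation and the 60,070 average imply a ground capacity close to the 60,361 quoted for the Emirates Stadium. The short sketch below only reproduces that arithmetic; the derived capacity is an estimate, not an official figure.

```python
# Back-of-the-envelope check: average attendance and stated utilisation
# imply a capacity close to the Emirates Stadium figure quoted elsewhere in this article.
average_attendance = 60_070   # 2007-08 average League attendance (from the text)
utilisation = 0.995           # 99.5% of available capacity (from the text)

implied_capacity = average_attendance / utilisation
print(f"Implied capacity: {implied_capacity:,.0f} seats")  # roughly 60,372
```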
[Image: Arsenal supporters]
Like all major English football clubs, Arsenal have a number of domestic supporters' clubs, including the Arsenal Football Supporters' Club, which works closely with the club, and the Arsenal Independent Supporters' Association, which maintains a more independent line. The Arsenal Supporters' Trust promotes greater participation in ownership of the club by fans. The club's supporters also publish fanzines such as The Gooner, Gunflash and the satirical Up The Arse!. In addition to the usual English football chants, supporters sing "One-Nil to the Arsenal" (to the tune of "Go West").
There have always been Arsenal supporters outside London, and since the advent of satellite television, a supporter's attachment to a football club has become less dependent on geography. Consequently, Arsenal have a significant number of fans from beyond London and all over the world; in 2007, 24 UK, 37 Irish and 49 other overseas supporters clubs were affiliated with the club. A 2011 report by SPORT+MARKT estimated Arsenal's global fanbase at 113 million. The club's social media activity was the fifth highest in world football during the 2014–15 season.
Arsenal's longest-running and deepest rivalry is with their nearest major neighbours, Tottenham Hotspur; matches between the two are referred to as North London derbies. Other rivalries within London include those with Chelsea, Fulham and West Ham United. In addition, Arsenal and Manchester United developed a strong on-pitch rivalry in the late 1980s, which intensified in recent years when both clubs were competing for the Premier League title – so much so that a 2003 online poll by the Football Fans Census listed Manchester United as Arsenal's biggest rivals, followed by Tottenham and Chelsea. A 2008 poll listed the Tottenham rivalry as more important.
Ownership and finances
The largest shareholder on the Arsenal board is American sports tycoon Stan Kroenke. Kroenke first launched a bid for the club in April 2007, and faced competition for shares from Red and White Securities, which acquired its first shares off David Dein in August 2007. Red & White Securities was co-owned by Russian billionaire Alisher Usmanov and Iranian London-based financier Farhad Moshiri, though Usmanov bought Moshiri's stake in 2016. Kroenke came close to the 30% takeover threshold in November 2009, when he increased his holding to 18,594 shares (29.9%). In April 2011, Kroenke achieved a full takeover by purchasing the shareholdings of Nina Bracewell-Smith and Danny Fiszman, taking his shareholding to 62.89%. As of June 2015, Kroenke owns 41,698 shares (67.02%) and Red & White Securities own 18,695 shares (30.04%). Ivan Gazidis has been the club's Chief Executive since 2009.
Arsenal's parent company, Arsenal Holdings plc, operates as a non-quoted public limited company, whose ownership is considerably different from that of other football clubs. Only 62,217 shares in Arsenal have been issued, and they are not traded on a public exchange such as the FTSE or AIM; instead, they are traded relatively infrequently on the ICAP Securities and Derivatives Exchange, a specialist market. On 10 March 2016, a single share in Arsenal had a mid price of £15,670, which sets the club's market capitalisation at approximately £975m. Most football clubs are not listed on an exchange, which makes direct comparisons of their values difficult. Consultants Brand Finance valued the club's brand and intangible assets at $703m in 2015, and consider Arsenal an AAA global brand. Business magazine Forbes valued Arsenal as a whole at $2.0 billion (£1.4 billion) in 2016, ranked second in English football. Research by the Henley Business School also ranked Arsenal second in English football, modelling the club's value at £1.118 billion in 2015.
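A minimal sketch of the market-capitalisation arithmetic behind the approximately £975m valuation quoted above, using only the share count and mid price given in the text:

```python
# Market capitalisation = shares issued x mid price per share.
shares_issued = 62_217    # total shares issued by Arsenal Holdings plc (from the text)
mid_price_gbp = 15_670    # mid price of one share on 10 March 2016 (from the text)

market_cap_gbp = shares_issued * mid_price_gbp
print(f"Market capitalisation: £{market_cap_gbp:,}")          # £974,940,390
print(f"Approximately £{market_cap_gbp / 1e6:,.0f} million")  # ~£975m, as stated
```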
Arsenal's financial results for the 2014–15 season show group revenue of £344.5m, with a profit before tax of £24.7m. The footballing core of the business showed a revenue of £329.3m. The Deloitte Football Money League is a publication that homogenizes and compares clubs' annual revenue. They put Arsenal's footballing revenue at £331.3m (€435.5m), ranking Arsenal seventh among world football clubs. Arsenal and Deloitte both list the match day revenue generated by the Emirates Stadium as £100.4m, more than any other football stadium in the world.
In popular culture
Arsenal have appeared in a number of media "firsts". On 22 January 1927, their match at Highbury against Sheffield United was the first English League match to be broadcast live on radio. (Firsts, Lasts & Onlys: Football, Paul Donnelley, Hamlyn, 2010.) A decade later, on 16 September 1937, an exhibition match between Arsenal's first team and the reserves was the first football match in the world to be televised live. Arsenal also featured in the first edition of the BBC's Match of the Day, which screened highlights of their match against Liverpool at Anfield on 22 August 1964. BSkyB's coverage of Arsenal's January 2010 match against Manchester United was the first live public broadcast of a sports event on 3D television.
As one of the most successful teams in the country, Arsenal have often featured when football is depicted in the arts in Britain. They formed the backdrop to one of the earliest football-related films, The Arsenal Stadium Mystery (1939). The film centres on a friendly match between Arsenal and an amateur side, one of whose players is poisoned while playing. Many Arsenal players appeared as themselves and manager George Allison was given a speaking part. More recently, the book Fever Pitch by Nick Hornby was an autobiographical account of Hornby's life and relationship with football and Arsenal in particular. Published in 1992, it formed part of the revival and rehabilitation of football in British society during the 1990s. The book was twice adapted for the cinema – the 1997 British film focuses on Arsenal's 1988–89 title win, and a 2005 American version features a fan of baseball's Boston Red Sox; coincidentally the ending was re-made to feature the 2004–05 season that ended in a similar fashion.
Arsenal have often been stereotyped as a defensive and "boring" side, especially during the 1970s and 1980s; many comedians, such as Eric Morecambe, made jokes about this at the team's expense. The theme was repeated in the 1997 film The Full Monty, in a scene where the lead actors move in a line and raise their hands, deliberately mimicking the Arsenal defence's offside trap, in an attempt to co-ordinate their striptease routine. Another film reference to the club's defence comes in the film Plunkett & Macleane, in which two characters are named Dixon and Winterburn after Arsenal's long-serving full backs – the right-sided Lee Dixon and the left-sided Nigel Winterburn.
The 1991 television comedy sketch show Harry Enfield & Chums featured a sketch with the characters Mr Cholmondly-Warner and Grayson in which the Arsenal team of 1933, featuring exaggerated parodies of fictitious amateur players, take on the Liverpool team of 1991. (Association Football – Harry Enfield – Mr Cholmondley-Warner, YouTube.) In the 2007 film Goal II: Living the Dream, there is a fictional UEFA Champions League final between Real Madrid and Arsenal.
In the community
In 1985, Arsenal founded a community scheme, "Arsenal in the Community", which offered sporting, social inclusion, educational and charitable projects. The club support a number of charitable causes directly and in 1992 established The Arsenal Charitable Trust, which by 2006 had raised more than £2 million for local causes. An ex-professional and celebrity football team associated with the club also raised money by playing charity matches.
In the 2009–10 season Arsenal announced that they had raised a record-breaking £818,897 for the Great Ormond Street Hospital Children's Charity, against an original target of £500,000. (Arsenal smash fundraising target for GOSH, Arsenal FC, 2 August 2010.)
Save the Children has been Arsenal's global charity partner since 2011, and the two have worked together on numerous projects to improve safety and well-being for vulnerable children in London and abroad. On 3 September 2016, The Arsenal Foundation donated £1m to build football pitches for children in London, Jordan and Somalia, funded by The Arsenal Foundation Legends match against Milan Glorie at the Emirates Stadium. http://blogs.savethechildren.org.uk/2016/09/arsenal-legends-raise-money-football-pitches-child-refugees/
Statistics and records
[Image: Thierry Henry is Arsenal's record goalscorer, with 228 goals in all competitions.]
Arsenal's tally of 13 League Championships is the third highest in English football, after Manchester United (20) and Liverpool (18), and they were the first club to reach a seventh and an eighth League Championship. As of May 2016, they are one of only six teams, the others being Manchester United, Blackburn Rovers, Chelsea, Manchester City and Leicester City, to have won the Premier League since its formation in 1992.
They hold the joint highest number of FA Cup trophies, 12. The club is one of only six clubs to have won the FA Cup twice in succession, in 2002 and 2003, and 2014 and 2015. Arsenal have achieved three League and FA Cup "Doubles" (in 1971, 1998 and 2002), a feat only previously achieved by Manchester United (in 1994, 1996 and 1999). They were the first side in English football to complete the FA Cup and League Cup double, in 1993. Arsenal were also the first London club to reach the final of the UEFA Champions League, in 2006, losing the final 2–1 to Barcelona.
Arsenal have one of the best top-flight records in history, having finished below fourteenth only seven times. The league wins and points they have accumulated are the second most in English top flight football. They have been in the top flight for the most consecutive seasons (90 as of 2015–16). Arsenal also have the highest average league finishing position for the 20th century, with an average league placement of 8.5.
Arsenal hold the record for the longest run of unbeaten League matches (49 between May 2003 and October 2004). This included all 38 matches of their title-winning 2003–04 season, when Arsenal became only the second club to finish a top-flight campaign unbeaten, after Preston North End (who played only 22 matches) in 1888–89. They also hold the record for the longest top flight win streak.
Arsenal set a Champions League record during the 2005–06 season by going ten matches without conceding a goal, beating the previous best of seven set by A.C. Milan. They went a record total stretch of 995 minutes without letting an opponent score; the streak ended in the final, when Samuel Eto'o scored a 76th-minute equaliser for Barcelona.
David O'Leary holds the record for Arsenal appearances, having played 722 first-team matches between 1975 and 1993. Fellow centre half and former captain Tony Adams comes second, having played 669 times. The record for a goalkeeper is held by David Seaman, with 564 appearances.
Thierry Henry is the club's top goalscorer with 228 goals in all competitions between 1999 and 2012, having surpassed Ian Wright's total of 185 in October 2005. Wright's record had stood since September 1997, when he overtook the longstanding total of 178 goals set by winger Cliff Bastin in 1939. Henry also holds the club record for goals scored in the League, with 175, a record that had been held by Bastin until February 2006.
Arsenal's record home attendance is 73,707, for a UEFA Champions League match against RC Lens on 25 November 1998 at Wembley Stadium, where the club formerly played home European matches because of the limits on Highbury's capacity. The record attendance for an Arsenal match at Highbury is 73,295, for a 0–0 draw against Sunderland on 9 March 1935, while that at Emirates Stadium is 60,161, for a 2–2 draw with Manchester United on 3 November 2007.
Players
First-team squad
Out on loan
UEFA Reserve squad
Former players
Current technical staff
[Image: Arsène Wenger, who has been managing Arsenal since 1996]
As of August 2014.
Manager: Arsène Wenger
Assistant manager: Steve Bould
First-team coaches: Boro Primorac, Neil Banfield
Goalkeeping coach: Gerry Peyton
Head of athletic performance enhancement: Shad Forsythe
Fitness coach: Tony Colbert
Head physiotherapist: Colin Lewin
Club doctor: Gary O'Driscoll
Kit manager: Vic Akers
Academy director: Andries Jonker
Under-21s coaches: Steve Gatting, Carl Laraman
Under-18s coach: Frans de Kat
Under-16s coach: Jan van Loon
[Image: Arsenal manager Arsène Wenger glares at Chelsea's Jose Mourinho during one of their many spats, in October 2014]
Managers
There have been eighteen permanent and five caretaker managers of Arsenal since the appointment of the club's first professional manager, Thomas Mitchell in 1897. The club's longest-serving manager, in terms of both length of tenure and number of games overseen, is Arsène Wenger, who was appointed in 1996. Wenger is also Arsenal's only manager from outside the United Kingdom. Two Arsenal managers have died in the job – Herbert Chapman and Tom Whittaker.
Honours
As of June 2016. (For a record of all matches played by Arsenal, see the AISA Arsenal History Society's line-ups database, listed first; see subsequent sources, including Forward, Arsenal!, for corroboration.) Seasons in bold are Double-winning seasons, when the club won the League and FA Cup or a cup double of the FA Cup and League Cup. The 2003–04 season was the only 38-match league season unbeaten in English football history; a special gold version of the Premier League trophy was commissioned and presented to the club the following season.
The Football League & Premier League
First Division (until 1992) and Premier League
Winners (13): 1930–31, 1932–33, 1933–34, 1934–35, 1937–38, 1947–48, 1952–53, 1970–71, 1988–89, 1990–91, 1997–98, 2001–02, 2003–04
League Cup
Winners (2): 1986–87, 1992–93
Southern Professional Floodlit Cup
Winners (1): 1958–59
Mercantile Credit Centenary Trophy
Winners (1): 1988–89
The Football Association
FA Cup
Winners (12): 1929–30, 1935–36, 1949–50, 1970–71, 1978–79, 1992–93, 1997–98, 2001–02, 2002–03, 2004–05, 2013–14, 2014–15 (shared record)
FA Community Shield (FA Charity Shield before 2002)
Winners (14): 1930, 1931, 1933, 1934, 1938, 1948, 1953, 1991 (shared), 1998, 1999, 2002, 2004, 2014, 2015
UEFA
UEFA Cup Winners' Cup (European Cup Winners' Cup before 1994)
Winners (1): 1993–94
Inter-Cities Fairs Cup
Winners (1): 1969–70
London Football Association
London Senior Cup
Winners (1): 1890–91
London Challenge Cup
Winners (11): 1921–22, 1923–24, 1930–31, 1933–34, 1935–36, 1953–54, 1954–55, 1957–58, 1961–62, 1962–63, 1969–70 (record)
London Charity Cup
Winners (1): 1889–90
Kent County Football Association
Kent Senior Cup
Winners (1): 1889–90
Arsenal Ladies
Arsenal Ladies are the women's football club affiliated to Arsenal. Founded in 1987 by Vic Akers, they turned semi-professional in 2002 and are managed by Clare Wheatley. Akers now assumes the role of Honorary President of Arsenal Ladies. Arsenal Ladies are the most successful team in English women's football. In the 2008–09 season, they won all three major English trophies – the FA Women's Premier League, FA Women's Cup and FA Women's Premier League Cup – and, as of 2009, were the only English side to have won the UEFA Women's Cup, having done so in the 2006–07 season as part of a unique quadruple. The men's and women's clubs are formally separate entities but have quite close ties; Arsenal Ladies are entitled to play once a season at the Emirates Stadium, though they usually play their home matches at Boreham Wood. The club has won 42 trophies in its 28-year history.
New Delhi
[Image: The city of New Delhi is located within the National Capital Territory of Delhi.]
New Delhi is the capital of India and one of Delhi city's 11 districts.
The foundation stone of the city was laid by George V, Emperor of India during the Delhi Durbar of 1911. It was designed by British architects, Sir Edwin Lutyens and Sir Herbert Baker. The new capital was inaugurated on 13 February 1931, by Viceroy and Governor-General of India Lord Irwin.
Although colloquially Delhi and New Delhi as names are used interchangeably to refer to the jurisdiction of the National Capital Territory (NCT) of Delhi, these are two distinct entities, and the latter is a small part of the former. The National Capital Region is a much larger entity comprising the entire National Capital Territory along with adjoining districts.
New Delhi has been selected as one of the hundred Indian cities to be developed as a smart city under Prime Minister of India Narendra Modi's flagship Smart Cities Mission.
History
Establishment
[Image: Lord Curzon and Lady Curzon arriving at the Delhi Durbar, 1903]
[Image: The Delhi Durbar of 1911, with King George V and Queen Mary seated upon the dais]
Calcutta (now Kolkata) was the capital of India during the British Raj until December 1911.
Delhi had served as the political and financial centre of several empires of ancient India and the Delhi Sultanate, most notably of the Mughal Empire from 1649 to 1857. During the early 1900s, a proposal was made to the British administration to shift the capital of the British Indian Empire, as India was officially named, from Calcutta on the east coast, to Delhi. The Government of British India felt that it would be logistically easier to administer India from Delhi in the centre of northern India.
The land for building the new city of Delhi was acquired under the Land Acquisition Act 1894.
On 12 December 1911, during the Delhi Durbar, George V, then Emperor of India, along with Queen Mary, his Consort, made the announcement that the capital of the Raj was to be shifted from Calcutta to Delhi, while laying the foundation stone for the Viceroy's residence in the Coronation Park, Kingsway Camp. (Coronation Park, Hindustan Times, 14 August 2008.)
The foundation stone of New Delhi was laid by King George V and Queen Mary at the site of the Delhi Durbar of 1911 at Kingsway Camp on 15 December 1911, during their imperial visit. Large parts of New Delhi were planned by Edwin Lutyens (Sir Edwin from 1918), who first visited Delhi in 1912, and Herbert Baker (Sir Herbert from 1926), both leading 20th-century British architects. The contract was given to Sobha Singh (later Sir Sobha Singh). The original plan called for its construction in Tughlaqabad, inside the Tughlaqabad fort, but this was given up because of the Delhi-Calcutta trunk line that passed through the fort. Construction began in earnest after World War I and was completed by 1931. The city that was later dubbed "Lutyens' Delhi" was inaugurated in ceremonies beginning on 10 February 1931 by Lord Irwin, the Viceroy. Lutyens designed the central administrative area of the city as a testament to Britain's imperial aspirations.
[Image: The 1931 stamp series celebrated the inauguration of New Delhi as the seat of government; the one rupee stamp shows George V with the "Secretariat Building" and Dominion Columns.]
Soon Lutyens started considering other places. Indeed, the Delhi Town Planning Committee, set up to plan the new imperial capital, with George Swinton as chairman and John A. Brodie and Lutyens as members, submitted reports for both North and South sites. However, the North site was rejected by the Viceroy when the cost of acquiring the necessary properties was found to be too high. The central axis of New Delhi, which today faces east at India Gate, was previously meant to be a north-south axis linking the Viceroy's House at one end with Paharganj at the other. Eventually, owing to space constraints and the presence of a large number of heritage sites on the North side, the committee settled on the South site. (Chishti, p. 225.) A site atop Raisina Hill, formerly Raisina Village, a Meo village, was chosen for the Rashtrapati Bhawan, then known as the Viceroy's House. The reason for this choice was that the hill lay directly opposite the Dinapanah citadel, which was also considered the site of Indraprastha, the ancient region of Delhi. Subsequently, the foundation stone was shifted from the site of the Delhi Durbar of 1911–1912, where the Coronation Pillar stood, and embedded in the walls of the forecourt of the Secretariat. The Rajpath, also known as King's Way, stretched from India Gate to the Rashtrapati Bhawan. The Secretariat building, the two blocks of which flank the Rashtrapati Bhawan and house ministries of the Government of India, and the Parliament House, both designed by Herbert Baker, are located at Sansad Marg and run parallel to the Rajpath.
In the south, land up to Safdarjung's Tomb was acquired to create what is today known as Lutyens' Bungalow Zone. (Chishti, p. 222.) Before construction could begin on the rocky ridge of Raisina Hill, a circular railway line around the Council House (now Parliament House), called the Imperial Delhi Railway, was built to transport construction material and workers for the next twenty years. The last stumbling block was the Agra-Delhi railway line that cut right through the site earmarked for the hexagonal All-India War Memorial (India Gate) and Kingsway (Rajpath), which was a problem because the Old Delhi Railway Station served the entire city at that time. The line was shifted to run along the Yamuna river, and it began operating in 1924. The New Delhi Railway Station opened in 1926 with a single platform at Ajmeri Gate near Paharganj and was completed in time for the city's inauguration in 1931. As construction of the Viceroy's House (the present Rashtrapati Bhavan), Central Secretariat, Parliament House, and All-India War Memorial (India Gate) was winding down, the building of a shopping district and a new plaza, Connaught Place, began in 1929, and was completed by 1933. Named after Prince Arthur, 1st Duke of Connaught (1850–1942), it was designed by Robert Tor Russell, chief architect to the Public Works Department (PWD).
After the capital of India moved to Delhi, a temporary secretariat building was constructed in a few months in 1912 in North Delhi. Most of the government offices of the new capital moved there from the 'Old Secretariat' in Old Delhi (the building now houses the Delhi Legislative Assembly), a decade before the new capital was inaugurated in 1931. Many employees were brought into the new capital from distant parts of India, including the Bengal Presidency and Madras Presidency. Subsequently, housing for them was developed around the Gole Market area in the 1920s. Built in the 1940s to house government employees, with bungalows for senior officials in the nearby Lodhi Estate area, Lodhi Colony near the historic Lodhi Gardens was the last residential area built by the British Raj.
Post-independence
[Image: Rashtrapati Bhavan, the home of the President of India]
After India gained independence in 1947, limited autonomy was conferred on New Delhi, which was administered by a Chief Commissioner appointed by the Government of India. In 1956, Delhi was converted into a union territory and eventually the Chief Commissioner was replaced by a Lieutenant Governor. The Constitution (Sixty-ninth Amendment) Act, 1991 declared the Union Territory of Delhi to be formally known as the National Capital Territory of Delhi. A system was introduced under which the elected government was given wide powers, excluding law and order, which remained with the central government. The actual enforcement of the legislation came in 1993.
The first major extension of New Delhi outside of Lutyens' Delhi came in the 1950s when the Central Public Works Department (CPWD) developed a large area of land southwest of Lutyens' Delhi to create the diplomatic enclave of Chanakyapuri, where land was allotted for embassies, chanceries, high commissions and residences of ambassadors, around a wide central vista, Shanti Path.
Geography
With a total area of , New Delhi forms a small part of the Delhi metropolitan area. Since the city is located on the Indo-Gangetic Plain, there is little difference in elevation across the city. New Delhi and surrounding areas were once a part of the Aravali Range; all that is left of those mountains is the Delhi Ridge, which is also called the Lungs of Delhi. While New Delhi lies on the floodplains of the Yamuna River, it is essentially a landlocked city. East of the river is the urban area of Shahdara. New Delhi falls under the seismic zone-IV, making it vulnerable to earthquakes.
Seismology
New Delhi lies on several fault lines and thus experiences frequent earthquakes, most of them of mild intensity. There has, however, been a spike in the number of earthquakes in the last six years, most notable being a 5.4 magnitude earthquake in 2015 with its epicentre in Nepal, a 4.7-magnitude earthquake on 25 November 2007, a 4.2-magnitude earthquake on 7 September 2011, a 5.2-magnitude earthquake on 5 March 2012, and a swarm of twelve earthquakes, including four of magnitudes 2.5, 2.8, 3.1, and 3.3, on 12 November 2013.
Climate
The climate of New Delhi is a monsoon-influenced humid subtropical climate (Köppen Cwa) with high variation between summer and winter in terms of both temperature and rainfall. The temperature varies from in summers to around in winters. The area's version of a humid subtropical climate is noticeably different from many other cities with this climate classification in that it features long and very hot summers, relatively dry and mild winters, a monsoonal period, and dust storms. Summers are long, extending from early April to October, with the monsoon season occurring in the middle of the summer. Winter starts in November and peaks in January. The annual mean temperature is around ; monthly daily mean temperatures range from approximately . New Delhi's highest temperature ever recorded is on June 28, 1883 while the lowest temperature ever recorded is on January 11, 1967, both of which are recorded at Palam Airport. The average annual rainfall is , most of which is during the monsoons in July and August.
Air quality
In Mercer's 2015 annual quality-of-living survey, New Delhi ranked 154th out of 230 cities due to poor air quality and pollution. The World Health Organization ranked New Delhi as the world's worst polluted city in 2014, among about 1,600 cities the organisation tracked around the world. In 2016, the United States Environmental Protection Agency listed New Delhi as the most polluted city on Earth.
[Image: Dense smog at Connaught Place, New Delhi]
In an attempt to curb air pollution in New Delhi, which gets worse during the winter, a temporary alternate-day travel scheme for cars, based on odd- and even-numbered license plates, was announced by the Delhi government in December 2015. In addition, trucks were to be allowed to enter India's capital only after 11 pm, two hours later than the existing restriction. The driving restriction scheme was planned to be implemented as a trial from 1 January 2016 for an initial period of 15 days. The restriction was in force between 8 am and 8 pm, and traffic was not restricted on Sundays. Public transportation service was increased during the restriction period.
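The restriction amounts to a simple parity rule: odd-numbered plates may drive on odd dates and even-numbered plates on even dates, between 8 am and 8 pm, with Sundays exempt. The sketch below only illustrates that rule as described in the text; any exemption categories used in the real scheme are not modelled, and the function name is purely for illustration.

```python
from datetime import datetime

def may_drive(plate_last_digit: int, when: datetime) -> bool:
    """Illustrative check of the odd-even trial rule as described above (not the official rules)."""
    if when.weekday() == 6:          # Sundays were not restricted
        return True
    if not (8 <= when.hour < 20):    # the restriction ran from 8 am to 8 pm
        return True
    # Odd plates on odd dates, even plates on even dates.
    return (plate_last_digit % 2) == (when.day % 2)

# An even-numbered plate during restricted hours, first on an odd date, then on an even date:
print(may_drive(4, datetime(2016, 1, 5, 10, 0)))   # False
print(may_drive(4, datetime(2016, 1, 6, 10, 0)))   # True
```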
On 16 December 2015, the Supreme Court of India mandated several restrictions on Delhi's transportation system to curb pollution. Among the measures, the court ordered to stop registrations of diesel cars and sport utility vehicles with an engine capacity of 2,000 cc and over until 31 March 2016. The court also ordered all taxis in the Delhi region to switch to compressed natural gas by 1 March 2016. Transportation vehicles that are more than 10 years old were banned from entering the capital.
Analysing real-time vehicle speed data from Uber Delhi revealed that during the odd-even program, average speeds went up by a statistically significant 5.4 per cent (2.8 standard deviations from normal). This means vehicles had less idling time in traffic and vehicle engines would run closer to minimum fuel consumption. "In bordering areas, PM 2.5 levels were recorded more than 400 (ug/m3) while in inner areas in Delhi, they were recorded between 150 and 210 on an average." However, the subcity of Dwarka, located in the southwest district, has a substantially lower level of air pollution. At the NSIT University campus, located in Sector 3, Dwarka, pollution levels were as low as 93 PPM.
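The "2.8 standard deviations from normal" figure is a z-score: the observed change in average speed is measured against the mean and spread of day-to-day speed changes under normal traffic. The sketch below only illustrates how such a z-score is computed; the baseline numbers are invented for the example and are not the Uber data referred to above.

```python
import statistics

# Hypothetical day-to-day changes in average city speed (%) on ordinary days; illustrative only.
baseline_changes = [0.3, -1.1, 0.8, -0.4, 1.2, -0.9, 0.5, -0.2, 1.0, -1.2]
observed_change = 5.4   # % speed increase reported during the odd-even trial (from the text)

mean = statistics.mean(baseline_changes)
spread = statistics.stdev(baseline_changes)
z_score = (observed_change - mean) / spread
print(f"z-score: {z_score:.1f} standard deviations above the baseline mean")
```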
[Image: 2015 air pollution in New Delhi (PM2.5 AQI)]
Demographics
New Delhi has a population of 249,998. Hindi is the most widely spoken language in New Delhi and the lingua franca of the city. English is primarily used as the formal language by business and government institutions. New Delhi has a literacy rate of 89.38% according to the 2011 census, the highest in Delhi.
Religion
thumb|200px|left|The Laxminarayan Temple is a famous Hindu temple in New Delhi.
thumb|150px|right|The Sacred Heart Cathedral is a Roman Catholic cathedral designed by British architect Henry Medd, based on Italian architecture.
thumb|200px|left|Gurudwara Bangla Sahib, a Sikh gurdwara in New Delhi.
Hinduism is the religion of 79.8% of New Delhi's population. There are also communities of Muslims (12.9%), Sikhs (5.4%), Jains (1.1%) and Christians (0.9%) in Delhi.Censusindiamaps.net Other religious groups (2.5%) include Parsis, Buddhists and Jews.
Government
thumb|The Secretariat Building houses Ministries of Defence, Finance, Home Affairs and External Affairs. It also houses the Prime Minister's office.
The national capital of India, New Delhi is jointly administered by the central Government of India and the local Government of Delhi; it is also the capital of the National Capital Territory (NCT) of Delhi.
The government structure of the New Delhi Municipal Council includes a chairperson, three members of New Delhi's Legislative Assembly, two members nominated by the Chief Minister of the NCT of Delhi and five members nominated by the central government.
The head of state of Delhi is the Lieutenant Governor of the Union Territory of Delhi, appointed by the President of India on the advice of the central government; the post is largely ceremonial, as the Chief Minister of the Union Territory of Delhi is the head of government and is vested with most of the executive powers. According to the Indian constitution, if a law passed by Delhi's legislative assembly is repugnant to any law passed by the Parliament of India, the law enacted by the parliament prevails over the law enacted by the assembly.
New Delhi is governed through a municipal government, known as the New Delhi Municipal Council (NDMC). Other urban areas of the metropolis of Delhi are administered by the Municipal Corporation of Delhi (MCD). However, the entire metropolis of Delhi is commonly known as New Delhi in contrast to Old Delhi.
Economy
thumb|left|400px |Connaught Place in Delhi is an important economic hub of the National Capital Region|alt=A view of a road at Connaught Place showing busy traffic
New Delhi is the largest commercial city in northern India. It has an estimated net State Domestic Product (FY 2010) of in nominal terms and ~ in PPP terms. The per capita income of Delhi was Rs. 230,000, the second highest in India after Goa. GSDP in Delhi at current prices for 2012–13 was estimated at Rs 3.88 trillion (short scale), against Rs 3.11 trillion (short scale) in 2011–12.http://www.delhi.gov.in/wps/wcm/connect/cdae30804f9d52d88385c7fb6b929e93/newpaper+clip.PDF?MOD=AJPERES&lmod=-1585547974&CACHEID=cdae30804f9d52d88385c7fb6b929e93
Connaught Place, one of North India's largest commercial and financial centres, is located in the northern part of New Delhi. Adjoining areas such as Barakhamba Road and ITO are also major commercial centres. The government and quasi-government sector has been the primary employer in New Delhi. The city's service sector has expanded due in part to the large skilled English-speaking workforce that has attracted many multinational companies. Key service industries include information technology, telecommunications, hotels, banking, media and tourism.
The 2011 World Wealth Report ranks economic activity in New Delhi at 39, but overall the capital is ranked at 37, above cities like Jakarta and Johannesburg. New Delhi shares with Beijing the top position as the most targeted emerging-markets retail destination among Asia-Pacific markets.
The Government of the National Capital Territory of Delhi does not release economic figures specifically for New Delhi but publishes an official economic report on the whole of Delhi annually. According to the Economic Survey of Delhi, the metropolis has a net State Domestic Product (SDP) of Rs. 830.85 billion (for the year 2004–05) and a per capita income of Rs. 53,976 ($1,200). In 2008–09 New Delhi had a per capita income of Rs. ($2,595). It grew by 16.2% to reach Rs. ($3,018) in the 2009–10 fiscal year. New Delhi's per capita GDP (at PPP) was $6,860 during the 2009–10 fiscal year, making it one of the richest cities in India. The tertiary sector contributes 78.4% of Delhi's gross SDP, followed by the secondary and primary sectors with 20.2% and 1.4% respectively.
The gross state domestic product (GSDP) of Delhi at current prices for the year 2011–12 has been estimated at Rs 3.13 trillion (short scale), an increase of 18.7 per cent over the previous fiscal year.
Culture
New Delhi is a cosmopolitan city due to the multi-ethnic and multi-cultural presence of the vast Indian bureaucracy and political system. The city's capital status has amplified the importance of national events and holidays. National events such as Republic Day, Independence Day and Gandhi Jayanti (Gandhi's birthday) are celebrated with great enthusiasm in New Delhi and the rest of India. On India's Independence Day (15 August) the Prime Minister of India addresses the nation from the Red Fort. Most Delhiites celebrate the day by flying kites, which are considered a symbol of freedom. The Republic Day Parade is a large cultural and military parade showcasing India's cultural diversity and military might.
Religious festivals include Diwali (the festival of light), Maha Shivaratri, Teej, Durga Puja, Mahavir Jayanti, Guru Nanak Jayanti, Holi, Lohri, Eid ul-Fitr, Eid ul-Adha, Raksha Bandhan, Christmas and Chhath Puja. The Qutub Festival is a cultural event during which performances of musicians and dancers from all over India are showcased at night, with the Qutub Minar as the chosen backdrop of the event. Other events such as the Kite Flying Festival, the International Mango Festival and Vasant Panchami (the Spring Festival) are held every year in Delhi.
There are also a number of Iglesia ni Cristo members, most of them Filipinos, along with some Indians who are married to members.
In 2007, the Japanese Buddhist organisation Nipponzan Myohoji decided to build a Peace Pagoda in the city containing Buddha relics. It was inaugurated by the current Dalai Lama.
Historic sites, museums and gardens
thumb|right|The National Museum in New Delhi is one of the largest museums in India.
New Delhi is home to several historic sites and museums. The National Museum began with an exhibition of Indian art and artefacts at the Royal Academy in London in the winter of 1947–48, which was subsequently shown at the Rashtrapati Bhawan in 1949 and went on to form the nucleus of a permanent National Museum. On 15 August 1949, the National Museum was formally inaugurated; it currently holds 200,000 works of art, of both Indian and foreign origin, covering over 5,000 years.
The India Gate, built in 1931, was inspired by the Arc de Triomphe in Paris. It is the national monument of India, commemorating the 90,000 soldiers of the Indian Army who lost their lives fighting for the British Raj in World War I and the Third Anglo-Afghan War.
The Rajpath, built on the model of the Champs-Élysées in Paris, is the ceremonial boulevard of the Republic of India in New Delhi. The annual Republic Day parade takes place here on 26 January.
thumb|right|The Rajghat, the final resting place of Mahatma Gandhi.
Gandhi Smriti in New Delhi is the location where Mahatma Gandhi spent the last 144 days of his life and was assassinated on 30 January 1948. Raj Ghat is the place where Mahatma Gandhi was cremated on 31 January 1948 after his assassination; his ashes were buried there, making it his final resting place beside the Yamuna River. The Raj Ghat, a large square platform of black marble, was designed by architect Vanu Bhuta.
Jantar Mantar located in Connaught Place was built by Maharaja Jai Singh II of Jaipur. It consists of 13 architectural astronomy instruments. The primary purpose of the observatory was to compile astronomical tables, and to predict the times and movements of the sun, moon and planets.
New Delhi is home to the Indira Gandhi Memorial Museum, National Gallery of Modern Art, National Museum of Natural History, National Rail Museum, National Handicrafts and Handlooms Museum, National Philatelic Museum, Nehru Planetarium, Shankar's International Dolls Museum, and Supreme Court of India Museum.
In the coming years, a new National War Memorial and Museum will be constructed in New Delhi for .
New Delhi is particularly renowned for its beautifully landscaped gardens that can look quite stunning in spring. The largest of these include Buddha Jayanti Park and the historic Lodi Gardens. In addition, there are the gardens in the Presidential Estate, the gardens along the Rajpath and India Gate, the gardens along Shanti Path, the Rose Garden, Nehru Park and the Railway Garden in Chanakya Puri. Also of note is the garden adjacent to the Jangpura Metro Station near the Defence Colony Flyover, as are the roundabout and neighbourhood gardens throughout the city.
Transport
Air
Indira Gandhi International Airport, situated to the southwest of Delhi, is the main gateway for the city's domestic and international civilian air traffic. In 2012–13, the airport was used by more than 35 million passengers,Indira Gandhi International Airport making it one of the busiest airports in South Asia. Terminal 3, which cost to construct between 2007 and 2010, handles an additional 37 million passengers annually.
The Delhi Flying Club, established in 1928 with two de Havilland Moth aircraft named Delhi and Roshanara, was based at Safdarjung Airport, which started operations in 1929, when it was Delhi's only airport and the second in India. The airport functioned until 2001; however, in January 2002 the government closed it to flying activities because of security concerns following the New York attacks of September 2001. Since then, the club has only carried out aircraft maintenance courses and is used for helicopter rides to Indira Gandhi International Airport for VIPs, including the president and the prime minister.
In 2010, Indira Gandhi International Airport (IGIA) was conferred the fourth-best-airport award in the world in the 15–25 million passengers category, and Best Improved Airport in the Asia-Pacific Region, by Airports Council International.ACI Airport Service Quality Awards 2009, Asia Pacific airports sweep top places in worldwide awards from the Wayback Machine The airport was rated the best airport in the world in the 25–40 million passengers category in 2015 by Airports Council International. Delhi Airport also won two awards, for Best Airport in Central Asia/India and Best Airport Staff in Central Asia/India, at the Skytrax World Airport Awards 2015.
Road
New Delhi has one of India's largest bus transport systems. Buses are operated by the state-owned Delhi Transport Corporation (DTC), which owns the world's largest fleet of compressed natural gas (CNG)-fuelled buses. Personal vehicles, especially cars, also form a major share of the vehicles plying on New Delhi's roads. New Delhi has the highest number of registered cars of any metropolitan city in India. Taxis and auto rickshaws also ply on New Delhi's roads in large numbers. New Delhi has one of the highest road densities in India.
Important Roads in New Delhi
Some roads and expressways serve as important pillars of New Delhi's road infrastructure:
Inner Ring Road is one of the most important "state highways" in New Delhi. It is a 51 km long circular road that connects important areas in New Delhi. Owing to more than two dozen grade separators and flyovers, the road is almost signal-free.
Outer Ring Road is another major artery in New Delhi that links far-flung areas of Delhi.
The Delhi Noida Direct Flyway (DND Flyway) is an eight-lane access-controlled tolled expressway which connects New Delhi and Delhi to Noida (an important satellite city of Uttar Pradesh). The acronym DND stands for "Delhi-Noida Direct".
The Delhi Gurgaon Expressway is a 28 km (17 mi) expressway connecting New Delhi to Gurgaon, an important satellite city of Haryana.
The Delhi Faridabad Skyway is a controlled-access tolled expressway which connects New Delhi to Faridabad, an important satellite city of Haryana.
National Highways passing through New Delhi
New Delhi is connected by road to the rest of India through National highways:
National Highway 19 (India) (old number: NH 2), commonly referred to as the Delhi–Kolkata Road, is a busy Indian National Highway that runs through the states of Delhi, Haryana, Uttar Pradesh, Bihar, Jharkhand, and West Bengal.
National Highway 44 (India) is a National Highway that connects Srinagar with Kanyakumari and passes through Delhi.
National Highway 48 (India) is a National Highway that connects New Delhi with Chennai.
National Highway 9 (India) is a National Highway that connects Malout in Punjab to Pithoragarh in Uttarakhand and passes through Delhi.
Railway
Station Name | Station Code | Railway Zone | Total Platforms
New Delhi | NDLS | Northern Railway | 16
Old Delhi | DLI | Northern Railway | 16
Hazrat Nizamuddin | NZM | Northern Railway | 7
Anand Vihar Terminal | ANVT | Northern Railway | 7
Delhi Sarai Rohilla | DEE | Northern Railway | 7
New Delhi is a major junction in the Indian railway network and is the headquarters of the Northern Railway. The five main railway stations are New Delhi railway station, Old Delhi, Hazrat Nizamuddin, Anand Vihar Terminal and Delhi Sarai Rohilla. The Delhi Metro, a mass rapid transit system built and operated by the Delhi Metro Rail Corporation (DMRC), serves many parts of Delhi and the neighbouring cities of Faridabad, Gurgaon, Noida and Ghaziabad. As of December 2016, the metro consists of six operational lines with a total length of and 160 stations. Several other lines are under construction and expected to be commissioned in 2017, adding another 150 km of length. It carries almost 3 million passengers every day. In addition to the Delhi Metro, a suburban railway, the Delhi Suburban Railway, also operates.
Metro
The Delhi Metro is a rapid transit system serving New Delhi, Delhi, Gurgaon, Faridabad, Noida, and Ghaziabad in the National Capital Region of India. Delhi Metro is the world's 12th-largest metro system in terms of length. It was India's first modern public transportation system, and it has revolutionised travel by providing a fast, reliable, safe, and comfortable means of transport. Presently, the Delhi Metro network consists of of track, with 160 stations along with six more stations of the Airport Express Link. The network has now crossed the boundaries of Delhi to reach Noida and Ghaziabad in Uttar Pradesh, and Gurgaon and Faridabad in Haryana. All stations have escalators, elevators, and tactile tiles to guide the visually impaired from station entrances to trains. It has a combination of elevated, at-grade, and underground lines, and uses both broad-gauge and standard-gauge rolling stock. Four types of rolling stock are used: Mitsubishi-ROTEM broad gauge, Bombardier MOVIA, Mitsubishi-ROTEM standard gauge, and CAF Beasain standard gauge. According to a study, the Delhi Metro has helped remove about 390,000 vehicles from the streets of Delhi.
The Delhi Metro is built and operated by the Delhi Metro Rail Corporation Limited (DMRC), a state-owned company with equal equity participation from the Government of India and the Government of the National Capital Territory of Delhi. The organisation is, however, under the administrative control of the Ministry of Urban Development, Government of India. Besides the construction and operation of the Delhi Metro, DMRC is also involved in the planning and implementation of metro rail, monorail and high-speed rail projects in India and provides consultancy services to other metro projects in the country as well as abroad. The Delhi Metro project was spearheaded by Padma Vibhushan E. Sreedharan, the managing director of DMRC, popularly known as the "Metro Man" of India. He famously resigned from DMRC, taking moral responsibility for a metro bridge collapse which took five lives. Sreedharan was awarded the prestigious Legion of Honour by the French Government for his contribution to the Delhi Metro.
Cityscape
thumb|upright|Rashtrapati Bhavan is the official residence of the President of India and is the largest residence of any head of state in the world.
Much of New Delhi, planned by the leading 20th-century British architect Edwin Lutyens, was laid out to be the central administrative area of the city as a testament to Britain's imperial pretensions. New Delhi is structured around two central promenades called the Rajpath and the Janpath. The Rajpath, or King's Way, stretches from the Rashtrapati Bhavan to the India Gate. The Janpath (Hindi: "Path of the People"), formerly Queen's Way, begins at Connaught Circus and cuts the Rajpath at right angles. Nineteen foreign embassies are located on the nearby Shantipath (Hindi: "Path of Peace"), making it the largest diplomatic enclave in India.
At the heart of the city is the magnificent Rashtrapati Bhavan (formerly known as Viceroy's House) which sits atop Raisina Hill. The Secretariat, which houses ministries of the Government of India, flanks out of the Rashtrapati Bhavan. The Parliament House, designed by Herbert Baker, is located at the Sansad Marg, which runs parallel to the Rajpath. Connaught Place is a large, circular commercial area in New Delhi, modelled after the Royal Crescent in England. Twelve separate roads lead out of the outer ring of Connaught Place, one of them being the Janpath.
Architecture
The New Delhi town plan, like its architecture, was chosen with one single chief consideration: to be a symbol of British power and supremacy. All other decisions were subordinate to this, and it was this framework that dictated the choice and application of symbology and influences from both Hindu and Islamic architecture.
It took about 20 years to build the city from 1911. Many elements of New Delhi's architecture borrow from indigenous sources; however, they fit into a British Classical/Palladian tradition. The presence of any indigenous features in the design was due to the persistence and urging of both the Viceroy Lord Hardinge and historians like E.B. Havell.
Sports
thumb|The 2010 Commonwealth Games opening ceremony in Jawaharlal Nehru Stadium. In the foreground is the aerostat
The city hosted the 2010 Commonwealth Games and annually hosts the Delhi Half Marathon foot-race. The city previously hosted the 1951 Asian Games and the 1982 Asian Games. New Delhi was interested in bidding for the 2019 Asian Games but was turned down by the government on 2 August 2010 amid allegations of corruption in the 2010 Commonwealth Games.
Major sporting venues in New Delhi include the Jawaharlal Nehru Stadium, Ambedkar Stadium, Indira Gandhi Indoor Stadium, Feroz Shah Kotla Ground, R.K. Khanna Tennis Complex, Dhyan Chand National Stadium and Siri Fort Sports Complex. There are also newer private grounds, such as Den and tiki-taka.
Club | Sport | League | Stadium | Span
Delhi Daredevils | Cricket | IPL | Feroz Shah Kotla Ground | 2008–present
Delhi Wizards | Field hockey | WSH | Dhyan Chand National Stadium | 2011–present
Delhi Waveriders | Field hockey | HIL | Dhyan Chand National Stadium | 2013–present
Delhi Acers | Badminton | PBL | DDA Badminton and Squash Stadium | 2015–present
Dabang Delhi | Kabaddi | PKL | Thyagaraj Sports Complex | 2014–present
Delhi Dynamos FC | Football | ISL | Jawaharlal Nehru Stadium | 2014–present
Indian Aces | Tennis | IPTL | Indira Gandhi Arena | 2014–present
Dilli Veer | Wrestling | PWL | K. D. Jadhav Wrestling Stadium | 2015–present
International relations and organisations
The city is home to numerous international organisations. The Asian and Pacific Centre for Transfer of Technology of the UNESCAP servicing the Asia-Pacific region is headquartered in New Delhi. New Delhi is home to most UN regional offices in India namely the UNDP, UNODC, UNESCO, UNICEF, WFP, UNV, UNCTAD, FAO, UNFPA, WHO, World Bank, ILO, IMF, UNIFEM, IFC and UNAIDS. UNHCR Representation in India is also located in the city.
New Delhi hosts 145 foreign embassies and high commissions.
Summits
New Delhi hosted the 7th NAM Summit in 1983, 4th BRICS Summit in 2012 and the IBSA Summit in 2015.
See also
Delhi
Delhi Tourism and Transportation Development Corporation
Faridabad
References
Bibliography
Johnson, David A. "A British Empire for the twentieth century: the inauguration of New Delhi, 1931," Urban History, Dec 2008, Vol. 35 Issue 3, pp 462–487
Ridley, Jane. "Edwin Lutyens, New Delhi, and the Architecture of Imperialism," Journal of Imperial & Commonwealth History, May 1998, Vol. 26 Issue 2, pp 67–83.
Sonne, Wolfgang. Representing the State: Capital City Planning in the Early Twentieth Century (2003) 367pp; compares New Delhi, Canberra, Washington & Berlin.
External links
New Delhi Government Portal
New Delhi Municipal Council
Detailed map of New Delhi
Official Website of Delhi Tourism
New Delhi Smart City Portal
Category:Capitals in Asia
Category:Indian Union Territory capitals
Category:Neighbourhoods of Delhi
Category:North India
Category:Planned capitals
Category:Cities and towns in New Delhi district
Category:Populated places established in 1911
Category:1911 establishments in British India
Category:1911 establishments in India | 51,585 | 2017-01 |
Translation | Translation is the communication of the meaning of a source-language text by means of an equivalent target-language text.The Oxford Companion to the English Language, Namit Bhatia, ed., 1992, pp. 1,051–54. While interpreting—the facilitating of oral or sign-language communication between users of different languages—antedates writing, translation began only after the appearance of written literature. There exist partial translations of the Sumerian Epic of Gilgamesh (ca. 2000 BCE) into Southwest Asian languages of the second millennium BCE.J.M. Cohen, "Translation", Encyclopedia Americana, 1986, vol. 27, p. 12.
Translators always risk inappropriate spill-over of source-language idiom and usage into the target-language translation. On the other hand, spill-overs have imported useful source-language calques and loanwords that have enriched the target languages. Indeed, translators have helped substantially to shape the languages into which they have translated.Christopher Kasparek, "The Translator's Endless Toil", The Polish Review, vol. XXVIII, no. 2, 1983, pp. 84-87.
Owing to the demands of business documentation consequent to the Industrial Revolution that began in the mid-18th century, some translation specialties have become formalized, with dedicated schools and professional associations.Andrew Wilson, Translators on Translating: Inside the Invisible Art, Vancouver, CCSP Press, 2009.
Because of the laboriousness of translation, since the 1940s engineers have sought to automate translation or to mechanically aid the human translator.W.J. Hutchins, Early Years in Machine Translation: Memoirs and Biographies of Pioneers, Amsterdam, John Benjamins, 2000. The rise of the Internet has fostered a world-wide market for translation services and has facilitated language localization.M. Snell-Hornby, The Turns of Translation Studies: New Paradigms or Shifting Viewpoints?, Philadelphia, John Benjamins, 2006, p. 133.
Translation studies systematically study the theory and practice of translation.Susan Bassnett, Translation studies, pp. 13-37.
Etymology
thumb|125px|left|Rosetta Stone, a secular icon for the art of translation."Rosetta Stone", The Columbia Encyclopedia, 5th ed., 1994, p. 2,361.
The English word "translation" derives from the Latin word translatio, which comes from trans, "across" + ferre, "to carry" or "to bring" (-latio in turn coming from latus, the past participle of ferre). Thus translatio is "a carrying across" or "a bringing across": in this case, of a text from one language to another.Christopher Kasparek, "The Translator's Endless Toil", p. 83.
The Germanic languagesExcept in the case of the Dutch equivalent, "vertaling"—a "re-language-ing": ver + talen = "to change the language". and some Slavic languages have calqued their words for the concept of "translation" from translatio.Christopher Kasparek, "The Translator's Endless Toil", p. 83.
The Romance languages have derived their words for the concept of "translation" from an alternative Latin word, traductio, itself derived from traducere ("to lead across" or "to bring across", from trans, "across" + ducere, "to lead" or "to bring"). The remaining Slavic languages have calqued their words for the concept of "translation" from this same alternative Latin word, traductio.Christopher Kasparek, "The Translator's Endless Toil", p. 83.
The Ancient Greek term for "translation", μετάφρασις (metaphrasis, "a speaking across"), has supplied English with "metaphrase" (a "literal," or "word-for-word," translation) — as contrasted with "paraphrase" ("a saying in other words", from παράφρασις, paraphrasis). "Metaphrase" corresponds, in one of the more recent terminologies, to "formal equivalence"; and "paraphrase", to "dynamic equivalence."Kasparek, "The Translator's Endless Toil", p. 84.
Strictly speaking, the concept of metaphrase — of "word-for-word translation" — is an imperfect concept, because a given word in a given language often carries more than one meaning; and because a similar given meaning may often be represented in a given language by more than one word. Nevertheless, "metaphrase" and "paraphrase" may be useful as ideal concepts that mark the extremes in the spectrum of possible approaches to translation."Ideal concepts" are useful as well in other fields, such as physics and chemistry, which include the concepts of perfectly solid bodies, perfectly rigid bodies, perfectly plastic bodies, perfectly black bodies, perfect crystals, perfect fluids, and perfect gases. Władysław Tatarkiewicz, On Perfection (first published in Polish in 1976 as O doskonałości); English translation by Christopher Kasparek subsequently serialized in 1979–1981 in Dialectics and Humanism: The Polish Philosophical Quarterly, and reprinted in Władysław Tatarkiewicz, On Perfection, Warsaw University Press, 1992.
Theories
Western theory
thumb|right|200px|John Dryden
Discussions of the theory and practice of translation reach back into antiquity and show remarkable continuities. The ancient Greeks distinguished between metaphrase (literal translation) and paraphrase. This distinction was adopted by English poet and translator John Dryden (1631–1700), who described translation as the judicious blending of these two modes of phrasing when selecting, in the target language, "counterparts," or equivalents, for the expressions used in the source language:
thumb|left|125px|Cicero
Dryden cautioned, however, against the license of "imitation", i.e., of adapted translation: "When a painter copies from the life... he has no privilege to alter features and lineaments..."
This general formulation of the central concept of translation — equivalence — is as adequate as any that has been proposed since Cicero and Horace, who, in 1st-century-BCE Rome, famously and literally cautioned against translating "word for word" (verbum pro verbo).
Despite occasional theoretical diversity, the actual practice of translation has hardly changed since antiquity. Except for some extreme metaphrasers in the early Christian period and the Middle Ages, and adapters in various periods (especially pre-Classical Rome, and the 18th century), translators have generally shown prudent flexibility in seeking equivalents — "literal" where possible, paraphrastic where necessary — for the original meaning and other crucial "values" (e.g., style, verse form, concordance with musical accompaniment or, in films, with speech articulatory movements) as determined from context.
thumb|100px|Samuel Johnson
In general, translators have sought to preserve the context itself by reproducing the original order of sememes, and hence word order — when necessary, reinterpreting the actual grammatical structure, for example, by shifting from active to passive voice, or vice versa. The grammatical differences between "fixed-word-order" languagesTypically, analytic languages. (e.g. English, French, German) and "free-word-order" languagesTypically, synthetic languages. (e.g., Greek, Latin, Polish, Russian) have been no impediment in this regard. The particular syntax (sentence-structure) characteristics of a text's source language are adjusted to the syntactic requirements of the target language.
thumb|left|110px|Martin Luther
When a target language has lacked terms that are found in a source language, translators have borrowed those terms, thereby enriching the target language. Thanks in great measure to the exchange of calques and loanwords between languages, and to their importation from other languages, there are few concepts that are "untranslatable" among the modern European languages.A greater problem, however, is translating terms relating to cultural concepts that have no equivalent in the target language. Some examples of this are described in the article, "Translating the 17th of May into English and other horror stories" , retrieved 2010-04-15. For full comprehension, such situations require the provision of a gloss.
Generally, the greater the contact and exchange that have existed between two languages, or between those languages and a third one, the greater is the ratio of metaphrase to paraphrase that may be used in translating among them. However, due to shifts in ecological niches of words, a common etymology is sometimes misleading as a guide to current meaning in one or the other language. For example, the English actual should not be confused with the cognate French actuel ("present", "current"), the Polish aktualny ("present", "current," "topical," "timely," "feasible"),Kasparek, "The Translator's Endless Toil", p. 85. the Swedish aktuell ("topical", "presently of importance"), the Russian актуальный ("urgent", "topical") or the Dutch actueel ("current").
The translator's role as a bridge for "carrying across" values between cultures has been discussed at least since Terence, the 2nd-century-BCE Roman adapter of Greek comedies. The translator's role is, however, by no means a passive, mechanical one, and so has also been compared to that of an artist. The main ground seems to be the concept of parallel creation found in critics such as Cicero. Dryden observed that "Translation is a type of drawing after life..." Comparison of the translator with a musician or actor goes back at least to Samuel Johnson’s remark about Alexander Pope playing Homer on a flageolet, while Homer himself used a bassoon.
thumb|left|110px|Johann Gottfried Herder
If translation be an art, it is no easy one. In the 13th century, Roger Bacon wrote that if a translation is to be true, the translator must know both languages, as well as the science that he is to translate; and finding that few translators did, he wanted to do away with translation and translators altogether.Kasparek, "The Translator's Endless Toil", pp. 85-86.
thumb|125px|Ignacy Krasicki
The translator of the Bible into German, Martin Luther, is credited with being the first European to posit that one translates satisfactorily only toward his own language. L.G. Kelly states that since Johann Gottfried Herder in the 18th century, "it has been axiomatic" that one translates only toward his own language.L.G. Kelly, cited in Kasparek, "The Translator's Endless Toil", p. 86.
Compounding the demands on the translator is the fact that no dictionary or thesaurus can ever be a fully adequate guide in translating. The Scottish historian Alexander Tytler, in his Essay on the Principles of Translation (1790), emphasized that assiduous reading is a more comprehensive guide to a language than are dictionaries. The same point, but also including listening to the spoken language, had earlier, in 1783, been made by the Polish poet and grammarian Onufry Andrzej Kopczyński.Kasparek, "The Translator's Endless Toil", p. 86.
The translator’s special role in society is described in a posthumous 1803 essay by "Poland's La Fontaine", the Roman Catholic Primate of Poland, poet, encyclopedist, author of the first Polish novel, and translator from French and Greek, Ignacy Krasicki:
Other traditions
Due to Western colonialism and cultural dominance in recent centuries, Western translation traditions have largely replaced other traditions. The Western traditions draw on both ancient and medieval traditions, and on more recent European innovations.
Though earlier approaches to translation are less commonly used today, they retain importance when dealing with their products, as when historians view ancient or medieval records to piece together events which took place in non-Western or pre-Western environments. Also, though heavily influenced by Western traditions and practiced by translators taught in Western-style educational systems, Chinese and related translation traditions retain some theories and philosophies unique to the Chinese tradition.
Near East
The traditions of translating material among Egyptian, Mesopotamian, Syriac, Anatolian and Hebrew go back several millennia. An early example of a bilingual document is the 1274 BCE Treaty of Kadesh.
Asia
thumb|200px|right|Diamond Sutra, translated by Kumārajīva
There is a separate tradition of translation in South, Southeast and East Asia (primarily of texts from the Indian and Chinese civilizations), connected especially with the rendering of religious, particularly Buddhist, texts and with the governance of the Chinese empire. Classical Indian translation is characterized by loose adaptation, rather than the closer translation more commonly found in Europe; and Chinese translation theory identifies various criteria and limitations in translation.
In the East Asian sphere of Chinese cultural influence, more important than translation per se has been the use and reading of Chinese texts, which also had substantial influence on the Japanese, Korean and Vietnamese languages, with substantial borrowings of Chinese vocabulary and writing system. Notable is the Japanese kanbun, a system for glossing Chinese texts for Japanese speakers.
Though Indianized states in Southeast Asia often translated Sanskrit material into the local languages, the literate elites and scribes more commonly used Sanskrit as their primary language of culture and government.
thumb|200px|Perry Link
Some special aspects of translating from Chinese are illustrated in Perry Link's discussion of translating the work of the Tang Dynasty poet Wang Wei (699–759 CE).Perry Link, "A Magician of Chinese Poetry" (review of Eliot Weinberger, with an afterword by Octavio Paz, 19 Ways of Looking at Wang Wei (with More Ways), New Directions, 88 pp., $10.95 [paper]; and Eliot Weinberger, The Ghosts of Birds, New Directions, 211 pp., $16.95 [paper]), The New York Review of Books, vol. LXIII, no. 18 (November 24, 2016), pp. 49–50.
Once the untranslatables have been set aside, the problems for a translator, especially of Chinese poetry, are two: What does the translator think the poetic line says? And once he thinks he understands it, how can he render it into the target language? Most of the difficulties, according to Link, arise in addressing the second problem, "where the impossibility of perfect answers spawns endless debate." Almost always at the center is the letter-versus-spirit dilemma. At the literalist extreme, efforts are made to dissect every conceivable detail about the language of the original Chinese poem. "The dissection, though," writes Link, "normally does to the art of a poem approximately what the scalpel of an anatomy instructor does to the life of a frog."Perry Link, "A Magician of Chinese Poetry", The New York Review of Books, vol. LXIII, no. 18 (November 24, 2016), p. 49.
Chinese characters, in avoiding grammatical specificity, offer advantages to poets (and, simultaneously, challenges to poetry translators) that are associated primarily with absences of subject, number, and tense.Perry Link, "A Magician of Chinese Poetry", The New York Review of Books, vol. LXIII, no. 18 (November 24, 2016), p. 50.
It is the norm in classical Chinese poetry, and common even in modern Chinese prose, to omit subjects; the reader or listener infers a subject. Western languages, however, ask by grammatical rule that subjects always be stated. Most of the translators cited in Eliot Weinberger's 19 Ways of Looking at Wang Wei supply a subject. Weinberger points out, however, that when an "I" as a subject is inserted, a "controlling individual mind of the poet" enters and destroys the effect of the Chinese line. Without a subject, he writes, "the experience becomes both universal and immediate to the reader." Another approach to the subjectlessness is to use the target language's passive voice; but this again particularizes the experience too much.Perry Link, "A Magician of Chinese Poetry", The New York Review of Books, vol. LXIII, no. 18 (November 24, 2016), p. 50.
Nouns have no number in Chinese. "If," writes Link, "you want to talk in Chinese about one rose, you may, but then you use a "measure word" to say "one blossom-of roseness."Perry Link, "A Magician of Chinese Poetry", The New York Review of Books, vol. LXIII, no. 18 (November 24, 2016), p. 50.
Chinese verbs are tense-less: there are several ways to specify when something happened or will happen, but verb tense is not one of them. For poets, this creates the great advantage of ambiguity. According to Link, Weinberger's insight about subjectlessness—that it produces an effect "both universal and immediate"—applies to timelessness as well.Perry Link, "A Magician of Chinese Poetry", The New York Review of Books, vol. LXIII, no. 18 (November 24, 2016), p. 50.
Link proposes a kind of uncertainty principle that may be applicable not only to translation from the Chinese language, but to all translation:
Islamic world
Translation of material into Arabic expanded after the creation of Arabic script in the 5th century, and gained great importance with the rise of Islam and Islamic empires. Arab translation initially focused primarily on politics, rendering Persian, Greek, even Chinese and Indic diplomatic materials into Arabic. It later focused on translating classical Greek and Persian works, as well as some Chinese and Indian texts, into Arabic for scholarly study at major Islamic learning centers, such as the Al-Karaouine (Fes, Morocco), Al-Azhar (Cairo, Egypt), and the Al-Nizamiyya of Baghdad. In terms of theory, Arabic translation drew heavily on earlier Near Eastern traditions as well as more contemporary Greek and Persian traditions.
Arabic translation efforts and techniques are important to Western translation traditions due to centuries of close contacts and exchanges. Especially after the Renaissance, Europeans began more intensive study of Arabic and Persian translations of classical works as well as scientific and philosophical works of Arab and oriental origins. Arabic and, to a lesser degree, Persian became important sources of material and perhaps of techniques for revitalized Western traditions, which in time would overtake the Islamic and oriental traditions.
In the 19th century, after the Middle East's Islamic clerics and copyists
thumb|150px|Muhammad Abduh
The movement to translate English and European texts transformed the Arabic and Ottoman Turkish languages, and new words, simplified syntax, and directness came to be valued over the previous convolutions. Educated Arabs and Turks in the new professions and the modernized civil service expressed skepticism, writes Christopher de Bellaigue, "with a freedom that is rarely witnessed today.... No longer was legitimate knowledge defined by texts in the religious schools, interpreted for the most part with stultifying literalness. It had come to include virtually any intellectual production anywhere in the world." One of the neologisms that, in a way, came to characterize the infusion of new ideas via translation was "darwiniya", or "Darwinism".Christopher de Bellaigue, "Dreams of Islamic Liberalism" (review of Marwa Elshakry, Reading Darwin in Arabic, 1860–1950), The New York Review of Books, vol. LXII, no. 10 (June 4, 2015), p. 77.
One of the most influential liberal Islamic thinkers of the time was Muhammad Abduh (1849–1905), Egypt's senior judicial authority—its chief mufti—at the turn of the 20th century and an admirer of Darwin who in 1903 visited Darwin's exponent Herbert Spencer at his home in Brighton. Spencer's view of society as an organism with its own laws of evolution paralleled Abduh's ideas.Christopher de Bellaigue, "Dreams of Islamic Liberalism" (review of Marwa Elshakry, Reading Darwin in Arabic, 1860–1950), The New York Review of Books, vol. LXII, no. 10 (June 4, 2015), p. 77–78.
After World War I, when Britain and France divided up the Middle East's countries, apart from Turkey, between them, pursuant to the Sykes-Picot agreement—in violation of solemn wartime promises of postwar Arab autonomy—there came an immediate reaction: the Muslim Brotherhood emerged in Egypt, the House of Saud took over the Hijaz, and regimes led by army officers came to power in Iran and Turkey. "[B]oth illiberal currents of the modern Middle East," writes de Bellaigue, "Islamism and militarism, received a major impetus from Western empire-builders." As often happens in countries undergoing social crisis, the aspirations of the Muslim world's translators and modernizers, such as Muhammad Abduh, largely had to yield to retrograde currents.Christopher de Bellaigue, "Dreams of Islamic Liberalism" (review of Marwa Elshakry, Reading Darwin in Arabic, 1860–1950), The New York Review of Books, vol. LXII, no. 10 (June 4, 2015), p. 78.
Fidelity and transparency
Fidelity (or faithfulness) and transparency, dual ideals in translation, are often at odds. A 17th-century French critic coined the phrase "les belles infidèles" to suggest that translations, like women, can be either faithful or beautiful, but not both.French philosopher and writer Gilles Ménage (1613-92) commented on translations by humanist Perrot Nicolas d'Ablancourt (1606-64): ("They remind me of a woman whom I greatly loved in Tours, who was beautiful but unfaithful.") Quoted in Amparo Hurtado Albir, (The Idea of Fidelity in Translation), Paris, Didier Érudition, 1990, p. 231.
Faithfulness is the extent to which a translation accurately renders the meaning of the source text, without distortion.
Transparency is the extent to which a translation appears to a native speaker of the target language to have originally been written in that language, and conforms to its grammar, syntax and idiom.
A translation that meets the first criterion is said to be "faithful"; a translation that meets the second, "idiomatic". The two qualities are not necessarily mutually exclusive.
The criteria for judging the fidelity of a translation vary according to the subject, type and use of the text, its literary qualities, its social or historical context, etc.
thumb|right|120px|Friedrich Schleiermacher
The criteria for judging the transparency of a translation appear more straightforward: an unidiomatic translation "sounds wrong"; and, in the extreme case of word-for-word translations generated by many machine-translation systems, often results in patent nonsense.
Nevertheless, in certain contexts a translator may consciously seek to produce a literal translation. Translators of literary, religious or historic texts often adhere as closely as possible to the source text, stretching the limits of the target language to produce an unidiomatic text. A translator may adopt expressions from the source language in order to provide "local color".
Current Western translation practice is dominated by the dual concepts of "fidelity" and "transparency". This has not always been the case, however; there have been periods, especially in pre-Classical Rome and in the 18th century, when many translators stepped beyond the bounds of translation proper into the realm of adaptation.
thumb|100px|Lawrence Venuti
Adapted translation retains currency in some non-Western traditions. The Indian epic, the Ramayana, appears in many versions in the various Indian languages, and the stories are different in each. Similar examples are to be found in medieval Christian literature, which adjusted the text to local customs and mores.
Many non-transparent-translation theories draw on concepts from German Romanticism, the most obvious influence being the German theologian and philosopher Friedrich Schleiermacher. In his seminal lecture "On the Different Methods of Translation" (1813) he distinguished between translation methods that move "the writer toward [the reader]", i.e., transparency, and those that move the "reader toward [the author]", i.e., an extreme fidelity to the foreignness of the source text. Schleiermacher favored the latter approach; he was motivated, however, not so much by a desire to embrace the foreign, as by a nationalist desire to oppose France's cultural domination and to promote German literature.
In recent decades, prominent advocates of such "non-transparent" translation have included the French scholar Antoine Berman, who identified twelve deforming tendencies inherent in most prose translations,Antoine Berman, L'épreuve de l'étranger, 1984. and the American theorist Lawrence Venuti, who has called on translators to apply "foreignizing" rather than domesticating translation strategies.Lawrence Venuti, "Call to Action", in The Translator's Invisibility, 1994.
Equivalence
The question of fidelity vs. transparency has also been formulated in terms of, respectively, "formal equivalence" and "dynamic [or functional] equivalence". The latter expressions are associated with the translator Eugene Nida and were originally coined to describe ways of translating the Bible, but the two approaches are applicable to any translation.
"Formal equivalence" corresponds to "metaphrase", and "dynamic equivalence" to "paraphrase".
"Dynamic equivalence" (or "functional equivalence") conveys the essential thoughts expressed in a source text — if necessary, at the expense of literality, original sememe and word order, the source text's active vs. passive voice, etc.
By contrast, "formal equivalence" (sought via "literal" translation) attempts to render the text literally, or "word for word" (the latter expression being itself a word-for-word rendering of the classical Latin verbum pro verbo) — if necessary, at the expense of features natural to the target language.
There is, however, no sharp boundary between functional and formal equivalence. On the contrary, they represent a spectrum of translation approaches. Each is used at various times and in various contexts by the same translator, and at various points within the same text — sometimes simultaneously. Competent translation entails the judicious blending of functional and formal equivalents.Christopher Kasparek, "The Translator's Endless Toil", pp. 83-87.
Common pitfalls in translation, especially when practiced by inexperienced translators, involve false equivalents such as "false friends" and false cognates.
Back-translation
A "back-translation" is a translation of a translated text back into the language of the original text, made without reference to the original text.
Comparison of a back-translation with the original text is sometimes used as a check on the accuracy of the original translation, much as the accuracy of a mathematical operation is sometimes checked by reversing the operation. But the results of such reverse-translation operations, while useful as approximate checks, are not always precisely reliable. Back-translation must in general be less accurate than back-calculation because linguistic symbols (words) are often ambiguous, whereas mathematical symbols are intentionally unequivocal.
In the context of machine translation, a back-translation is also called a "round-trip translation."
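The round-trip idea lends itself to a simple automated spot-check: translate a text, back-translate it without reference to the original, and measure how much of the original wording survives. Below is a minimal sketch in Python; the two translation steps are hand-written stand-ins, not calls to any real translation service, and the similarity score is only the approximate check described above.

from difflib import SequenceMatcher

def round_trip_check(original, translate, back_translate):
    """Return a 0..1 similarity score between the original text and its
    back-translation. A low score signals trouble, but a high score does
    not prove that the forward translation was faithful."""
    forward = translate(original)         # source -> target language
    round_trip = back_translate(forward)  # target -> source, without seeing the original
    return SequenceMatcher(None, original.lower(), round_trip.lower()).ratio()

# Toy usage with fixed stand-in "translations".
to_french = lambda s: "La traduction est la communication du sens d'un texte."
to_english = lambda s: "Translation is the communication of the meaning of a text."
score = round_trip_check("Translation is the communication of the meaning of a text.",
                         to_french, to_english)
print(f"round-trip similarity: {score:.2f}")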
When translations are produced of material used in medical clinical trials, such as informed-consent forms, a back-translation is often required by the ethics committee or institutional review board.
thumb|left|95px|Mark Twain, back-translator
Mark Twain provided humorously telling evidence for the frequent unreliability of back-translation when he issued his own back-translation of a French translation of his short story, "The Celebrated Jumping Frog of Calaveras County". He published his back-translation in a 1903 volume together with his English-language original, the French translation, and a "Private History of the 'Jumping Frog' Story". The latter included a synopsized adaptation of his story that Twain stated had appeared, unattributed to Twain, in a Professor Sidgwick’s Greek Prose Composition (p. 116) under the title, "The Athenian and the Frog"; the adaptation had for a time been taken for an independent ancient Greek precursor to Twain's "Jumping Frog" story.Mark Twain, The Jumping Frog: In English, Then in French, and Then Clawed Back into a Civilized Language Once More by Patient, Unremunerated Toil, illustrated by F. Strothman, New York and London, Harper & Brothers, Publishers, MCMIII [1903].
When a historic document survives only in translation, the original having been lost, researchers sometimes undertake back-translation in an effort to reconstruct the original text. An example involves the novel The Saragossa Manuscript by the Polish aristocrat Jan Potocki (1761–1815), who wrote the novel in French and anonymously published fragments in 1804 and 1813–14. Portions of the original French-language manuscript were subsequently lost; however, the missing fragments survived in a Polish translation that was made by Edmund Chojecki in 1847 from a complete French copy, now lost. French-language versions of the complete Saragossa Manuscript have since been produced, based on extant French-language fragments and on French-language versions that have been back-translated from Chojecki’s Polish version.Czesław Miłosz, The History of Polish Literature, pp. 193–94.
Similarly, when historians suspect that a document is actually a translation from another language, back-translation into that hypothetical original language can provide supporting evidence by showing that such characteristics as idioms, puns, peculiar grammatical structures, etc., are in fact derived from the original language.
For example, the known text of the Till Eulenspiegel folk tales is in High German but contains puns that work only when back-translated to Low German. This seems clear evidence that these tales (or at least large portions of them) were originally written in Low German and translated into High German by an over-metaphrastic translator.
Similarly, supporters of Aramaic primacy—of the view that the Christian New Testament or its sources were originally written in the Aramaic language—seek to prove their case by showing that difficult passages in the existing Greek text of the New Testament make much better sense when back-translated to Aramaic: that, for example, some incomprehensible references are in fact Aramaic puns that do not work in Greek.
Due to similar indications, it is believed that the 2nd century Gnostic Gospel of Judas, which survives only in Coptic, was originally written in Greek.
Translators
Competent translators show the following attributes:
a very good knowledge of the language, written and spoken, from which they are translating (the source language);
an excellent command of the language into which they are translating (the target language);
familiarity with the subject matter of the text being translated;
a profound understanding of the etymological and idiomatic correlates between the two languages; and
a finely tuned sense of when to metaphrase ("translate literally") and when to paraphrase, so as to assure true rather than spurious equivalents between the source- and target-language texts.Christopher Kasparek, "Prus' Pharaoh and Curtin's Translation," The Polish Review, vol. XXXI, nos. 2–3 (1986), p. 135.
A competent translator is not only bilingual but bicultural. A language is not merely a collection of words and of rules of grammar and syntax for generating sentences, but also a vast interconnecting system of connotations and cultural references whose mastery, writes linguist Mario Pei, "comes close to being a lifetime job."Mario Pei, The Story of Language, p. 424.
The complexity of the translator's task cannot be overstated; one author suggests that becoming an accomplished translator—after having already acquired a good basic knowledge of both languages and cultures—may require a minimum of ten years' experience. Viewed in this light, it is a serious misconception to assume that a person who has fair fluency in two languages will, by virtue of that fact alone, be consistently competent to translate between them.
The translator's role in relation to a text has been compared to that of an artist, e.g., a musician or actor, who interprets a work of art. Translation, like other arts, inescapably involves choice, and choice implies interpretation."Interpretation" in this sense is to be distinguished from the function of an "interpreter" who translates orally or by the use of sign language. The English-language novelist Joseph Conrad, whose writings Zdzisław Najder has described as verging on "auto-translation" from Conrad's Polish and French linguistic personae,Zdzisław Najder, Joseph Conrad: A Life, 2007, p. IX. advised his niece and Polish translator Aniela Zagórska:
Conrad thought C.K. Scott Moncrieff's English translation of Marcel Proust's À la recherche du temps perdu (In Search of Lost Time—or, in Scott Moncrieff's rendering, Remembrance of Things Past) to be preferable to the French original.Walter Kaiser, "A Hero of Translation" (a review of Jean Findlay, Chasing Lost Time: The Life of C.K. Scott Moncrieff: Soldier, Spy, and Translator), The New York Review of Books, vol. LXII, no. 10 (June 4, 2015), p. 55.
A translator may render only parts of the original text, provided he indicates that this is what he is doing. But a translator should not assume the role of censor and surreptitiously delete or bowdlerize passages merely to please a political or moral interest.Billiani, Francesca (2001)
Translation has served as a school of writing for many authors. Translators, including monks who spread Buddhist texts in East Asia, and the early modern European translators of the Bible, in the course of their work have shaped the very languages into which they have translated. They have acted as bridges for conveying knowledge between cultures; and along with ideas, they have imported from the source languages, into their own languages, loanwords and calques of grammatical structures, idioms and vocabulary.
Interpreting
thumb|190px|left|Cortés (seated) and La Malinche (beside him) at Xaltelolco
thumb|250px|Lewis and Clark and their Native American interpreter, Sacagawea: 150th-anniversary commemorative stamp
Interpreting, or "interpretation," is the facilitation of oral or sign-language communication, either simultaneously or consecutively, between two, or among three or more, speakers who are not speaking, or signing, the same language.
The term "interpreting," rather than "interpretation," is preferentially used for this activity by Anglophone translators, to avoid confusion with other meanings of the word "interpretation."
Unlike English, many languages do not employ two separate words to denote the activities of written and live-communication (oral or sign-language) translators.For example, in Polish, a "translation" is "przekład" or "tłumaczenie." Both "translator" and "interpreter" are "tłumacz." For a time in the 18th century, however, for "translator," some writers used a word, "przekładowca," that is no longer in use. Edward Balcerzan, (Polish Writers on the Art of Translation, 1440–1974: an Anthology), 1977, passim. Even English does not always make the distinction, frequently using "translation" as a synonym for "interpreting."
Interpreters have sometimes played crucial roles in history. A prime example is La Malinche, also known as Malintzin, Malinalli and Doña Marina, an early-16th-century Nahua woman from the Mexican Gulf Coast. As a child she had been sold or given to Maya slave-traders from Xicalango, and thus had become bilingual. Subsequently given along with other women to the invading Spaniards, she became instrumental in the Spanish conquest of Mexico, acting as interpreter, adviser, intermediary and lover to Hernán Cortés.Hugh Thomas, Conquest: Montezuma, Cortes and the Fall of Old Mexico, New York, Simon and Schuster, 1993, pp. 171-72.
Nearly three centuries later, in the United States, a comparable role as interpreter was played for the Lewis and Clark Expedition of 1804–6 by Sacagawea. As a child, the Lemhi Shoshone woman had been kidnapped by Hidatsa Indians and thus had become bilingual. Sacagawea facilitated the expedition's traverse of the North American continent to the Pacific Ocean."Sacagawea", The Encyclopedia Americana, 1986, volume 24, p. 72. Four decades later, in 1846, the Pacific would become the western border of the United States.
Sworn translation
Sworn translation, also called "certified translation," aims at legal equivalence between two documents written in different languages. It is performed by someone authorized to do so by local regulations. Some countries recognize declared competence. Others require the translator to be an official state appointee.
Telephone
Many commercial services exist that will interpret spoken language via telephone. There is also at least one custom-built mobile device that does the same thing. The device connects users to human interpreters who can translate between English and 180 other languages.
Internet
Web-based human translation is generally favored by companies and individuals that wish to secure more accurate translations. In view of the frequent inaccuracy of machine translations, human translation remains the most reliable, most accurate form of translation available. With the recent emergence of translation crowdsourcing, translation-memory techniques, and internet applications, translation agencies have been able to provide on-demand human-translation services to businesses and individuals.
While not instantaneous like its machine counterparts such as Google Translate and Yahoo! Babel Fish, web-based human translation has been gaining popularity by providing relatively fast, accurate translation for business communications, legal documents, medical records, and software localization.Speaklike offers human-powered translation for blogs | VentureBeat Web-based human translation also appeals to private website users and bloggers.
Computer assist
Computer-assisted translation (CAT), also called "computer-aided translation," "machine-aided human translation" (MAHT) and "interactive translation," is a form of translation wherein a human translator creates a target text with the assistance of a computer program. The machine supports a human translator.
Computer-assisted translation can include standard dictionary and grammar software. The term, however, normally refers to a range of specialized programs available to the translator, including translation-memory, terminology-management, concordance, and alignment programs.
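As a rough illustration of the translation-memory idea, the following Python sketch (not modeled on any particular CAT product; the example segments are invented) reuses a stored translation when the same or a closely similar source segment recurs:

```python
# A toy illustration of a translation memory: previously translated segments
# are stored and offered again when the same or a similar source segment recurs.
import difflib

translation_memory = {
    "Press the power button.": "Appuyez sur le bouton d'alimentation.",
    "Close the lid.":          "Fermez le couvercle.",
}

def suggest(segment: str, threshold: float = 0.8):
    """Return the stored translation of the closest matching source segment."""
    matches = difflib.get_close_matches(segment, translation_memory,
                                        n=1, cutoff=threshold)
    return translation_memory[matches[0]] if matches else None

print(suggest("Press the power button."))      # exact match is reused
print(suggest("Press the red power button."))  # fuzzy match above the threshold
print(suggest("Restart the computer."))        # None: no sufficiently close match
```

Commercial tools add segmentation, terminology management and far more sophisticated fuzzy matching, but the reuse principle sketched here is the same.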
These tools speed up and facilitate human translation, but they do not provide translation. That is a function of tools known broadly as machine translation.
Machine translation
thumb|100px|Claude Piron
Machine translation (MT) is a process whereby a computer program analyzes a source text and, in principle, produces a target text without human intervention. In reality, however, machine translation typically does involve human intervention, in the form of pre-editing and post-editing.See the annually performed NIST tests since 2001 and Bilingual Evaluation Understudy
With proper terminology work, with preparation of the source text for machine translation (pre-editing), and with reworking of the machine translation by a human translator (post-editing), commercial machine-translation tools can produce useful results, especially if the machine-translation system is integrated with a translation-memory or globalization-management system.
Unedited machine translation is publicly available through tools on the Internet such as Google Translate, Babel Fish, Babylon, and StarDict. These produce rough translations that, under favorable circumstances, "give the gist" of the source text.
With the Internet, translation software can help non-native-speaking individuals understand web pages published in other languages. Whole-page-translation tools are of limited utility, however, since they offer only a limited potential understanding of the original author's intent and context; translated pages tend to be more humorous and confusing than enlightening.
Interactive translations with pop-up windows are becoming more popular. These tools show one or more possible equivalents for each word or phrase. Human operators merely need to select the likeliest equivalent as the mouse glides over the foreign-language text. Possible equivalents can be grouped by pronunciation.
Also, companies such as Ectaco produce pocket devices that provide machine translations.
Relying exclusively on unedited machine translation, however, ignores the fact that communication in human language is context-embedded and that it takes a person to comprehend the context of the original text with a reasonable degree of probability. It is certainly true that even purely human-generated translations are prone to error; therefore, to ensure that a machine-generated translation will be useful to a human being and that publishable-quality translation is achieved, such translations must be reviewed and edited by a human.J.M. Cohen observes (p.14): "Scientific translation is the aim of an age that would reduce all activities to techniques. It is impossible however to imagine a literary-translation machine less complex than the human brain itself, with all its knowledge, reading, and discrimination."
Claude Piron writes that machine translation, at its best, automates the easier part of a translator's job; the harder and more time-consuming part usually involves doing extensive research to resolve ambiguities in the source text, which the grammatical and lexical exigencies of the target language require to be resolved.Claude Piron, Le défi des langues (The Language Challenge), Paris, L'Harmattan, 1994. Such research is a necessary prelude to the pre-editing necessary in order to provide input for machine-translation software, such that the output will not be meaningless.
Literary translation
Translation of literary works (novels, short stories, plays, poems, etc.) is considered a literary pursuit in its own right. In Canadian literature, for example, figures such as Sheila Fischman, Robert Dickson and Linda Gaboriau are notable specifically as translators, and the Governor General's Awards annually present prizes for the best English-to-French and French-to-English literary translations.
Other writers, among many who have made a name for themselves as literary translators, include Vasily Zhukovsky, Tadeusz Boy-Żeleński, Vladimir Nabokov, Jorge Luis Borges, Robert Stiller and Haruki Murakami.
History
The first important translation in the West was that of the Septuagint, a collection of Jewish Scriptures translated into early Koine Greek in Alexandria between the 3rd and 1st centuries BCE. The dispersed Jews had forgotten their ancestral language and needed Greek versions (translations) of their Scriptures.J.M. Cohen, p. 12.
Throughout the Middle Ages, Latin was the lingua franca of the western learned world. The 9th-century Alfred the Great, king of Wessex in England, was far ahead of his time in commissioning vernacular Anglo-Saxon translations of Bede's Ecclesiastical History and Boethius' Consolation of Philosophy. Meanwhile, the Christian Church frowned on even partial adaptations of St. Jerome's Vulgate of ca. 384 CE,J.M Cohen, pp. 12-13. the standard Latin Bible.
In Asia, the spread of Buddhism led to large-scale ongoing translation efforts spanning well over a thousand years. The Tangut Empire was especially efficient in such efforts; exploiting the then newly invented block printing, and with the full support of the government (contemporary sources describe the Emperor and his mother personally contributing to the translation effort, alongside sages of various nationalities), the Tanguts took mere decades to translate volumes that had taken the Chinese centuries to render.
The Arabs undertook large-scale efforts at translation. Having conquered the Greek world, they made Arabic versions of its philosophical and scientific works. During the Middle Ages, translations of some of these Arabic versions were made into Latin, chiefly at Córdoba in Spain.J.M. Cohen, p. 13. King Alfonso X el Sabio (Alphonse the Wise) of Castile in the 13th century promoted this effort by founding a Schola Traductorum (School of Translation) in Toledo. There Arabic texts, Hebrew texts, and Latin texts were translated into the other tongues by Muslim, Jewish and Christian scholars, who also argued the merits of their respective religions. Latin translations of Greek and original Arab works of scholarship and science helped advance European Scholasticism, and thus European science and culture.
thumb|150px|Geoffrey Chaucer
The broad historic trends in Western translation practice may be illustrated by the example of translation into English.
The first fine translations into English were made in the 14th century by Geoffrey Chaucer, who adapted from the Italian of Giovanni Boccaccio in his own Knight's Tale and Troilus and Criseyde; began a translation of the French-language Roman de la Rose; and completed a translation of Boethius from the Latin. Chaucer founded an English poetic tradition on adaptations and translations from those earlier-established literary languages.
The first great English translation was the Wycliffe Bible (ca. 1382), which showed the weaknesses of an underdeveloped English prose. Only at the end of the 15th century did the great age of English prose translation begin with Thomas Malory's Le Morte Darthur—an adaptation of Arthurian romances so free that it can, in fact, hardly be called a true translation. The first great Tudor translations are, accordingly, the Tyndale New Testament (1525), which influenced the Authorized Version (1611), and Lord Berners' version of Jean Froissart's Chronicles (1523–25).
thumb|100px|Marsilio Ficino
Meanwhile, in Renaissance Italy, a new period in the history of translation had opened in Florence with the arrival, at the court of Cosimo de' Medici, of the Byzantine scholar Georgius Gemistus Pletho shortly before the fall of Constantinople to the Turks (1453). A Latin translation of Plato's works was undertaken by Marsilio Ficino. This and Erasmus' Latin edition of the New Testament led to a new attitude to translation. For the first time, readers demanded rigor of rendering, as philosophical and religious beliefs depended on the exact words of Plato, Aristotle and Jesus.
Non-scholarly literature, however, continued to rely on adaptation. France's Pléiade, England's Tudor poets, and the Elizabethan translators adapted themes by Horace, Ovid, Petrarch and modern Latin writers, forming a new poetic style on those models. The English poets and translators sought to supply a new public, created by the rise of a middle class and the development of printing, with works such as the original authors would have written, had they been writing in England in that day.
thumb|left|100px|Edward FitzGerald
The Elizabethan period of translation saw considerable progress beyond mere paraphrase toward an ideal of stylistic equivalence, but even to the end of this period, which actually reached to the middle of the 17th century, there was no concern for verbal accuracy.J.M. Cohen, p. 14.
In the second half of the 17th century, the poet John Dryden sought to make Virgil speak "in words such as he would probably have written if he were living and an Englishman". Dryden, however, discerned no need to emulate the Roman poet's subtlety and concision. Similarly, Homer suffered from Alexander Pope's endeavor to reduce the Greek poet's "wild paradise" to order.
thumb|95px|Benjamin Jowett
Throughout the 18th century, the watchword of translators was ease of reading. Whatever they did not understand in a text, or thought might bore readers, they omitted. They cheerfully assumed that their own style of expression was the best, and that texts should be made to conform to it in translation. For scholarship they cared no more than had their predecessors, and they did not shrink from making translations from translations in third languages, or from languages that they hardly knew, or—as in the case of James Macpherson's "translations" of Ossian—from texts that were actually of the "translator's" own composition.
The 19th century brought new standards of accuracy and style. In regard to accuracy, observes J.M. Cohen, the policy became "the text, the whole text, and nothing but the text", except for any bawdy passages and the addition of copious explanatory footnotes.For instance, Henry Benedict Mackey's translation of St. Francis de Sales's "Treatise on the Love of God" consistently omits the saint's analogies comparing God to a nursing mother, references to Bible stories such as the rape of Tamar, and so forth. In regard to style, the Victorians' aim, achieved through far-reaching metaphrase (literality) or pseudo-metaphrase, was to constantly remind readers that they were reading a foreign classic. An exception was the outstanding translation in this period, Edward FitzGerald's Rubaiyat of Omar Khayyam (1859), which achieved its Oriental flavor largely by using Persian names and discreet Biblical echoes and actually drew little of its material from the Persian original.
In advance of the 20th century, a new pattern was set in 1871 by Benjamin Jowett, who translated Plato into simple, straightforward language. Jowett's example was not followed, however, until well into the new century, when accuracy rather than style became the principal criterion.
Modern translation
As a language evolves, texts in an earlier version of the language—original texts, or old translations—may become difficult for modern readers to understand. Such a text may therefore be translated into more modern language, producing a "modern translation" (e.g., a "modern English translation" or "modernized translation").
Such modern rendering is applied either to literature from classical languages such as Latin or Greek, notably to the Bible (see "Modern English Bible translations"), or to literature from an earlier stage of the same language, as with the works of William Shakespeare (which are largely understandable by a modern audience, though with some difficulty) or with Geoffrey Chaucer's Canterbury Tales (which is not generally understandable by modern readers).
Modern translation is applicable to any language with a long literary history. For example, in Japanese the 11th-century Tale of Genji is generally read in modern translation (see "Genji: modern readership").
Modern translation often involves literary scholarship and textual revision, as there is frequently not one single canonical text. This is particularly noteworthy in the case of the Bible and Shakespeare, where modern scholarship can result in substantive textual changes.
Modern translation meets with opposition from some traditionalists. In English, some readers prefer the Authorized King James Version of the Bible to modern translations, and Shakespeare in the original of ca. 1600 to modern translations.
An opposite process involves translating modern literature into classical languages, for the purpose of extensive reading (for examples, see "List of Latin translations of modern literature").
Poetry
thumb|100px|Douglas Hofstadter
Poetry presents special challenges to translators, given the importance of a text's formal aspects, in addition to its content. In his influential 1959 paper "On Linguistic Aspects of Translation", the Russian-born linguist and semiotician Roman Jakobson went so far as to declare that "poetry by definition [is] untranslatable". Robert Frost was equally pessimistic: "Poetry is that which is lost in translation."
In 1974 the American poet James Merrill wrote a poem, "Lost in Translation", which in part explores this idea. The question was also discussed in Douglas Hofstadter's 1997 book, Le Ton beau de Marot; he argues that a good translation of a poem must convey as much as possible of not only its literal meaning but also its form and structure (meter, rhyme or alliteration scheme, etc.).A discussion of Hofstadter's otherwise latitudinarian views on translation is found in Tony Dokoupil, "Translation: Pardon My French: You Suck at This," Newsweek, May 18, 2009, p. 10.
Poetry may be rendered in either a faithful or a free translation: a faithful translation focuses on the essential information, whereas a free translation focuses on the general content and structure.Jiří Levý, The Art of Translation, Philadelphia, John Benjamins Publishing Company, 2011, p. 84.
When translating English poems into French, pentameter lines are conventionally replaced with alexandrines; however, not all language pairs have a standard conversion of form and structure for this type of literature.John Hollander, The Poet’s Other Voice. By Edwin Honig. University of Massachusetts Press, 1985, p. 31. Compared to prose translation, which favours meaning over form, poetry translation is considered to favour sound and form over meaning.Susan Ouriou, Beyond Words / Translating the World, Alberta, Banff Centre Press, 2010, p. 77. Translators thus face the common problem of recreating meaning while combining euphonious sounds and rhythm to produce an effective, rich-sounding text in the target language.Susan Ouriou, Beyond Words / Translating the World, Alberta, Banff Centre Press, 2010, p. 79.
Book titles
Book-title translations can be either descriptive or symbolic. Descriptive book titles, for example Antoine de Saint-Exupéry’s Le Petit Prince (The Little Prince), are meant to be informative, and can name the protagonist, and indicate the theme of the book. An example of a symbolic book title is Stieg Larsson’s The Girl with the Dragon Tattoo, whose original Swedish title is Män som hatar kvinnor (Men Who Hate Women). Such symbolic book titles usually indicate the theme, issues, or atmosphere of the work.
When translators are working with long book titles, the translated titles are often shorter and indicate the theme of the book.Jiří Levý, The Art of Translation, Philadelphia, John Benjamins Publishing Company, 2011, p. 122.
Plays
The translation of plays poses many problems such as the added element of actors, speech duration, translation literalness, and the relationship between the arts of drama and acting. Successful play translators are able to create language that allows the actor and the playwright to work together effectively.Harry G. Carlson, "Problems in Play Translation", Educational Theatre Journal 16, no. 1 (1964), pp. 55-58. , p. 55. Play translators must also take into account several other aspects: the final performance, varying theatrical and acting traditions, characters’ speaking styles, modern theatrical discourse, and even the acoustics of the auditorium, i.e., whether certain words will have the same effect on the new audience as they had on the original audience.Jiří Levý, The Art of Translation, Philadelphia, John Benjamins Publishing Company, 2011, pp. 129-39.
Audiences in Shakespeare’s time were more accustomed than modern playgoers to actors having longer stage time.Harry G. Carlson, "Problems in Play Translation", Educational Theatre Journal 16, no. 1 (1964), pp. 55-58. , p. 56. Modern translators tend to simplify the sentence structures of earlier dramas, which included compound sentences with intricate hierarchies of subordinate clauses.Jiří Levý, The Art of Translation, Philadelphia, John Benjamins Publishing Company, 2011, p. 129.Loren Kruger, "Keywords and Contexts: Translating Theatre Theory", Theatre Journal 59, no. 3 (2007), pp. 355-58.
Chinese literature
In translating Chinese literature, translators struggle to find true fidelity in translating into the target language. In The Poem Behind the Poem, Barnstone argues that poetry “can’t be made to sing through a mathematics that doesn’t factor in the creativity of the translator.”Frank Stewart, The Poem Behind the Poem, Washington, Copper Canyon Press, 2004.
A notable piece of work translated into English is the Wen Xuan, an anthology representative of major works of literature. Translating this work requires a high knowledge of the genres presented in the book, such as poetic forms, various prose types including memorials, letters, proclamations, praise poems, edicts, and historical, philosophical and political disquisitions, threnodies and laments for the dead, and examination essays. Thus the literary translator must be familiar with the writings, lives, and thought of a large number of its 130 authors, making the Wen Xuan one of the most difficult literary works to translate.Eugene Eoyang and Lin Yao-fu, Translating Chinese Literature, Indiana University Press, 1995, pp. 42–43.
Translation generally, much as with Kurt Gödel’s conception of mathematics, requires, to varying extents, more information than appears on the page of text being translated.
Sung texts
thumb|left|100px|Catherine Winkworth
Translation of a text that is sung in vocal music for the purpose of singing in another language—sometimes called "singing translation"—is closely linked to translation of poetry because most vocal music, at least in the Western tradition, is set to verse, especially verse in regular patterns with rhyme. (Since the late 19th century, musical setting of prose and free verse has also been practiced in some art music, though popular music tends to remain conservative in its retention of stanzaic forms with or without refrains.) A rudimentary example of translating poetry for singing is church hymns, such as the German chorales translated into English by Catherine Winkworth.For another example of poetry translation, including translation of sung texts, see Rhymes from Russia.
Translation of sung texts is generally much more restrictive than translation of poetry, because in the former there is little or no freedom to choose between a versified translation and a translation that dispenses with verse structure. One might modify or omit rhyme in a singing translation, but the assignment of syllables to specific notes in the original musical setting places great challenges on the translator. There is the option in prose sung texts, less so in verse, of adding or deleting a syllable here and there by subdividing or combining notes, respectively, but even with prose the process is almost like strict verse translation because of the need to stick as closely as possible to the original prosody of the sung melodic line.
Other considerations in writing a singing translation include repetition of words and phrases, the placement of rests and/or punctuation, the quality of vowels sung on high notes, and rhythmic features of the vocal line that may be more natural to the original language than to the target language. A sung translation may be considerably or completely different from the original, thus resulting in a contrafactum.
Translations of sung texts—whether of the above type meant to be sung or of a more or less literal type meant to be read—are also used as aids to audiences, singers and conductors, when a work is being sung in a language not known to them. The most familiar types are translations presented as subtitles or surtitles projected during opera performances, those inserted into concert programs, and those that accompany commercial audio CDs of vocal music. In addition, professional and amateur singers often sing works in languages they do not know (or do not know well), and translations are then used to enable them to understand the meaning of the words they are singing.
Religious texts
thumb|left|120px|Saint Jerome, patron saint of translators and encyclopedists
thumb|150px|right|Mistranslation: horned Moses, by Michelangelo
An important role in history has been played by translation of religious texts. Such translations may be influenced by tension between the text and the religious values the translators wish to convey. For example, Buddhist monks who translated the Indian sutras into Chinese occasionally adjusted their translations to better reflect China's distinct culture, emphasizing notions such as filial piety.
One of the first recorded instances of translation in the West was the rendering of the Old Testament into Greek in the 3rd century BCE. The translation is known as the "Septuagint", a name that refers to the seventy translators (seventy-two, in some versions) who were commissioned to translate the Bible at Alexandria, Egypt. Each translator worked in solitary confinement in his own cell, and according to legend all seventy versions proved identical. The Septuagint became the source text for later translations into many languages, including Latin, Coptic, Armenian and Georgian.
Still considered one of the greatest translators in history, for having rendered the Bible into Latin, is Jerome of Stridon, the patron saint of translation. For centuries the Roman Catholic Church used his translation (known as the Vulgate), though even this translation at first stirred controversy.
The period preceding, and contemporary with, the Protestant Reformation saw the translation of the Bible into local European languages—a development that contributed to Western Christianity's split into Roman Catholicism and Protestantism due to disparities between Catholic and Protestant versions of crucial words and passages (although the Protestant movement was largely based on other things, such as a perceived need for reformation of the Roman Catholic Church to eliminate corruption). Lasting effects on the religions, cultures and languages of their respective countries have been exerted by such Bible translations as Martin Luther's into German, Jakub Wujek's into Polish, and the King James Bible's translators' into English. Debate and religious schism over different translations of religious texts remain to this day, as demonstrated by, for example, the King James Only movement.
A famous "mistranslation" of the Bible is the rendering of the Hebrew word (keren), which has several meanings, as "horn" in a context where it also means "beam of light". As a result, for centuries artists have depicted Moses the Lawgiver with horns growing out of his forehead; an example is Michelangelo's famous sculpture.
See also
References
Bibliography
Christopher de Bellaigue, "Dreams of Islamic Liberalism" (review of Marwa Elshakry, Reading Darwin in Arabic, 1860–1950, University of Chicago Press, 439 pp., $45.00), The New York Review of Books, vol. LXII, no. 10 (June 4, 2015), pp. 77–78.
Kaiser, Walter, "A Hero of Translation" (a review of Jean Findlay, Chasing Lost Time: The Life of C.K. Scott Moncrieff: Soldier, Spy, and Translator, Farrar, Straus and Giroux, 351 pp., $30.00), The New York Review of Books, vol. LXII, no. 10 (June 4, 2015), pp. 54–56.
Link, Perry, "A Magician of Chinese Poetry" (review of Eliot Weinberger, with an afterword by Octavio Paz, 19 Ways of Looking at Wang Wei (with More Ways), New Directions, 88 pp., $10.95 [paper]; and Eliot Weinberger, The Ghosts of Birds, New Directions, 211 pp., $16.95 [paper]), The New York Review of Books, vol. LXIII, no. 18 (November 24, 2016), pp. 49–50.
Snell-Hornby, Mary; Schopp, Jürgen F. (2013). "Translation", European History Online, Mainz, Institute of European History, retrieved 29 August 2013.
Tatarkiewicz, Władysław, O doskonałości (On Perfection), Warsaw, Państwowe Wydawnictwo Naukowe, 1976; English translation by Christopher Kasparek subsequently serialized in Dialectics and Humanism: The Polish Philosophical Quarterly, vol. VI, no. 4 (autumn 1979)—vol. VIII, no 2 (spring 1981), and reprinted in Władysław Tatarkiewicz, On Perfection, Warsaw University Press, Center of Universalism, 1992, pp. 9–51 (the book is a collection of papers by and about Professor Tatarkiewicz).
External links
(Self)Translation and the Poetry of the ‘In-between’ Cordite Poetry Review
Exploring and Renegotiating Transparency in Poetry Translation Cordite Poetry Review
UNESCO Clearing House for Literary Translation
1920 text by Flora Ross Amos from the series Columbia University studies in English and comparative literature.
Guide to Translation of Legal Materials
List of universities in Europe and North America offering translation courses
Glossary of Translation Terms, English, Spanish and Definitions
Category:Applied linguistics
Category:Communication
Category:Meaning (philosophy of language)
USB
USB, short for Universal Serial Bus, is an industry standard initially developed in the mid-1990s that defines the cables, connectors and communications protocols used in a bus for connection, communication, and power supply between computers and electronic devices. It is currently developed by the USB Implementers Forum (USB-IF).
USB was designed to standardize the connection of computer peripherals (including keyboards, pointing devices, digital cameras, printers, portable media players, disk drives and network adapters) to personal computers, both to communicate and to supply electric power. It has become commonplace on other devices, such as smartphones, PDAs and video game consoles. USB has effectively replaced a variety of earlier interfaces, such as parallel ports, as well as separate power chargers for portable devices.
Overview
In general, there are three basic formats of USB connectors: the default or standard format intended for desktop or portable equipment (for example, on USB flash drives), the mini intended for mobile equipment (now deprecated except for the Mini-B, which is used on many cameras), and the thinner micro size, for low-profile mobile equipment (most modern mobile phones). There are also five modes of USB data transfer, in order of increasing bandwidth: Low Speed (from 1.0), Full Speed (from 1.0), High Speed (from 2.0), SuperSpeed (from 3.0), and SuperSpeed+ (from 3.1); the modes have differing hardware and cabling requirements. USB devices have some choice of implemented modes, and the USB version number is not a reliable indicator of the implemented modes. Modes are identified by their names and icons, and the specification suggests that plugs and receptacles be colour-coded (SuperSpeed is identified by blue).
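The relationship between the named modes and their nominal signaling rates (the figures appear in the version history below) can be sketched in Python as follows; the negotiation helper is purely illustrative and not part of any USB specification:

```python
# Illustrative only: nominal signaling rates (bits per second) for the five
# transfer modes named above, with a toy "pick the fastest mode both ends
# support" helper; real link negotiation is considerably more involved.
NOMINAL_RATES_BPS = {
    "Low Speed":   1_500_000,       # USB 1.0
    "Full Speed":  12_000_000,      # USB 1.0/1.1
    "High Speed":  480_000_000,     # USB 2.0
    "SuperSpeed":  5_000_000_000,   # USB 3.0
    "SuperSpeed+": 10_000_000_000,  # USB 3.1
}

def fastest_common_mode(host_modes, device_modes):
    """Return the fastest mode implemented by both ends, or None."""
    common = set(host_modes) & set(device_modes)
    return max(common, key=NOMINAL_RATES_BPS.get, default=None)

print(fastest_common_mode(
    {"Low Speed", "Full Speed", "High Speed"},    # e.g. a USB 2.0 host
    {"Full Speed", "High Speed", "SuperSpeed"}))  # e.g. a USB 3.0 device
# -> "High Speed"
```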
Unlike other data buses (e.g., Ethernet, HDMI), USB connections are directed, with both upstream and downstream ports emanating from a single host. This applies to electrical power, with only downstream-facing ports providing power; this topology was chosen to easily prevent electrical overloads and damage to equipment. Thus, USB cables have different ends: A and B, with different physical connectors for each. Therefore, in general, each different format requires four different connectors: a plug and receptacle for each of the A and B ends. USB cables have the plugs, and the corresponding receptacles are on the computers or electronic devices. In common practice, the A end is usually the standard format, and the B side varies over standard, mini, and micro. The mini and micro formats also provide for USB On-The-Go with a hermaphroditic AB receptacle, which accepts either an A or a B plug. On-The-Go allows USB between peers without discarding the directed topology by choosing the host at connection time; it also allows one receptacle to perform double duty in space-constrained applications.
There are cables with A plugs on both ends, which may be valid if the cable includes, for example, a USB host-to-host transfer device with 2 ports, but they could also be non-standard and erroneous and should be used carefully.
The micro format is the most durable in terms of designed insertion lifetime. The standard and mini connectors have a design lifetime of 1,500 insertion-removal cycles, while the improved Mini-B connectors increased this to 5,000. The micro connectors were designed with frequent charging of portable devices in mind, so they have a design life of 10,000 cycles and also place the flexible contacts, which wear out sooner, on the easily replaced cable, while the more durable rigid contacts are located in the receptacles. Likewise, the springy component of the retention mechanism, the parts that provide the required gripping force, was also moved into the plugs on the cable side.
History
thumb|left|alt=Large circle is left end of horizontal line. The line forks into three branches ending in circle, triangle and square symbols.|The basic USB trident logoIcon design recommendation for Identifying USB 2.0 Ports on PCs, Hosts and Hubs http://www.usb.org/developers/docs/icon_design.pdf
thumbnail|USB logo on the head of a standard A plug, the most common USB plug
A group of seven companies began the development of USB in 1994: Compaq, DEC, IBM, Intel, Microsoft, NEC, and Nortel. The goal was to make it fundamentally easier to connect external devices to PCs by replacing the multitude of connectors at the back of PCs, addressing the usability issues of existing interfaces, and simplifying software configuration of all devices connected to USB, as well as permitting greater data rates for external devices. A team including Ajay Bhatt worked on the standard at Intel; the first integrated circuits supporting USB were produced by Intel in 1995.
The original USB 1.0 specification, which was introduced in January 1996, defined data transfer rates of 1.5 Mbit/s Low Speed and 12 Mbit/s Full Speed. Microsoft Windows 95 OSR 2.1 provided OEM support for the devices. The first widely used version of USB was 1.1, which was released in September 1998. The 12 Mbit/s data rate was intended for higher-speed devices such as disk drives, and the lower 1.5 Mbit/s rate for low-data-rate devices such as joysticks. Apple Inc.'s iMac was the first mainstream product with USB, and the iMac's success popularized USB itself. Following Apple's design decision to remove all legacy ports from the iMac, many PC manufacturers began building legacy-free PCs, which led to the broader PC market using USB as a standard.
The USB 2.0 specification was released in April 2000 and was ratified by the USB Implementers Forum (USB-IF) at the end of 2001. Hewlett-Packard, Intel, Lucent Technologies (now Alcatel-Lucent), NEC, and Philips jointly led the initiative to develop a higher data transfer rate, with the resulting specification achieving 480 Mbit/s, a 40-times increase over the original USB 1.1 specification.
The USB 3.0 specification was published on 12 November 2008. Its main goals were to increase the data transfer rate (up to 5 Gbit/s), decrease power consumption, increase power output, and be backward compatible with USB 2.0. USB 3.0 includes a new, higher-speed bus called SuperSpeed that operates in parallel with the USB 2.0 bus. For this reason, the new version is also called SuperSpeed. The first USB 3.0-equipped devices were presented in January 2010.
Approximately 6 billion USB ports and interfaces were in use in the global marketplace, and about 2 billion were being sold each year.
In December 2014, USB-IF submitted USB 3.1, USB Power Delivery 2.0 and USB Type-C specifications to the IEC (TC 100 – Audio, video and multimedia systems and equipment) for inclusion in the international standard IEC 62680 Universal Serial Bus interfaces for data and power, which is currently based on USB 2.0.
Version history
Overview
Release name – Release date – Maximum transfer rate – Note
USB 0.8 – December 1994 – Prerelease
USB 0.9 – April 1995 – Prerelease
USB 0.99 – August 1995 – Prerelease
USB 1.0 Release Candidate – November 1995 – Prerelease
USB 1.0 – January 1996 – Low Speed (1.5 Mbit/s)
USB 1.1 – August 1998 – Full Speed (12 Mbit/s)
USB 2.0 – April 2000 – High Speed (480 Mbit/s)
USB 3.0 – November 2008 – SuperSpeed (5 Gbit/s) – Also referred to as USB 3.1 Gen 1 by the USB 3.1 standard (http://www.usb.org/developers/ssusb/USB_3_1_Language_Product_and_Packaging_Guidelines_FINAL.pdf)
USB 3.1 – July 2013 – SuperSpeed+ (10 Gbit/s) – Also referred to as USB 3.1 Gen 2 by the USB 3.1 standard
Power related specifications
Release name – Release date – Max. power – Note
USB Battery Charging 1.0 – 2007-03-08 – 5 V, 1.5 A
USB Battery Charging 1.1 – 2009-04-15 – 5 V, 1.5 A
USB Battery Charging 1.2 – 2010-12-07 – 5 V, 5 A
USB Power Delivery revision 1.0 (version 1.0) – 2012-07-05 – 20 V, 5 A – Using FSK protocol over bus power (VBUS)
USB Power Delivery revision 1.0 (version 1.3) – 2014-03-11 – 20 V, 5 A
USB Type-C 1.0 – 2014-08-11 – 5 V, 3 A – New connector and cable specification
USB Power Delivery revision 2.0 (version 1.0) – 2014-08-11 – 20 V, 5 A – Using BMC protocol over communication channel (CC) on Type-C cables
USB Type-C 1.1 – 2015-04-03 – 5 V, 3 A
USB Power Delivery revision 2.0 (version 1.1) – 2015-05-07 – 20 V, 5 A
USB Power Delivery revision 2.0 (version 1.2) – 2016-03-25 – 20 V, 5 A
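The maximum power implied by each entry is simply voltage multiplied by current; a small illustrative calculation using values from the table above:

```python
# Maximum power implied by the limits in the table above (watts = volts x amperes).
specs = {
    "USB Battery Charging 1.0 (5 V, 1.5 A)": (5.0, 1.5),
    "USB Battery Charging 1.2 (5 V, 5 A)":   (5.0, 5.0),
    "USB Type-C 1.0 (5 V, 3 A)":             (5.0, 3.0),
    "USB Power Delivery 2.0 (20 V, 5 A)":    (20.0, 5.0),
}
for name, (volts, amps) in specs.items():
    print(f"{name}: {volts * amps:.1f} W")
# USB Power Delivery tops out at 20 V x 5 A = 100 W.
```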
USB 1.x
Released in January 1996, USB 1.0 specified a data rate of 1.5 Mbit/s (Low Bandwidth or Low Speed). It did not allow for extension cables or pass-through monitors, due to timing and power limitations. Few USB devices made it to the market until USB 1.1 was released in August 1998, which introduced the 12 Mbit/s Full Speed rate. USB 1.1 was the earliest revision that was widely adopted and led to legacy-free PCs.
Neither USB 1.0 nor 1.1 specified a design for any connector smaller than the standard type A or type B. Though many designs for a miniaturised type B connector appeared on many peripheral devices, conformance to the USB 1.x standard was fudged by treating peripherals that had miniature connectors as though they had a tethered connection (that is: no plug or socket at the peripheral end). There was no known miniature type A connector until USB 2.0 (rev 1.01) introduced one.
USB 2.0
thumb|The Hi-Speed USB Logo
thumb|A USB 2.0 PCI expansion card
USB 2.0 was released in April 2000, adding a higher maximum signaling rate of 480 Mbit/s (High Speed or High Bandwidth), in addition to the USB 1.x Full Speed signaling rate of 12 Mbit/s. Due to bus access constraints, the effective throughput of the High Speed signaling rate is limited to 280 Mbit/s or 35 MB/s.
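A quick conversion of the figures above (illustrative arithmetic only):

```python
# Converting the High Speed figures quoted above.
signaling_mbit = 480   # nominal signaling rate
effective_mbit = 280   # effective throughput after bus-access overhead
print(effective_mbit / 8)                         # 35.0 MB/s
print(round(effective_mbit / signaling_mbit, 2))  # ~0.58 of the signaling rate
```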
Further modifications to the USB specification have been made via Engineering Change Notices (ECN). The most important of these ECNs are included into the USB 2.0 specification package available from USB.org:
Mini-A and Mini-B Connector ECN: Released in October 2000. Specifies the Mini-A and Mini-B plug and receptacle, as well as a receptacle that accepts both plugs, for On-The-Go. These should not be confused with the Micro-B plug and receptacle.
Pull-up/Pull-down Resistors ECN: Released in May 2002
Interface Associations ECN: Released in May 2003. A new standard descriptor was added that allows associating multiple interfaces with a single device function.
Rounded Chamfer ECN: Released in October 2003. A recommended, backward-compatible change to Mini-B plugs that results in longer-lasting connectors.
Unicode ECN: Released in February 2005. This ECN specifies that strings are encoded using UTF-16LE. USB 2.0 specified Unicode, but did not specify the encoding.
Inter-Chip USB Supplement: Released in March 2006
On-The-Go Supplement 1.3: Released in December 2006. USB On-The-Go makes it possible for two USB devices to communicate with each other without requiring a separate USB host. In practice, one of the USB devices acts as a host for the other device.
Battery Charging Specification 1.1: Released in March 2007 and updated on 15 April 2009. Adds support for dedicated chargers (power supplies with USB connectors), host chargers (USB hosts that can act as chargers) and the No Dead Battery provision, which allows devices to temporarily draw 100 mA of current after they have been attached. If a USB device is connected to a dedicated charger, the maximum current drawn by the device may be as high as 1.8 A. (Note that this document is not distributed with the USB 2.0 specification package, only with USB 3.0 and USB On-The-Go.)
Micro-USB Cables and Connectors Specification 1.01: Released in April 2007.
Link Power Management Addendum ECN: Released in July 2007. This adds sleep, a new power state between the enabled and suspended states. A device in this state is not required to reduce its power consumption. However, switching between the enabled and sleep states is much faster than switching between the enabled and suspended states, which allows devices to sleep while idle.
Battery Charging Specification 1.2: Released in December 2010. Several changes and increased limits, including allowing 1.5 A on charging ports for unconfigured devices, allowing High Speed communication while drawing a current of up to 1.5 A, and allowing a maximum current of 5 A.
USB 3.0
thumb|The SuperSpeed USB logo.
right|thumb|USB 3, microphone, headphone, and USB 2 jacks
The USB 3.0 specification was released on 12 November 2008, with its management transferring from USB 3.0 Promoter Group to the USB Implementers Forum (USB-IF), and announced on 17 November 2008 at the SuperSpeed USB Developers Conference.
USB 3.0 defines a new SuperSpeed transfer mode, with associated new backwards-compatible plugs, receptacles, and cables. SuperSpeed plugs and receptacles are identified with a distinct logo and blue inserts in standard format receptacles.
The new SuperSpeed mode provides a data signaling rate of 5.0 Gbit/s. However, due to the overhead incurred by 8b/10b encoding, the payload throughput is actually 4 Gbit/s, and the specification considers it reasonable to achieve only around 3.2 Gbit/s (0.4 GB/s or 400 MB/s). However, this should increase with future hardware advances. Communication is full-duplex in SuperSpeed transfer mode; earlier modes are half-duplex, arbitrated by the host.
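The 4 Gbit/s payload figure follows directly from the 8b/10b line coding, which transmits 10 bits on the wire for every 8 bits of payload; an illustrative calculation:

```python
# The SuperSpeed payload ceiling follows from 8b/10b coding: 10 line bits
# carry 8 payload bits. Values are taken from the paragraph above.
signaling_gbit = 5.0
payload_gbit = signaling_gbit * 8 / 10
print(payload_gbit)              # 4.0 Gbit/s of payload capacity
print(payload_gbit * 1000 / 8)   # 500 MB/s ceiling; the specification regards
                                 # about 400 MB/s (3.2 Gbit/s) as realistic
```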
Low-power and high-power devices remain operational with this standard, but devices using SuperSpeed can take advantage of increased available current of between 150 mA and 900 mA, respectively. Additionally, there is a Battery Charging Specification (Version 1.2 – December 2010), which increases the power handling capability to 1.5 A but does not allow concurrent data transmission. The Battery Charging Specification requires that the physical ports themselves be capable of handling 5 A of current but limits the maximum current drawn to 1.5 A.
USB 3.1
A January 2013 press release from the USB group revealed plans to update USB 3.0 to 10 Gbit/s. The group ended up creating a new USB specification, USB 3.1, which was released on 31 July 2013, replacing the USB 3.0 standard. The USB 3.1 specification takes over the existing USB 3.0's SuperSpeed USB transfer rate, also referred to as USB 3.1 Gen 1, and introduces a faster transfer rate called SuperSpeed USB 10 Gbps, also referred to as USB 3.1 Gen 2, putting it on par with a single first-generation Thunderbolt channel. The new mode's logo features a caption stylized as SUPERSPEED+. The USB 3.1 standard increases the data signaling rate to 10 Gbit/s, double that of SuperSpeed USB, and reduces line encoding overhead to just 3% by changing the encoding scheme to 128b/132b. The first USB 3.1 implementation demonstrated transfer speeds of 7.2 Gbit/s.
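The reduction in encoding overhead comes from the change of line code; the following illustrative comparison shows why 128b/132b costs only about 3% where 8b/10b costs 20%:

```python
# Comparing line-code overhead: 8b/10b (USB 3.0) versus 128b/132b (USB 3.1).
overhead_8b10b    = 1 - 8 / 10       # 20% of the signaling rate lost to coding
overhead_128b132b = 1 - 128 / 132    # roughly 3%
print(round(overhead_8b10b * 100), round(overhead_128b132b * 100, 1))  # 20 3.0
print(round(10 * 128 / 132, 2))  # ~9.7 Gbit/s of payload at 10 Gbit/s signaling
```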
The USB 3.1 standard is backward compatible with USB 3.0 and USB 2.0.
System design
The design architecture of USB is asymmetrical in its topology, consisting of a host, a multitude of downstream USB ports, and multiple peripheral devices connected in a tiered-star topology. Additional USB hubs may be included in the tiers, allowing branching into a tree structure with up to five tier levels. A USB host may implement multiple host controllers, and each host controller may provide one or more USB ports. Up to 127 devices, including hub devices if present, may be connected to a single host controller. USB devices are linked in series through hubs. One hub, built into the host controller, is the root hub.
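A minimal sketch of the two limits mentioned above, the 127-device limit per host controller and the five-tier hub depth; the validation helper is invented for illustration:

```python
# A minimal sketch of the two limits above; the validation helper is invented
# for illustration and is not part of any USB specification.
MAX_DEVICES_PER_HOST_CONTROLLER = 127   # including hubs
MAX_HUB_TIERS = 5                       # tree depth of additional hubs

def layout_is_valid(num_devices: int, hub_depth: int) -> bool:
    """Check a proposed bus layout against the device-count and tier limits."""
    return (num_devices <= MAX_DEVICES_PER_HOST_CONTROLLER
            and hub_depth <= MAX_HUB_TIERS)

print(layout_is_valid(num_devices=100, hub_depth=4))  # True
print(layout_is_valid(num_devices=130, hub_depth=3))  # False: too many devices
```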
A physical USB device may consist of several logical sub-devices that are referred to as device functions. A single device may provide several functions, for example, a webcam (video device function) with a built-in microphone (audio device function). This kind of device is called a composite device. An alternative to this is a compound device, in which the host assigns each logical device a distinctive address and all logical devices connect to a built-in hub that connects to the physical USB cable.
thumb|alt=Diagram: inside a device are several endpoints, each of which is connected by a logical pipes to a host controller. Data in each pipe flows in one direction, although there are a mixture going to and from the host controller.|USB endpoints actually reside on the connected device: the channels to the host are referred to as pipes
USB device communication is based on pipes (logical channels). A pipe is a connection from the host controller to a logical entity, found on a device, and named an endpoint. Because pipes correspond one-to-one to endpoints, the terms are sometimes used interchangeably. A USB device could have up to 32 endpoints (16 IN, 16 OUT), though it is rare to have so many. An endpoint is defined and numbered by the device during initialization (the period after physical connection called "enumeration") and so is relatively permanent, whereas a pipe may be opened and closed.
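The "16 IN, 16 OUT" figure arises from a 4-bit endpoint number combined with a direction flag. The following sketch packs the number into the low bits and the IN direction into the top bit of an address byte, following the usual endpoint-address convention; it is shown for illustration, not as a normative definition:

```python
# How "16 IN, 16 OUT" arises: a 4-bit endpoint number (0-15) combined with a
# direction flag (IN = device-to-host).
def endpoint_address(number: int, direction_in: bool) -> int:
    assert 0 <= number <= 0x0F, "endpoint numbers are 4 bits wide"
    return number | (0x80 if direction_in else 0x00)

print(hex(endpoint_address(1, direction_in=True)))   # 0x81: endpoint 1, IN
print(hex(endpoint_address(1, direction_in=False)))  # 0x1:  endpoint 1, OUT
# 16 numbers x 2 directions = 32 endpoints, with endpoint 0 conventionally
# reserved for the default control pipe.
```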
There are two types of pipe: stream and message. A message pipe is bi-directional and is used for control transfers. Message pipes are typically used for short, simple commands to the device, and a status response, used, for example, by the bus control pipe number 0. A stream pipe is a uni-directional pipe connected to a uni-directional endpoint that transfers data using an isochronous, interrupt, or bulk transfer:
Isochronous transfers At some guaranteed data rate (often, but not necessarily, as fast as possible) but with possible data loss (e.g., realtime audio or video)
Interrupt transfers Devices that need guaranteed quick responses (bounded latency) (e.g., pointing devices and keyboards)
Bulk transfers Large sporadic transfers using all remaining available bandwidth, but with no guarantees on bandwidth or latency (e.g., file transfers)
An endpoint of a pipe is addressable with a tuple (device_address, endpoint_number) as specified in a TOKEN packet that the host sends when it wants to start a data transfer session. If the direction of the data transfer is from the host to the endpoint, an OUT packet (a specialization of a TOKEN packet) having the desired device address and endpoint number is sent by the host. If the direction of the data transfer is from the device to the host, the host sends an IN packet instead. If the destination endpoint is a uni-directional endpoint whose manufacturer's designated direction does not match the TOKEN packet (e.g. the manufacturer's designated direction is IN while the TOKEN packet is an OUT packet), the TOKEN packet is ignored. Otherwise, it is accepted and the data transaction can start. A bi-directional endpoint, on the other hand, accepts both IN and OUT packets.
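The direction check described above can be modeled in a few lines; this is a toy model of the addressing and acceptance logic, not the wire protocol, and the endpoint table is invented:

```python
# A toy model of the acceptance rule above: a transfer is addressed by
# (device_address, endpoint_number) plus a token direction; a uni-directional
# endpoint ignores a mismatched token, while a bi-directional endpoint accepts both.
endpoints = {
    # (device_address, endpoint_number): designated direction
    (5, 0): "BOTH",   # default control endpoint
    (5, 1): "IN",     # e.g. an interrupt IN endpoint
    (5, 2): "OUT",    # e.g. a bulk OUT endpoint
}

def accepts_token(direction: str, device_address: int, endpoint_number: int) -> bool:
    designated = endpoints.get((device_address, endpoint_number))
    return designated is not None and designated in (direction, "BOTH")

print(accepts_token("IN", 5, 1))   # True: directions match, the transfer can start
print(accepts_token("OUT", 5, 1))  # False: the token is ignored
```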
thumb|alt=Rectangular opening where the width is twice the height. The opening has a metal rim, and within the opening a flat rectangular bar runs parallel to the top side.|Two USB 3.0 standard A sockets (left) and two USB 2.0 sockets (right) on a computer's front panel
Endpoints are grouped into interfaces and each interface is associated with a single device function. An exception to this is endpoint zero, which is used for device configuration and is not associated with any interface. A single device function composed of independently controlled interfaces is called a composite device. A composite device only has a single device address because the host only assigns a device address to a function.
When a USB device is first connected to a USB host, the USB device enumeration process is started. The enumeration starts by sending a reset signal to the USB device. The data rate of the USB device is determined during the reset signaling. After reset, the USB device's information is read by the host and the device is assigned a unique 7-bit address. If the device is supported by the host, the device drivers needed for communicating with the device are loaded and the device is set to a configured state. If the USB host is restarted, the enumeration process is repeated for all connected devices.
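A highly simplified, self-contained sketch of this enumeration sequence from the host's point of view follows; every class, name and value here is invented for illustration, and a real host-controller driver is considerably more involved:

```python
# Toy enumeration sketch: reset, address assignment, driver matching, configuration.
import itertools

_addresses = itertools.count(1)                         # unique 7-bit addresses, 1-127
KNOWN_DRIVERS = {0x08: "usb-storage", 0x03: "usbhid"}   # class code -> driver name

class FakeDevice:
    device_class = 0x08                                 # pretends to be mass storage
    def reset(self):          print("bus reset; data rate determined")
    def set_address(self, a): print(f"device now answers at address {a}")
    def configure(self):      print("device set to configured state")

def enumerate_device(dev):
    dev.reset()                                   # reset also determines the data rate
    address = next(_addresses)
    if address > 127:
        raise RuntimeError("7-bit address space exhausted")
    dev.set_address(address)                      # assign the unique address
    driver = KNOWN_DRIVERS.get(dev.device_class)  # is the device supported?
    if driver is None:
        return None
    dev.configure()                               # move to the configured state
    return driver

print(enumerate_device(FakeDevice()))             # -> "usb-storage"
```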
The host controller directs traffic flow to devices, so no USB device can transfer any data on the bus without an explicit request from the host controller. In USB 2.0, the host controller polls the bus for traffic, usually in a round-robin fashion. The throughput of each USB port is determined by the slower speed of either the USB port or the USB device connected to the port.
High-speed USB 2.0 hubs contain devices called transaction translators that convert between high-speed USB 2.0 buses and full and low speed buses. When a high-speed USB 2.0 hub is plugged into a high-speed USB host or hub, it operates in high-speed mode. The USB hub then uses either one transaction translator per hub to create a full/low-speed bus routed to all full and low speed devices on the hub, or uses one transaction translator per port to create an isolated full/low-speed bus per port on the hub.
Because there are two separate controllers in each USB 3.0 host, USB 3.0 devices transmit and receive at USB 3.0 data rates regardless of USB 2.0 or earlier devices connected to that host. Operating data rates for earlier devices are set in the legacy manner.
Device classes
The functionality of USB devices is defined by class codes, communicated to the USB host to affect the loading of suitable software driver modules for each connected device. This provides for adaptability and device independence of the host to support new devices from different manufacturers.
Device classes include:
Class – Usage – Description – Examples, or exception
00h – Device – Unspecified (class information should be determined from the interface descriptors) – Device class is unspecified; interface descriptors are used to determine the needed drivers
01h – Interface – Audio – Speaker, microphone, sound card, MIDI
02h – Both – Communications and CDC Control – Modem, Ethernet adapter, Wi-Fi adapter, RS232 serial adapter; used together with class 0Ah (below)
03h – Interface – Human interface device (HID) – Keyboard, mouse, joystick
05h – Interface – Physical Interface Device (PID) – Force feedback joystick
06h – Interface – Image (PTP/MTP) – Webcam, scanner
07h – Interface – Printer – Laser printer, inkjet printer, CNC machine
08h – Interface – Mass storage (MSC or UMS) – USB flash drive, memory card reader, digital audio player, digital camera, external drive
09h – Device – USB hub – Full bandwidth hub
0Ah – Interface – CDC-Data – Used together with class 02h (above)
0Bh – Interface – Smart Card – USB smart card reader
0Dh – Interface – Content security – Fingerprint reader
0Eh – Interface – Video – Webcam
0Fh – Interface – Personal healthcare device class (PHDC) – Pulse monitor (watch)
10h – Interface – Audio/Video (AV) – Webcam, TV
11h – Device – Billboard – Describes USB Type-C alternate modes supported by the device
DCh – Both – Diagnostic Device – USB compliance testing device
E0h – Interface – Wireless Controller – Bluetooth adapter, Microsoft RNDIS
EFh – Both – Miscellaneous – ActiveSync device
FEh – Interface – Application-specific – IrDA bridge, Test & Measurement Class (USBTMC), USB DFU (Device Firmware Upgrade)
FFh – Both – Vendor-specific – Indicates that a device needs vendor-specific drivers
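In host software, class codes are typically used as simple lookup keys when choosing a driver; an illustrative lookup over a few of the codes from the table above:

```python
# Illustrative lookup over a few of the class codes in the table above; the
# full list and its semantics are defined by the USB-IF class specifications.
USB_CLASS_NAMES = {
    0x00: "Unspecified (use interface descriptors)",
    0x01: "Audio",
    0x03: "Human interface device (HID)",
    0x08: "Mass storage",
    0x09: "USB hub",
    0x0E: "Video",
    0xE0: "Wireless controller",
    0xFF: "Vendor-specific",
}

def describe_class(code: int) -> str:
    return USB_CLASS_NAMES.get(code, f"Unknown class 0x{code:02X}")

print(describe_class(0x08))  # Mass storage
print(describe_class(0x42))  # Unknown class 0x42
```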
USB mass storage / USB drive
thumb|right|A flash drive, a typical USB mass-storage device
thumb|right|Circuit board from a USB 3.0 external 2.5-inch SATA HDD enclosure
USB implements connections to storage devices using a set of standards called the USB mass storage device class (MSC or UMS). This was at first intended for traditional magnetic and optical drives and has been extended to support flash drives. It has also been extended to support a wide variety of novel devices as many systems can be controlled with the familiar metaphor of file manipulation within directories. The process of making a novel device look like a familiar device is also known as extension. The ability to boot a write-locked SD card with a USB adapter is particularly advantageous for maintaining the integrity and non-corruptible, pristine state of the booting medium.
Though most computers since mid-2004 can boot from USB mass storage devices, USB is not intended as a primary bus for a computer's internal storage. Buses such as Parallel ATA (PATA or IDE), Serial ATA (SATA), or SCSI fulfill that role in PC-class computers. However, USB has one important advantage: it is possible to install and remove devices without rebooting the computer (hot-swapping), making it useful for mobile peripherals, including drives of various kinds (given that SATA or SCSI devices may or may not support hot-swapping).
Originally conceived and still used today for optical storage devices (CD-RW drives, DVD drives, etc.), external USB storage is also offered by several manufacturers in the form of portable USB hard disk drives, or empty enclosures for disk drives. These offer performance comparable to internal drives, limited by the current number and types of attached USB devices, and by the upper limit of the USB interface (in practice about 30 MB/s for USB 2.0 and potentially 400 MB/s or moreUniversal Serial Bus 3.0 Specification,4.4.11 "Efficiency" for USB 3.0). These external drives typically include a "translating device" that bridges between a drive's interface and a USB interface port. Functionally, the drive appears to the user much like an internal drive. Other competing standards for external drive connectivity include eSATA, ExpressCard, FireWire (IEEE 1394), and most recently Thunderbolt.
Another use for USB mass storage devices is the portable execution of software applications (such as web browsers and VoIP clients) with no need to install them on the host computer.
Media Transfer Protocol
Media Transfer Protocol (MTP) was designed by Microsoft to give higher-level access to a device's filesystem than USB mass storage, at the level of files rather than disk blocks. It also has optional DRM features. MTP was designed for use with portable media players, but it has since been adopted as the primary storage access protocol of the Android operating system from the version 4.1 Jelly Bean as well as Windows Phone 8 (Windows Phone 7 devices had used the Zune protocol which was an evolution of MTP). The primary reason for this is that MTP does not require exclusive access to the storage device the way UMS does, alleviating potential problems should an Android program request the storage while it is attached to a computer. The main drawback is that MTP is not as well supported outside of Windows operating systems.
Human interface devices
Joysticks, keypads, tablets and other human-interface devices (HIDs) are also progressively migrating from MIDI and PC game port connectors to USB.
USB mice and keyboards can usually be used with older computers that have PS/2 connectors with the aid of a small USB-to-PS/2 adapter. For mice and keyboards with dual-protocol support, an adaptor that contains no logic circuitry may be used: the hardware in the USB keyboard or mouse is designed to detect whether it is connected to a USB or PS/2 port, and communicate using the appropriate protocol. Converters also exist that connect PS/2 keyboards and mice (usually one of each) to a USB port. These devices present two HID endpoints to the system and use a microcontroller to perform bidirectional data translation between the two standards.
Device Firmware Upgrade
Device Firmware Upgrade (DFU) is a vendor- and device-independent mechanism for upgrading the firmware of USB devices with improved versions provided by their manufacturers, offering (for example) a way for firmware bugfixes to be deployed. During the firmware upgrade operation, USB devices change their operating mode effectively becoming a PROM programmer. Any class of USB device can implement this capability by following the official DFU specifications.
In addition to its intended legitimate purposes, DFU can also be exploited by uploading maliciously crafted firmware that causes USB devices to spoof various other device types; one such exploitation approach is known as BadUSB.
Connectors
Connector properties
thumb|Type-A plug and, as part of a non-standard cable, receptacle
The connectors the USB committee specifies support a number of USB's underlying goals, and reflect lessons learned from the many connectors the computer industry has used.
Receptacles and plugs
The connector mounted on the host or device is called the receptacle, and the connector attached to the cable is called the plug. The official USB specification documents also periodically define the term male to represent the plug, and female to represent the receptacle.
Usability and orientation
thumb|upright|USB extension cable
By design, it is difficult to insert a USB plug into its receptacle incorrectly. The USB specification states that the required USB icon must be embossed on the "topside" of the USB plug, which "...provides easy user recognition and facilitates alignment during the mating process." The specification also shows that the "recommended" "Manufacturer's logo" ("engraved" on the diagram but not specified in the text) is on the opposite side of the USB icon. The specification further states, "The USB Icon is also located adjacent to each receptacle. Receptacles should be oriented to allow the icon on the plug to be visible during the mating process." However, the specification does not consider the height of the device compared to the eye level height of the user, so the side of the cable that is "visible" when mated to a computer on a desk can depend on whether the user is standing or kneeling.
While connector interfaces can be designed to allow plugging with either orientation, the original design omitted such functionality to decrease manufacturing costs. The reversible type-C plug is an addition to the USB 3.1 specification comparable in size to the Micro-B SuperSpeed connector.
Only moderate force is needed to insert or remove a USB cable. USB cables and small USB devices are held in place by the gripping force from the receptacle (without need of the screws, clips, or thumb-turns other connectors have required).
Power-use topology
The standard connectors were deliberately intended to enforce the directed topology of a USB network: type-A receptacles on host devices that supply power and type-B receptacles on target devices that draw power. This prevents users from accidentally connecting two USB power supplies to each other, which could lead to short circuits and dangerously high currents, circuit failures, or even fire. USB does not support cyclic networks and the standard connectors from incompatible USB devices are themselves incompatible.
However, some of this directed topology is lost with the advent of multi-purpose USB connections (such as USB On-The-Go in smartphones, and USB-powered Wi-Fi routers), which require A-to-A, B-to-B, and sometimes Y/splitter cables. See the USB On-The-Go connectors section below for a more detailed description.
Durability
The standard connectors were designed to be robust. Because USB is hot-pluggable, the connectors would be used more frequently, and perhaps with less care, than other connectors. Many previous connector designs were fragile, specifying embedded component pins or other delicate parts that were vulnerable to bending or breaking. The electrical contacts in a USB connector are protected by an adjacent plastic tongue, and the entire connecting assembly is usually protected by an enclosing metal sheath.
The connector construction always ensures that the external sheath on the plug makes contact with its counterpart in the receptacle before any of the four connectors within make electrical contact. The external metallic sheath is typically connected to system ground, thus dissipating damaging static charges. This enclosure design also provides a degree of protection from electromagnetic interference to the USB signal while it travels through the mated connector pair (the only location where the otherwise twisted data pair travels in parallel). In addition, because of the required sizes of the power and common connections, their contacts are made after the system ground but before the data connections. This type of staged make-break timing allows for electrically safe hot-swapping.
The newer Micro-USB receptacles are designed for a minimum rated lifetime of 10,000 cycles of insertion and removal between the receptacle and plug, compared to 1,500 for the standard USB and 5,000 for the Mini-USB receptacle. To accomplish this, a locking device was added and the leaf-spring was moved from the jack to the plug, so that the most-stressed part is on the cable side of the connection. This change was made so that the connector on the less expensive cable would bear the most wear instead of the more expensive Micro-USB device. However, the idea that these changes actually made the connector more durable in real-world use has been widely disputed, with many contending that it is in fact much less durable.
Compatibility
The USB standard specifies relatively loose tolerances for compliant USB connectors to minimize physical incompatibilities in connectors from different vendors. To address a weakness present in some other connector standards, the USB specification also defines limits to the size of a connecting device in the area around its plug. This was done to prevent a device from blocking adjacent ports due to the size of the cable strain relief mechanism (usually molding integral with the cable outer insulation) at the connector. Compliant devices must either fit within the size restrictions or support a compliant extension cable that does.
In general, USB cables have only plugs on their ends, while hosts and devices have only receptacles. Hosts almost universally have Type-A receptacles, while devices have one or another Type-B variety. Type-A plugs mate only with Type-A receptacles, and the same applies to their Type-B counterparts; they are deliberately physically incompatible. However, an extension to the USB standard specification called USB On-The-Go (OTG) allows a single port to act as either a host or a device, which is selectable by the end of the cable that plugs into the receptacle on the OTG-enabled unit. Even after the cable is hooked up and the units are communicating, the two units may "swap" ends under program control. This capability is meant for units such as PDAs in which the USB link might connect to a PC's host port as a device in one instance, yet connect as a host itself to a keyboard and mouse device in another instance.
Connector types
thumb|400px|Various USB connectors along a centimeter ruler for scale.
There are several types of USB connector, including some that have been added as the specification progressed. The original USB specification detailed standard-A and standard-B plugs and receptacles; the B connector was necessary so that cables could carry a plug at each end while still preventing users from connecting one computer receptacle to another. The first engineering change notice to the USB 2.0 specification added Mini-B plugs and receptacles.
The data pins in the standard plugs are actually recessed in the plug compared to the outside power pins. This permits the power pins to connect first, preventing data errors by allowing the device to power up first and then establish the data connection. Also, some devices operate in different modes depending on whether the data connection is made.
To reliably enable a charge-only feature, modern USB accessory peripherals now include charging cables that provide power connections to the host port but no data connections, and both home and vehicle charging docks are available that supply power from a converter device and do not include a host device or data pins, allowing any capable USB device to charge or operate from a standard USB cable.
In a charge-only cable, the data wires (usually green and white) are shorted together at the device end; if they are instead left unconnected, the device will often reject the charger as unsuitable.
Standard connectors
thumb|Pin configuration of the type-A and type-B USB connectors, viewed from the mating end of plugs
The type-A plug has an elongated rectangular cross-section, inserts into a type-A receptacle on a downstream port on a USB host or hub, and carries both power and data. Captive cables on USB devices, such as keyboards or mice, will be terminated with a type-A plug.
The type-B plug has a near square cross-section with the top exterior corners beveled. As part of a removable cable, it inserts into an upstream port on a device, such as a printer. On some devices, the type-B receptacle has no data connections, being used solely for accepting power from the upstream device. This two-connector-type scheme (A/B) prevents a user from accidentally creating an electrical loop.
The spring contacts in the connectors eventually relax and wear out with repeated cycles of plugging and unplugging. The lifetime of a type-A plug is approximately 1,500 connect/disconnect cycles.
The maximum allowed cross-section of the overmold boot (which is part of the connector used for its handling) is 16 by 8 mm for the standard-A plug type, while for the type-B it is 11.5 by 10.5 mm.
Mini and micro connectors
For smaller devices such as digital cameras, smartphones, and tablet computers, various smaller connectors have been used – the USB-standard first introduced the Mini-USB connectors in April 2000, and then the Micro-USB connectors in January 2007.
Mini-USB connectors were introduced with USB 2.0 in April 2000 – however the Mini-A connector and the Mini-AB receptacle connector are deprecated (i.e. de-certified, but standardized) since May 2007. Mini-B connectors are still supported, but are not On-The-Go-compliant; the Mini-B USB connector was standard for transferring data to and from the early smartphones and PDAs. Both Mini-A and Mini-B plugs are approximately 3 by 7 mm.
Micro-USB connectors, which were announced by the USB-IF on 4 January 2007, have a similar width to Mini-USB, but approximately half the thickness, enabling their integration into thinner portable devices. The Micro-A connector is 6.85 by 1.8 mm with a maximum overmold boot size of 11.7 by 8.5 mm, while the Micro-B connector is 6.85 by 1.8 mm with a maximum overmold size of 10.6 by 8.5 mm.
The thinner Micro-USB connectors were introduced to replace the Mini connectors in devices manufactured since May 2007, including smartphones, personal digital assistants, and cameras. While some devices and cables still use the older Mini variant, the newer Micro connectors have been widely adopted and are now the most common.
The Micro plug design is rated for at least 10,000 connect-disconnect cycles, which is more than the Mini plug design. The Micro connector is also designed to reduce the mechanical wear on the device; instead the easier-to-replace cable is designed to bear the mechanical wear of connection and disconnection. The Universal Serial Bus Micro-USB Cables and Connectors Specification details the mechanical characteristics of Micro-A plugs, Micro-AB receptacles (which accept both Micro-A and Micro-B plugs), and Micro-B plugs and receptacles, along with a standard-A receptacle to Micro-A plug adapter.
thumb|Reversible micro-B plug connector fits any micro-B port
The cellular phone carrier group Open Mobile Terminal Platform (OMTP) endorsed Micro-USB as the standard connector for data and power on mobile devices in 2007. On 22 October 2009 the International Telecommunication Union (ITU) announced that it had embraced Micro-USB as the Universal Charging Solution, its "energy-efficient one-charger-fits-all new mobile phone solution," and added: "Based on the Micro-USB interface, UCS chargers also include a 4-star or higher efficiency rating — up to three times more energy-efficient than an unrated charger."
The European Standardisation Bodies CEN, CENELEC and ETSI (independent of the OMTP/GSMA proposal) defined a common external power supply (EPS) for use with smartphones sold in the EU based on Micro-USB. 14 of the world's largest mobile phone manufacturers signed the EU's common EPS Memorandum of Understanding (MoU). Apple, one of the original MoU signers, makes Micro-USB adapters available – as permitted in the Common EPS MoU – for its iPhones equipped with Apple's proprietary 30-pin dock connector or (later) Lightning connector.
A reversible Micro-B plug that can be inserted either way up into existing Micro-B sockets has been developed by Winner Gear and crowdfunded on Indiegogo; it adds no functional enhancement to USB. Many manufacturers also offer USB-A to reversible Micro-B cables, as well as USB On-The-Go (OTG) to reversible Micro-B cables.
USB 3.0 connectors and backward compatibility
thumb|right|USB 3.0 Micro-B SuperSpeed plug
USB 3.0 introduced Type-A SuperSpeed plugs and receptacles as well as micro-sized Type-B SuperSpeed plugs and receptacles. The 3.0 receptacles are backward-compatible with the corresponding pre-3.0 plugs.
USB 3.0 and USB 1.0 Type-A plugs and receptacles are designed to interoperate. To achieve USB 3.0's SuperSpeed (and SuperSpeed+ for USB 3.1 Gen 2), five extra pins are added to the unused area of the original 4-pin USB 1.0 design, making USB 3.0 Type-A plugs and receptacles backward compatible with those of USB 1.0.
alt=USB Micro-B USB 2.0 vs USB Micro-B SuperSpeed (USB 3.0)|thumb|288x288px|USB Micro-B USB 2.0 vs USB Micro-B SuperSpeed (USB 3.0)|left
On the device side, a modified Micro-B plug (Micro-B SuperSpeed) is used to carry the five extra pins required for the USB 3.0 features (a USB Type-C plug can also be used). The USB 3.0 Micro-B plug effectively consists of a standard USB 2.0 Micro-B plug with an additional 5-pin plug "stacked" to the side of it. In this way, cables with the smaller 5-pin USB 2.0 Micro-B plug can be plugged into devices with 10-contact USB 3.0 Micro-B receptacles, achieving backward compatibility.
USB cables exist with various combinations of plugs on each end of the cable, as displayed below in the USB cables matrix.
USB On-The-Go connectors
All current USB On-The-Go (OTG) devices are required to have one, and only one, USB connector: a Micro-AB receptacle. Non-OTG compliant devices are not allowed to use the Micro-AB receptacle, due to power supply shorting hazards on the VBUS line. The Micro-AB receptacle is capable of accepting both Micro-A and Micro-B plugs, attached to any of the legal cables and adapters as defined in revision 1.01 of the Micro-USB specification. Prior to the development of Micro-USB, USB On-The-Go devices were required to use Mini-AB receptacles to perform the equivalent job.
To enable Type-AB receptacles to distinguish which end of a cable is plugged in, mini and micro plugs have an "ID" pin in addition to the four contacts found in standard-size USB connectors. This ID pin is connected to GND in Type-A plugs, and left unconnected in Type-B plugs. Typically, a pull-up resistor in the device is used to detect the presence or absence of an ID connection.
The OTG device with the A-plug inserted is called the A-device and is responsible for powering the USB interface when required and by default assumes the role of host. The OTG device with the B-plug inserted is called the B-device and by default assumes the role of peripheral. An OTG device with no plug inserted defaults to acting as a B-device. If an application on the B-device requires the role of host, then the Host Negotiation Protocol (HNP) is used to temporarily transfer the host role to the B-device.
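The default role assignment and the HNP role swap described above can be summarized in a minimal Python sketch; the function and state names below are illustrative only and are not taken from the OTG specification.

```python
def initial_otg_role(id_pin_grounded):
    """Default roles at attach: the end whose Micro-A plug grounds the ID pin
    becomes the A-device (supplies VBUS, acts as host); the other end is the
    B-device (peripheral). With no plug inserted, a port behaves as a B-device."""
    if id_pin_grounded is None:        # no cable plugged in
        return "B-device (peripheral)"
    return "A-device (host)" if id_pin_grounded else "B-device (peripheral)"

def negotiate_host(current_host, b_device_wants_host):
    """Host Negotiation Protocol (HNP) in outline: the host role can be handed
    to the B-device temporarily without re-plugging the cable."""
    if current_host == "A-device" and b_device_wants_host:
        return "B-device"              # A-device relinquishes the host role
    return current_host
```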
OTG devices attached either to a peripheral-only B-device or a standard/embedded host have their role fixed by the cable, since in these scenarios it is only possible to attach the cable one way.
USB Type-C
thumb|The USB Type-C plug
thumb|USB Type C cable
Developed at roughly the same time as the USB 3.1 specification, but distinct from it, the USB Type-C Specification 1.0 was finalized in August 2014 and defines a new small reversible-plug connector for USB devices. The Type-C plug connects to both hosts and devices, replacing various Type-A and Type-B connectors and cables with a standard meant to be future-proof, similar to Apple Lightning and Thunderbolt. The 24-pin double-sided connector provides four power-ground pairs, two differential pairs for USB 2.0 data bus (though only one pair is implemented in a Type-C cable), four pairs for SuperSpeed data bus (only two pairs are used in USB 3.1 mode), two "sideband use" pins, VCONN +5 V power for active cables, and a configuration pin for cable orientation detection and dedicated biphase mark code (BMC) configuration data channel. Type-A and Type-B adaptors and cables are required for older devices to plug into Type-C hosts. Adapters and cables with a Type-C receptacle are not allowed.Universal Serial Bus Type-C Cable and Connector Specification Revision 1.1 (April 3, 2015), section 2.2, page 20
Full-featured USB 3.1 Type-C cables are electronically marked cables that contain a full set of wires and a chip with an ID function based on the configuration data channel and vendor-defined messages (VDMs) from the USB Power Delivery 2.0 specification. USB Type-C devices also support power currents of 1.5 A and 3.0 A over the 5 V power bus in addition to baseline 900 mA; devices can either negotiate increased USB current through the configuration line, or they can support the full Power Delivery specification using both BMC-coded configuration line and legacy BFSK-coded VBUS line.
Alternate Mode dedicates some of the physical wires in the Type-C cable for direct device-to-host transmission of alternate data protocols. The four high-speed lanes, two sideband pins, and (for dock, detachable device, and permanent cable applications only) two USB 2.0 pins and one configuration pin can be used for Alternate Mode transmission. The modes are configured using VDMs through the configuration channel.
Host and device interface receptacles
USB plugs fit only one type of receptacle, with notable exceptions for USB On-The-Go "AB" support and the general backward compatibility of USB 3.0, as shown in the matrices below.
+ USB connectors mating matrix (images not to scale)
+ USB cables matrix (plugs on each end)
Non-standard: existing for specific proprietary purposes, and in most cases not interoperable with USB-IF compliant equipment. In addition to the above cable assemblies comprising two plugs, an "adapter" cable with a Micro-A plug and a standard-A receptacle is compliant with USB specifications. Other combinations of connectors are not compliant. A-to-A assemblies referred to as cables do exist (such as the Easy Transfer Cable); however, these have a pair of USB devices in the middle, making them more than just cables.
Deprecated: some older devices and cables with Mini-A connectors have been certified by USB-IF. The Mini-A connector is obsolete: no new Mini-A connectors, and neither Mini-A nor Mini-AB receptacles, will be certified. Note that Mini-B is not deprecated, but it is less and less used since the arrival of Micro-B.
Pinouts
USB is a serial bus, using four shielded wires for the USB 2.0 variant: two for power (VBUS and GND) and two for differential data signals (labelled D+ and D− in pinouts). The Non-Return-to-Zero Inverted (NRZI) encoding scheme is used for transferring data, with a sync field to synchronize the host and receiver clocks. The D+ and D− signals are transmitted on a differential pair, providing half-duplex data transfers for USB 2.0. Mini and micro connectors have their GND connection moved from pin 4 to pin 5, while pin 4 serves as an ID pin for On-The-Go host/client identification.
USB 3.0 provides two additional differential pairs (four wires, SSTx+, SSTx−, SSRx+ and SSRx−), providing full-duplex data transfers at SuperSpeed, which makes it similar to Serial ATA or single-lane PCI Express.
thumb|left|Standard, Mini-, and Micro-USB plugs (not to scale). White areas are empty. The receptacles are pictured with USB logo to the top, looking into the open end; note this means the pin order is mirrored from plug to socket.
thumb|Micro-B SuperSpeed plug
+ Type-A and -B pinout
Pin | Name | Wire color | Description
1 | VBUS | Red or orange | +5 V
2 | D− | White or gold | Data−
3 | D+ | Green | Data+
4 | GND | Black or blue | Ground
+ Mini/Micro-A and -B pinout
Pin | Name | Wire color | Description
1 | VBUS | Red | +5 V
2 | D− | White | Data−
3 | D+ | Green | Data+
4 | ID | n/a | On-The-Go ID, distinguishes cable ends: the "A" plug (host) has ID connected to GND, the "B" plug (device) has ID not connected
5 | GND | Black | Signal ground
Proprietary connectors and formats
Manufacturers of personal electronic devices might not include a USB standard connector on their product for technical or marketing reasons. Some manufacturers provide proprietary cables that permit their devices to physically connect to a USB standard port. Full functionality of proprietary ports and cables with USB standard ports is not assured; for example, some devices only use the USB connection for battery charging and do not implement any data transfer functions.
Colors
+ Usual USB color-coding
Color | Description
Black or white | Type-A or type-B
Blue (Pantone 300C) | Type-A or type-B, SuperSpeed
Teal blue | Type-A or type-B, SuperSpeed+
Yellow, orange or red | Ports only; high-current or sleep-and-charge
USB ports and connectors are often color-coded to distinguish their different functions and USB versions. These colors are not part of the USB specification and can vary between manufacturers; for example, the USB 3.0 specification mandates appropriate color-coding while it only recommends blue inserts for standard-A USB 3.0 connectors and plugs.
thumb|An orange charge-only USB port on a front panel USB 3.0 switch with card reader.
thumb|A blue Standard-A USB connector on a Sagemcom F@ST 3864OP ADSL modem router without USB 3.0 contacts fitted.
Cabling
thumb|A USB twisted pair, where the Data+ and Data− conductors are twisted together in a double helix. The wires are enclosed in a further layer of shielding.
The D± signals used by low, full, and high speed are carried over a twisted pair (typically unshielded) to reduce noise and crosstalk. SuperSpeed uses separate transmit and receive differential pairs, which additionally require shielding (typically shielded twisted pair, but twinax is also mentioned by the specification). To support SuperSpeed data transmission, cables therefore contain twice as many wires and are larger in diameter.
The USB 1.1 standard specifies that a standard cable can have a maximum length of 3 meters with devices operating at full speed (12 Mbit/s), and a maximum length of 5 meters with devices operating at low speed (1.5 Mbit/s).
USB 2.0 provides for a maximum cable length of 5 meters for devices running at high speed (480 Mbit/s). The primary reason for this limit is the maximum allowed round-trip delay of about 1.5 μs. If USB host commands are unanswered by the USB device within the allowed time, the host considers the command lost. When the USB device's response time, the delays from the maximum number of hubs, and the delays from connecting cables are added up, the maximum acceptable delay per cable amounts to 26 ns. The USB 2.0 specification requires that cable delay be less than 5.2 ns per meter (about 192,000 km/s, which is close to the maximum achievable transmission speed for standard copper wire).
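The 5 m figure follows directly from the two numbers quoted above; a quick check, as plain arithmetic rather than specification text:

```python
# Per-cable delay budget and per-metre delay bound quoted above
max_cable_delay_ns = 26.0    # maximum acceptable delay per cable
delay_per_metre_ns = 5.2     # USB 2.0 requirement for cable propagation delay

print(max_cable_delay_ns / delay_per_metre_ns)   # 5.0 -> maximum cable length in metres
```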
The USB 3.0 standard does not directly specify a maximum cable length, requiring only that all cables meet an electrical specification; in practice, this limits the usable length of copper cabling with AWG 26 wires.
Power
+ USB power standards
Specification | Current | Voltage | Power
Low-power device | 100 mA | 5 V | 0.50 W
Low-power SuperSpeed (USB 3.0) device | 150 mA | 5 V | 0.75 W
High-power device | 500 mA | 5 V | 2.5 W
High-power SuperSpeed (USB 3.0) device | 900 mA | 5 V | 4.5 W
Battery Charging (BC) 1.2 | 5 A | 5 V | 25 W
Type-C | 1.5 A | 5 V | 7.5 W
Type-C | 3 A | 5 V | 15 W
Power Delivery micro-format | 3 A | 20 V | 60 W
Power Delivery standard format or Type-C | 5 A | 20 V | 100 W
thumb|right|Y-shaped USB 3.0 cable; with such a cable, a device can draw power from two USB ports simultaneously
USB supplies bus power across VBUS and GND at a nominal voltage of 5 V ± 5%, at the supply, to power USB devices. Power is sourced solely from upstream devices or hosts and is consumed solely by downstream devices. USB provides for various voltage drops and losses in providing bus power; as such, both USB 2.0 and USB 3.0 specify a minimum voltage at the hub port somewhat below the nominal 5 V. It is specified that devices' configuration and low-power functions must operate down to 4.40 V at the hub port by USB 2.0, and that devices' configuration, low-power, and high-power functions must operate down to 4.00 V at the device port by USB 3.0.
There are limits on the power a device may draw, stated in terms of a unit load, which is 100 mA (or 150 mA for SuperSpeed devices). There are low-power and high-power devices. Low-power devices may draw at most 1 unit load, and all devices must act as low-power devices when they start out unconfigured. High-power devices draw at least 1 unit load and at most 5 unit loads (500 mA), or 6 unit loads (900 mA) for SuperSpeed devices. A high-powered device must be configured, and may only draw as much power as specified in its configuration; that is, the maximum power may not always be available.
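A minimal sketch of these unit-load rules, using helper names of my own rather than anything from the specification:

```python
UNIT_LOAD_MA = {"usb2": 100, "usb3": 150}   # one unit load, per the text above
MAX_UNIT_LOADS = {"usb2": 5, "usb3": 6}     # high-power limit: 500 mA or 900 mA

def max_draw_ma(bus="usb2", configured_high_power=False):
    """Most current a device may draw: one unit load until it is configured as
    a high-power device, then up to 5 (USB 2.0) or 6 (SuperSpeed) unit loads."""
    unit = UNIT_LOAD_MA[bus]
    if not configured_high_power:
        return unit                     # unconfigured or low-power device
    return MAX_UNIT_LOADS[bus] * unit   # 500 mA (USB 2.0) or 900 mA (USB 3.0)

print(max_draw_ma("usb2", configured_high_power=True))  # 500
print(max_draw_ma("usb3", configured_high_power=True))  # 900
```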
A bus-powered hub is a high-power device providing low-power ports. It draws 1 unit load for the hub controller and 1 unit load for each of at most 4 ports. The hub may also have some non-removable functions in place of ports. A self-powered hub is a device that provides high-power ports. Optionally, the hub controller may draw power for its operation as a low-power device, but all high-power ports draw from the hub's self-power.
Where devices (for example, high-speed disk drives) require more power than a high-power device can draw, they function erratically, if at all, from bus power of a single port. USB provides for these devices as being self-powered. However, such devices may come with a Y-shaped cable that has 2 USB plugs (1 for power and data, the other for only power), so as to draw power as 2 devices. Such a cable is non-standard, with the USB compliance specification stating that "use of a 'Y' cable (a cable with two A-plugs) is prohibited on any USB peripheral", meaning that "if a USB peripheral requires more power than allowed by the USB specification to which it is designed, then it must be self-powered."
USB Battery Charging
thumb|right|A small device that provides voltage and current readouts for devices charged over USB
thumb|right|This USB power meter additionally provides a charge readout (in mAh) and data logging
USB Battery Charging defines a new port type, the charging port, as opposed to the standard downstream port (SDP) of the base specification. Charging ports are divided into 2 further types: the charging downstream port (CDP), which has data signals, and the dedicated charging port (DCP), which does not. Dedicated charging ports can be found on USB power adapters that convert utility power or another power source (e.g., a car's electrical system) to run attached devices and battery packs. On a host (such as a laptop computer) with both standard and charging USB ports, the charging ports should be labeled as such.
The charging device identifies the type of port through non-data signalling on the D+ and D− signals immediately after attach. A DCP simply has to place a resistance not exceeding 200 Ω across the D+ and D− signals.Section 1.4.5, pg. 2; and Table 5-3 "Resistances", pg. 45
Per the base specification, any device attached to an SDP must initially be a low-power device, with high-power mode contingent on later USB configuration by the host. Charging ports, however, can supply at least 1.5 A immediately after attachment. More current may be supplied, up to a maximum of 5 A, but the charging port may apply current limiting or even shut down. The maximum current is determined by the over-current protection maximum current in the baseline specification; note that USB connectors are only specified to be tested to a contact current rating of at least 1.5 A.
These bus power currents are much higher than the cables were originally designed for. Though not unsafe, they cause a larger voltage difference between the two ends of the ground wire, significantly reducing noise margins and causing problems with High Speed signalling. Battery Charging 1.1 specifies that charging devices must dynamically limit bus power current draw during High Speed signalling; 1.2 simply specifies that charging devices and ports must be designed to tolerate the higher ground voltage difference in High Speed signalling.
Revision 1.2 of the specification was released in 2010. It made several changes and increased limits, including allowing 1.5 A on charging downstream ports for unconfigured devices, allowing High Speed communication while drawing a current of up to 1.5 A, and allowing a maximum current of 5 A. It also removed support for charging port detection via resistive mechanisms.
Before the battery charging specification was defined, there was no standardized way for the portable device to inquire how much current was available. For example, Apple's iPod and iPhone chargers indicate the available current by voltages on the D− and D+ lines. When D+ = D− = 2.0 V, the device may pull up to 500 mA. When D+ = 2.0 V and D− = 2.8 V, the device may pull up to 1 A of current. When D+ = 2.8 V and D− = 2.0 V, the device may pull up to 2 A of current.
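The following sketch combines the DCP detection rule mentioned earlier (a resistance of at most 200 Ω across D+ and D−) with the Apple-style voltage signalling just described. The voltage tolerances used here are illustrative assumptions, not values from either scheme.

```python
def classify_charging_port(d_plus_v=None, d_minus_v=None, dp_dm_short_ohm=None):
    """Very rough pre-enumeration charger classification, per the text above."""
    def near(value, target, tol=0.2):            # tolerance is an assumption
        return value is not None and abs(value - target) <= tol

    if dp_dm_short_ohm is not None and dp_dm_short_ohm <= 200:
        return "BC dedicated charging port (DCP)"
    if near(d_plus_v, 2.0) and near(d_minus_v, 2.0):
        return "Apple-style charger, up to 500 mA"
    if near(d_plus_v, 2.0) and near(d_minus_v, 2.8):
        return "Apple-style charger, up to 1 A"
    if near(d_plus_v, 2.8) and near(d_minus_v, 2.0):
        return "Apple-style charger, up to 2 A"
    return "unknown; enumerate and negotiate current normally"
```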
Accessory charging adaptors (ACA)
A portable device with an On-The-Go port may need to charge and access a USB peripheral at the same time, but having only a single port (a consequence both of the On-The-Go requirements and of space constraints) prevents this. Accessory charging adapters (ACA) are devices that allow charging power to be injected into an On-The-Go connection between host and peripheral.
ACAs have three ports: the OTG port for the portable device, which is required to have a Micro-A plug on a captive cable; the accessory port, which is required to have a Micro-AB or type-A receptacle; and the charging port, which is required to have a Micro-B receptacle, or a type-A plug or charger on a captive cable. The ID pin of the OTG port is not connected within the plug as usual, but to the ACA itself, where signals outside the OTG floating and ground states are used for ACA detection and state signalling. The charging port does not pass data, but does use the D± signals for charging port detection. The accessory port acts as any other port. When appropriately signalled by the ACA, the portable device can charge from the bus power as if a charging port were present; any OTG signals over bus power are instead passed to the portable device via the ID signal. Bus power is also provided to the accessory port from the charging port transparently.
Power Delivery (PD)
+ USB PD rev. 1 source profiles
Profile 1: 10 W (2.0 A at +5 V)
Profile 2: 18 W (1.5 A at +12 V)
Profile 3: 36 W (3.0 A at +12 V)
Profile 4: 60 W (3.0 A at +20 V)
Profile 5: 60 W (5.0 A at +12 V) or 100 W (5.0 A at +20 V)
+ USB PD rev. 2 source power rules
Source output power 0.5–15 W: +5 V at 0.1–3.0 A
Source output power 15–27 W: +5 V at 3.0 A (15 W); +9 V at 1.7–3.0 A
Source output power 27–45 W: +9 V at 3.0 A (27 W); +15 V at 1.8–3.0 A
Source output power 45–60 W: +15 V at 3.0 A (45 W); +20 V at 2.25–3.0 A
Source output power 60–100 W: +20 V at 3.0–5.0 A
In July 2012, the USB Promoters Group announced the finalization of the USB Power Delivery (PD) specification, an extension that specifies using certified PD aware USB cables with standard USB Type-A and Type-B connectors to deliver increased power (more than 7.5 W) to devices with larger power demand. Devices can request higher currents and supply voltages from compliant hosts up to 2 A at 5 V (for a power consumption of up to 10 W), and optionally up to 3 A or 5 A at either 12 V (36 W or 60 W) or 20 V (60 W or 100 W). In all cases, both host-to-device and device-to-host configurations are supported.
The intent is to permit uniformly charging laptops, tablets, USB-powered disks and similarly higher-power consumer electronics, as a natural extension of existing European and Chinese mobile telephone charging standards. This may also affect the way electric power used for small devices is transmitted and used in both residential and public buildings.
The Power Delivery specification defines six fixed power profiles for the power sources. PD-aware devices implement a flexible power management scheme by interfacing with the power source through a bidirectional data channel and requesting a certain level of electrical power, variable up to 5 A and 20 V depending on supported profile. The power configuration protocol uses a 24 MHz BFSK-coded transmission channel on the VBUS line.
The USB Power Delivery revision 2.0 specification has been released as part of the USB 3.1 suite. It covers the Type-C cable and connector with four power/ground pairs and a separate configuration channel, which now hosts a DC coupled low-frequency BMC-coded data channel that reduces the possibilities for RF interference. Power Delivery protocols have been updated to facilitate Type-C features such as cable ID function, Alternate Mode negotiation, increased VBUS currents, and VCONN-powered accessories.
As of USB Power Delivery Revision 2.0 Version 1.2, the six fixed power profiles for power sources have been deprecated. USB PD Power Rules replace power profiles, defining four normative voltage levels at 5V, 9V, 15V, and 20V. Instead of six fixed profiles, power supplies may support any maximum source output power from 0.5W to 100W.
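One way to read these power rules is sketched below, under the assumption (mine, not specification wording) that a source offers every normative voltage whose power threshold its maximum output exceeds, at up to 3 A below 20 V and up to 5 A at 20 V:

```python
def pd2_source_offers(max_power_w):
    """Return (voltage, current) pairs a source of the given maximum power is
    expected to offer under the USB PD 2.0 power rules, as reconstructed above."""
    offers = []
    for volts, power_gate_w in [(5, 0), (9, 15), (15, 27), (20, 45)]:
        if max_power_w > power_gate_w:
            amp_cap = 5.0 if volts == 20 else 3.0
            offers.append((volts, round(min(amp_cap, max_power_w / volts), 2)))
    return offers

print(pd2_source_offers(18))    # [(5, 3.0), (9, 2.0)]
print(pd2_source_offers(100))   # [(5, 3.0), (9, 3.0), (15, 3.0), (20, 5.0)]
```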
The upcoming USB Power Delivery 3.0 specification defines new power rules based on supplied wattage. A programmable power supply protocol allows granular control over VBUS power in 10 mV steps to facilitate constant-current or constant-voltage charging. Revision 3.0 also adds extended configuration messages and fast role swap, and deprecates the BFSK protocol.
Silicon controllers are available from several sources, including TI and Cypress. Power supplies bundled with Type-C based laptops from Apple, Google, HP, Dell, and Razer support USB PD. In addition, accessories from third-party vendors including Anker, Belkin, iVoler and Innergie support USB PD 2.0 at multiple voltages. There are also several PD-aware projects, such as the USB-PD Sniffer, and ASUS makes a fully Power Delivery compliant adapter card, the USB 3.1 UPD PANEL.
Sleep-and-charge ports
thumb|right|A yellow USB port denoting sleep-and-charge
Sleep-and-charge USB ports can be used to charge electronic devices even when the computer is switched off. Normally, when a computer is powered off the USB ports are powered down, preventing phones and other devices from charging. Sleep-and-charge USB ports remain powered even when the computer is off. On laptops, charging devices from the USB port when it is not being powered from AC drains the laptop battery faster; most laptops have a facility to stop charging if their own battery charge level gets too low. This feature has also been implemented on some laptop docking stations allowing device charging even when no laptop is present.
Sleep-and-charge USB ports may be found colored differently than regular ports, mostly red or yellow, though that is not always the case.
On Dell and Toshiba laptops, the port is marked with the standard USB symbol with an added lightning bolt icon on the right side. Dell calls this feature PowerShare, while Toshiba calls it USB Sleep-and-Charge. On Acer Inc. and Packard Bell laptops, sleep-and-charge USB ports are marked with a non-standard symbol (the letters USB over a drawing of a battery); the feature is simply called Power-off USB. On some laptops such as Dell and Apple MacBook models, it is possible to plug a device in, close the laptop (putting it into sleep mode) and have the device continue to charge.
Mobile device charger standards
In China
thumb|The Micro-USB interface is commonly found on chargers for mobile phones
thumb|Australian and New Zealand power socket with USB charger socket
All new mobile phones applying for a license in China are required to use a USB port as a power port for battery charging. The Chinese technical standard was the first to use the convention of shorting D+ and D−.
OMTP/GSMA Universal Charging Solution
In September 2007, the Open Mobile Terminal Platform group (a forum of mobile network operators and manufacturers such as Nokia, Samsung, Motorola, Sony Ericsson and LG) announced that its members had agreed on Micro-USB as the future common connector for mobile devices.
The GSM Association (GSMA) followed suit on 17 February 2009, and on 22 April 2009, this was further endorsed by the CTIA – The Wireless Association, with the International Telecommunication Union (ITU) announcing on 22 October 2009 that it had also embraced the Universal Charging Solution as its "energy-efficient one-charger-fits-all new mobile phone solution," and added: "Based on the Micro-USB interface, UCS chargers will also include a 4-star or higher efficiency rating—up to three times more energy-efficient than an unrated charger."
EU Smartphone Power Supply Standard
In June 2009, many of the world's largest mobile phone manufacturers signed an EC-sponsored Memorandum of Understanding (MoU), agreeing to make most data-enabled mobile phones marketed in the European Union compatible with a common External Power Supply (EPS). The EU's common EPS specification (EN 62684:2010) references the USB Battery Charging standard and is similar to the GSMA/OMTP and Chinese charging solutions. In January 2011, the International Electrotechnical Commission (IEC) released its version of the (EU's) common EPS standard as IEC 62684:2011.
Non-standard devices
thumb|right|USB-powered mini fans
thumb|USB vacuum cleaner novelty device
Some USB devices require more power than is permitted by the specifications for a single port. This is common for external hard and optical disc drives, and generally for devices with motors or lamps. Such devices can use an external power supply, which is allowed by the standard, or use a dual-input USB cable, one input of which is used for power and data transfer, the other solely for power, which makes the device a non-standard USB device. Some USB ports and external hubs can, in practice, supply more power to USB devices than required by the specification but a standard-compliant device may not depend on this.
In addition to limiting the total average power used by the device, the USB specification limits the inrush current (i.e., that used to charge decoupling and filter capacitors) when the device is first connected. Otherwise, connecting a device could cause problems with the host's internal power. USB devices are also required to automatically enter ultra low-power suspend mode when the USB host is suspended. Nevertheless, many USB host interfaces do not cut off the power supply to USB devices when they are suspended.
Some non-standard USB devices use the 5 V power supply without participating in a proper USB network, which negotiates power draw with the host interface. These are usually called USB decorations. Examples include USB-powered keyboard lights, fans, mug coolers and heaters, battery chargers, miniature vacuum cleaners, and even miniature lava lamps. In most cases, these items contain no digital circuitry, and thus are not standard compliant USB devices. This may cause problems with some computers, such as drawing too much current and damaging circuitry. Prior to the Battery Charging Specification, the USB specification required that devices connect in a low-power mode (100 mA maximum) and communicate their current requirements to the host, which then permits the device to switch into high-power mode.
Some devices, when plugged into charging ports, draw even more power (10 watts at 2.1 amperes) than the Battery Charging Specification allows; the iPad is one such device. Barnes & Noble NOOK Color devices also require a special charger that supplies 1.9 amperes.
PoweredUSB
PoweredUSB is a proprietary extension that adds four additional pins supplying up to 6 A at 5 V, 12 V, or 24 V. It is commonly used in point of sale systems to power peripherals such as barcode readers, credit card terminals, and printers.
Signaling
USB allows the following signaling rates (the terms speed and bandwidth are used interchangeably, while high- is alternatively written as hi-):
A low-speed rate of 1.5 Mbit/s is defined by USB 1.0. It is very similar to full-bandwidth operation except each bit takes 8 times as long to transmit. It is intended primarily to save cost in low-bandwidth human interface devices (HID) such as keyboards, mice, and joysticks.
The full-speed (FS) rate of 12 Mbit/s is the basic USB data rate defined by USB 1.0. All USB hubs can operate at this speed.
A high-speed (HS) rate of 480 Mbit/s was introduced in 2001. All hi-speed devices are capable of falling back to full-bandwidth operation if necessary; i.e., they are backward compatible with USB 1.1. Connectors are identical for USB 2.0 and USB 1.x.
A SuperSpeed (SS) rate of 5.0 Gbit/s. The written USB 3.0 specification was released by Intel and its partners in August 2008. The first USB 3.0 controller chips were sampled by NEC in May 2009, and the first products using the USB 3.0 specification arrived in January 2010. USB 3.0 connectors are generally backward compatible, but include new wiring and full duplex operation.
USB signals are transmitted using differential signaling on a twisted-pair data cable with a characteristic impedance of 90 Ω.
Low- and full-speed modes use a single data pair, labeled D+ and D−, in half-duplex. Transmitted signal levels are 0.0–0.3 V for logical low, and 2.8–3.6 V for logical high level. The signal lines are not terminated.
High-speed mode uses the same wire pair, but with different electrical conventions: lower signal voltages, and a termination of 45 Ω to ground (90 Ω differential) to match the data cable impedance.
SuperSpeed adds two additional pairs of shielded twisted wire (and new, mostly compatible expanded connectors), dedicated to full-duplex SuperSpeed operation. The half-duplex lines are still used for configuration.
A USB connection is always between a host or hub at the A connector end, and a device or hub's "upstream" port at the other end. Originally, this was a B connector, preventing erroneous loop connections, but additional upstream connectors were specified, and some cable vendors designed and sold cables that permitted erroneous connections (and potential damage to circuitry). USB interconnections are not as fool-proof or as simple as originally intended.
The host includes 15 kΩ pull-down resistors on each data line. When no device is connected, this pulls both data lines low into the so-called single-ended zero state (SE0 in the USB documentation), and indicates a reset or disconnected connection.
A USB device pulls one of the data lines high with a 1.5 kΩ resistor. This overpowers one of the pull-down resistors in the host and leaves the data lines in an idle state called J. For USB 1.x, the choice of data line indicates what signal rates the device is capable of; full-bandwidth devices pull D+ high, while low-bandwidth devices pull D− high. The K state is just the opposite polarity to the J state.
center|frame|none|Example of a Negative Acknowledge packet transmitted by USB 1.1 full-speed device when there is no more data to read. It consists of the following fields: clock synchronization byte, type of packet and end of packet. Data packets would have more information between the type of packet and end of packet.
USB data is transmitted by toggling the data lines between the J state and the opposite K state. USB encodes data using the NRZI line coding; a 0 bit is transmitted by toggling the data lines from J to K or vice versa, while a 1 bit is transmitted by leaving the data lines as-is. To ensure a minimum density of signal transitions remains in the bitstream, USB uses bit stuffing; an extra 0 bit is inserted into the data stream after any appearance of six consecutive 1 bits. Seven consecutive received 1 bits is always an error. USB 3.0 has introduced additional data transmission encodings.
A USB packet begins with an 8-bit synchronization sequence, 00000001₂. That is, after the initial idle state J, the data lines toggle KJKJKJKK. The final 1 bit (repeated K state) marks the end of the sync pattern and the beginning of the USB frame. For high bandwidth USB, the packet begins with a 32-bit synchronization sequence.
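The bit stuffing and NRZI steps described above can be sketched in a few lines of Python (the function names are my own, and the starting line level simply stands in for the idle J state):

```python
def bit_stuff(bits):
    """Insert a 0 after every run of six consecutive 1 bits."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 6:
            out.append(0)   # stuffed zero keeps transitions in the stream
            run = 0
    return out

def nrzi_encode(bits, level=1):
    """USB-style NRZI: a 0 bit toggles the line state, a 1 bit leaves it alone."""
    out = []
    for b in bits:
        if b == 0:
            level ^= 1
        out.append(level)
    return out

sync = [0, 0, 0, 0, 0, 0, 0, 1]            # 00000001, first-transmitted bit first
print(nrzi_encode(bit_stuff(sync)))        # [0, 1, 0, 1, 0, 1, 0, 0] -> K J K J K J K K (J=1, K=0)
```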
A USB packet's end, called EOP (end-of-packet), is indicated by the transmitter driving 2 bit times of SE0 (D+ and D− both driven low) and 1 bit time of J state. After this, the transmitter ceases to drive the D+/D− lines and the aforementioned pull-up resistors hold them in the J (idle) state. Sometimes skew due to hubs can add as much as one bit time before the SE0 at the end of the packet. This extra bit can also result in a "bit stuff violation" if the six bits before it in the CRC are 1s; this bit should be ignored by the receiver.
A USB bus is reset using a prolonged (10 to 20 milliseconds) SE0 signal.
USB 2.0 devices use a special protocol during reset, called chirping, to negotiate the high bandwidth mode with the host/hub. A device that is HS capable first connects as an FS device (D+ pulled high), but upon receiving a USB RESET (both D+ and D− driven LOW by host for 10 to 20 ms) it pulls the D− line high, known as chirp K. This indicates to the host that the device is high bandwidth. If the host/hub is also HS capable, it chirps (returns alternating J and K states on D− and D+ lines) letting the device know that the hub operates at high bandwidth. The device has to receive at least three sets of KJ chirps before it changes to high bandwidth terminations and begins high bandwidth signaling. Because USB 3.0 uses wiring separate and additional to that used by USB 2.0 and USB 1.x, such bandwidth negotiation is not required.
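The device side of this negotiation can be sketched as below, with the hub's chirp sequence represented simply as a list of "K"/"J" symbols (an abstraction of mine, not the electrical detail):

```python
def hs_negotiation_result(hub_chirps, required_kj_pairs=3):
    """After sending its chirp K during reset, the device watches the hub's
    alternating chirps and switches to high-speed terminations only once it
    has seen at least three K-J pairs; otherwise it stays at full speed."""
    pairs, i = 0, 0
    while i + 1 < len(hub_chirps):
        if hub_chirps[i] == "K" and hub_chirps[i + 1] == "J":
            pairs += 1
            i += 2
            if pairs >= required_kj_pairs:
                return "high-speed"
        else:
            i += 1
    return "full-speed"

print(hs_negotiation_result(["K", "J", "K", "J", "K", "J"]))  # high-speed
print(hs_negotiation_result(["K", "J"]))                      # full-speed
```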
Clock tolerance is 480.00 ± 0.24 Mbit/s for high speed, 12.00 ± 0.03 Mbit/s for full speed, and 1.50 ± 0.18 Mbit/s for low speed.
Though high bandwidth devices are commonly referred to as "USB 2.0" and advertised as "up to 480 Mbit/s," not all USB 2.0 devices are high bandwidth. The USB-IF certifies devices and provides licenses to use special marketing logos for either "basic bandwidth" (low and full) or high bandwidth after passing a compliance test and paying a licensing fee. All devices are tested according to the latest specification, so recently compliant low bandwidth devices are also 2.0 devices.
USB 3.0 uses tinned copper stranded AWG-28 cables for its high-speed differential pairs, with 8b/10b encoding and linear feedback shift register scrambling, transmitted at a nominal voltage of 1 V with a 100 mV receiver threshold; the receiver uses equalization. Spread-spectrum clocking is used. Packet headers are protected with CRC-16, while the data payload is protected with CRC-32. Power up to 3.6 W may be used. One unit load in SuperSpeed mode is equal to 150 mA.
Transmission rates
Mode | Gross data rate | Introduced in
Low Speed | 1.5 Mbit/s | USB 1.0
Full Speed | 12 Mbit/s | USB 1.0
High Speed (also Hi-Speed) | 480 Mbit/s | USB 2.0
SuperSpeed | 5 Gbit/s | USB 3.0
SuperSpeed+ | 10 Gbit/s | USB 3.1
The theoretical maximum data rate in USB 2.0 is 480 Mbit/s (60 MB/s) per controller and is shared amongst all attached devices. Some chipset manufacturers overcome this bottleneck by providing multiple USB 2.0 controllers within the southbridge.
According to routine testing performed by CNet, write operations to typical Hi-Speed hard drives can sustain rates of 25–30 MB/s, while read operations are at 30–42 MB/s; this is 70% of the total available bus bandwidth. For USB 3.0, typical write speed is 70–90 MB/s, while read speed is 90–110 MB/s. Mask tests, also known as eye diagram tests, are used to determine the quality of a signal in the time domain. They are defined in the referenced document as part of the electrical test description for the high-speed (HS) mode at 480 Mbit/s.
According to a USB-IF chairman, "at least 10 to 15 percent of the stated peak 60 MB/s (480 Mbit/s) of Hi-Speed USB goes to overhead—the communication protocol between the card and the peripheral. Overhead is a component of all connectivity standards". Tables illustrating the transfer limits are shown in Chapter 5 of the USB spec.
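A quick back-of-the-envelope check of these figures, using only the numbers quoted above:

```python
raw_bit_rate_mbps = 480
raw_byte_rate_mbs = raw_bit_rate_mbps / 8          # 60 MB/s peak for Hi-Speed

for overhead in (0.10, 0.15):                      # "at least 10 to 15 percent"
    print(raw_byte_rate_mbs * (1 - overhead))      # 54.0 and 51.0 MB/s upper bounds

# Measured drive throughput of 25-42 MB/s therefore sits well below even these bounds.
```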
For isochronous devices like audio streams, the bandwidth is constant, and reserved exclusively for a given device. The bus bandwidth therefore only has an effect on the number of channels that can be sent at a time, not the "speed" or latency of the transmission.
Latency
For low-speed (1.5 Mbit/s) and full-speed (12 Mbit/s) devices, the shortest time for a transaction in one direction is one 1 ms frame. High-speed transactions take place within 125 µs microframes, so the minimal response time for a short (1-byte or 4-byte) interrupt packet is correspondingly much smaller.
Communication
During USB communication, data is transmitted as packets. Initially, all packets are sent from the host, via the root hub and possibly more hubs, to devices. Some of those packets direct a device to send some packets in reply.
After the sync field, all packets are made of 8-bit bytes, transmitted least-significant bit first. The first byte is a packet identifier (PID) byte. The PID is actually 4 bits; the byte consists of the 4-bit PID followed by its bitwise complement. This redundancy helps detect errors. (Note also that a PID byte contains at most four consecutive 1 bits, and thus never needs bit-stuffing, even when combined with the final 1 bit in the sync byte. However, trailing 1 bits in the PID may require bit-stuffing within the first few bits of the payload.)
+ USB PID bytes
Type | PID value (msb-first) | Transmitted byte (lsb-first) | Name | Description
Reserved | 0000 | 0000 1111 | n/a | n/a
Token | 1000 | 0001 1110 | SPLIT | High-bandwidth (USB 2.0) split transaction
Token | 0100 | 0010 1101 | PING | Check if endpoint can accept data (USB 2.0)
Special | 1100 | 0011 1100 | PRE | Low-bandwidth USB preamble
Handshake | 1100 | 0011 1100 | ERR | Split transaction error (USB 2.0)
Handshake | 0010 | 0100 1011 | ACK | Data packet accepted
Handshake | 1010 | 0101 1010 | NAK | Data packet not accepted; please retransmit
Handshake | 0110 | 0110 1001 | NYET | Data not ready yet (USB 2.0)
Handshake | 1110 | 0111 1000 | STALL | Transfer impossible; do error recovery
Token | 0001 | 1000 0111 | OUT | Address for host-to-device transfer
Token | 1001 | 1001 0110 | IN | Address for device-to-host transfer
Token | 0101 | 1010 0101 | SOF | Start of frame marker (sent each ms)
Token | 1101 | 1011 0100 | SETUP | Address for host-to-device control transfer
Data | 0011 | 1100 0011 | DATA0 | Even-numbered data packet
Data | 1011 | 1101 0010 | DATA1 | Odd-numbered data packet
Data | 0111 | 1110 0001 | DATA2 | Data packet for high-bandwidth isochronous transfer (USB 2.0)
Data | 1111 | 1111 0000 | MDATA | Data packet for high-bandwidth isochronous transfer (USB 2.0)
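The PID-plus-complement check described above is small enough to show directly; OUT (PID value 0001) is used as the worked example, and its transmitted bits read back as the byte 0xE1 when written msb-first.

```python
def make_pid_byte(pid4):
    """Build the full PID byte: the 4-bit PID in the low nibble (sent first,
    lsb-first on the wire) and its bitwise complement in the high nibble."""
    pid4 &= 0xF
    return pid4 | ((pid4 ^ 0xF) << 4)

def check_pid_byte(byte):
    """Return the 4-bit PID if the complement check passes, else None."""
    pid, chk = byte & 0xF, (byte >> 4) & 0xF
    return pid if chk == (pid ^ 0xF) else None

OUT = 0b0001
assert make_pid_byte(OUT) == 0xE1       # transmitted lsb-first as 1000 0111
assert check_pid_byte(0xE1) == OUT
assert check_pid_byte(0xE3) is None     # corrupted byte fails the check
```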
Packets come in three basic types, each with a different format and CRC (cyclic redundancy check):
Handshake packets
Handshake packets consist of only a single PID byte, and are generally sent in response to data packets. Error detection is provided by transmitting four bits that represent the packet type twice, in a single PID byte using complemented form. Three basic types are ACK, indicating that data was successfully received, NAK, indicating that the data cannot be received and should be retried, and STALL, indicating that the device has an error condition and cannot transfer data until some corrective action (such as device initialization) occurs.
USB 2.0 added two additional handshake packets: NYET and ERR. NYET indicates that a split transaction is not yet complete, while ERR handshake indicates that a split transaction failed. A second use for a NYET packet is to tell the host that the device has accepted a data packet, but cannot accept any more due to full buffers. This allows a host to switch to sending small PING tokens to inquire about the device's readiness, rather than sending an entire unwanted DATA packet just to elicit a NAK.
The only handshake packet the USB host may generate is ACK. If it is not ready to receive data, it should not instruct a device to send.
Token packets
Token packets consist of a PID byte followed by two payload bytes: 11 bits of address and a five-bit CRC. Tokens are only sent by the host, never a device.
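A sketch of how such a token payload might be assembled is shown below, using one common formulation of the USB CRC-5 (generator polynomial x^5 + x^2 + 1, register preset to all ones, complemented result). The helper names and the exact bit-packing are illustrative assumptions rather than specification text.

```python
def crc5_usb(bits):
    """CRC-5 over the 11 token bits, fed in transmission (lsb-first) order."""
    crc = 0b11111
    for bit in bits:
        top = (crc >> 4) & 1
        crc = (crc << 1) & 0b11111
        if bit ^ top:
            crc ^= 0b00101          # x^5 + x^2 + 1
    return crc ^ 0b11111            # USB sends the complemented remainder

def token_fields(device_address, endpoint):
    """Pack the 7-bit device address and 4-bit endpoint into the 11 payload
    bits (each field lsb-first) and compute the CRC-5 over them."""
    bits = [(device_address >> i) & 1 for i in range(7)]
    bits += [(endpoint >> i) & 1 for i in range(4)]
    return bits, crc5_usb(bits)

bits, crc = token_fields(device_address=0x15, endpoint=0x2)
print(bits, bin(crc))
```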
IN and OUT tokens contain a seven-bit device number and four-bit function number (for multifunction devices) and command the device to transmit DATAx packets, or receive the following DATAx packets, respectively. An IN token expects a response from a device. The response may be a NAK or STALL response, or a DATAx frame. In the latter case, the host issues an ACK handshake if appropriate. An OUT token is followed immediately by a DATAx frame. The device responds with ACK, NAK, NYET, or STALL, as appropriate.
SETUP operates much like an OUT token, but is used for initial device setup. It is followed by an eight-byte DATA0 frame with a standardized format.
Every millisecond (12000 full-bandwidth bit times), the USB host transmits a special SOF (start of frame) token, containing an 11-bit incrementing frame number in place of a device address. This is used to synchronize isochronous and interrupt data transfers. High-bandwidth USB 2.0 devices receive seven additional SOF tokens per frame, each introducing a 125 µs "microframe" (60000 high-bandwidth bit times each).
USB 2.0 added PING token, which asks a device if it is ready to receive an OUT/DATA packet pair. PING is usually sent by a host when polling a device that most recently responded with NAK or NYET. This avoids the need to send a large data packet to a device that the host suspects to be unwilling to accept it. The device responds with ACK, NAK or STALL, as appropriate.
USB 2.0 also added a larger three-byte SPLIT token with a seven-bit hub number, 12 bits of control flags, and a five-bit CRC. This is used to perform split transactions. Rather than tie up the high-bandwidth USB bus sending data to a slower USB device, the nearest high-bandwidth capable hub receives a SPLIT token followed by one or two USB packets at high bandwidth, performs the data transfer at full or low bandwidth, and provides the response at high bandwidth when prompted by a second SPLIT token.
Data packets
A data packet consists of the PID followed by 0–1,024 bytes of data payload (up to 1,024 bytes for high-speed devices, up to 64 bytes for full-speed devices, and at most eight bytes for low-speed devices), and a 16-bit CRC.
There are two basic forms of data packet, DATA0 and DATA1. A data packet must always be preceded by an address token, and is usually followed by a handshake token from the receiver back to the transmitter. The two packet types provide the 1-bit sequence number required by stop-and-wait ARQ. If a USB host does not receive a response (such as an ACK) for data it has transmitted, it does not know if the data was received or not; the data might have been lost in transit, or it might have been received but the handshake response was lost.
To solve this problem, the device keeps track of the type of DATAx packet it last accepted. If it receives another DATAx packet of the same type, it is acknowledged but ignored as a duplicate. Only a DATAx packet of the opposite type is actually received.
If the data is corrupted while transmitted or received, the CRC check fails. When this happens, the receiver does not generate an ACK, which makes the sender resend the packet.
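The receiver-side behaviour described in the last few paragraphs can be sketched as follows, combining a CRC-16 check (the common CRC-16/USB formulation: reflected polynomial 0x8005, initial value and final XOR of 0xFFFF) with the DATA0/DATA1 toggle. The class and method names are illustrative only.

```python
def crc16_usb(payload: bytes) -> int:
    """CRC-16 as commonly used for USB data payloads (reflected 0x8005,
    init 0xFFFF, result complemented)."""
    crc = 0xFFFF
    for byte in payload:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

class Endpoint:
    """Tracks the expected DATAx toggle; duplicates are ACKed but ignored."""
    def __init__(self):
        self.expected = 0                      # expect DATA0 first

    def receive(self, data_toggle, payload, crc):
        if crc != crc16_usb(payload):
            return "no handshake (corrupted; sender will retry)"
        if data_toggle != self.expected:
            return "ACK (duplicate, ignored)"
        self.expected ^= 1                     # accept and flip the toggle
        return "ACK (accepted)"

ep = Endpoint()
pkt = b"\x01\x02"
print(ep.receive(0, pkt, crc16_usb(pkt)))      # ACK (accepted)
print(ep.receive(0, pkt, crc16_usb(pkt)))      # ACK (duplicate, ignored)
```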
When a device is reset with a SETUP packet, it expects an 8-byte DATA0 packet next.
USB 2.0 added DATA2 and MDATA packet types as well. They are used only by high-bandwidth devices doing high-bandwidth isochronous transfers that must transfer more than 1024 bits per 125 µs micro frame (8,192 kbit/s).
PRE packet
Low-bandwidth devices are supported with a special PID value, PRE. This marks the beginning of a low-bandwidth packet, and is used by hubs that normally do not send full-bandwidth packets to low-bandwidth devices. Since all PID bytes include four 0 bits, they leave the bus in the full-bandwidth K state, which is the same as the low-bandwidth J state. It is followed by a brief pause, during which hubs enable their low-bandwidth outputs, already idling in the J state. Then a low-bandwidth packet follows, beginning with a sync sequence and PID byte, and ending with a brief period of SE0. Full-bandwidth devices other than hubs can simply ignore the PRE packet and its low-bandwidth contents, until the final SE0 indicates that a new packet follows.
Audio streaming
The USB Device Working Group has laid out specifications for audio streaming. Although USB technology was not designed with audio streaming in mind, specific standards have been developed and implemented for audio class uses.
The working group distinguishes two audio device class specifications: Audio 1.0 and Audio 2.0. Three types of devices are defined:
USB headphone devices
USB microphone devices
USB headset devices
Three levels of synchronisation were defined: asynchronous, synchronous, and adaptive.
Comparisons with other connection methods
thumb|right|200px|A variety of USB cables for sale in Hong Kong
FireWire
At first, USB was considered a complement to IEEE 1394 (FireWire) technology, which was designed as a high-bandwidth serial bus that efficiently interconnects peripherals such as disk drives, audio interfaces, and video equipment. In the initial design, USB operated at a far lower data rate and used less sophisticated hardware. It was suitable for small peripherals such as keyboards and pointing devices.
The most significant technical differences between FireWire and USB include:
USB networks use a tiered-star topology, while IEEE 1394 networks use a tree topology.
USB 1.0, 1.1 and 2.0 use a "speak-when-spoken-to" protocol, meaning that each peripheral communicates with the host when the host specifically requests it to. USB 3.0 allows for device-initiated communications towards the host. A FireWire device can communicate with any other node at any time, subject to network conditions.
A USB network relies on a single host at the top of the tree to control the network. All communications are between the host and one peripheral. In a FireWire network, any capable node can control the network.
USB runs with a 5 V power line, while FireWire in current implementations supplies 12 V and theoretically can supply up to 30 V.
Standard USB ports can provide the typical 500 mA/2.5 W of current; low-power ports, such as those on bus-powered hubs, provide only 100 mA. USB 3.0 and USB On-The-Go supply 1.8 A/9.0 W (for dedicated battery charging; 1.5 A/7.5 W at full bandwidth or 900 mA/4.5 W at high bandwidth), while FireWire can in theory supply up to 60 watts of power, although 10 to 20 watts is more typical.
These and other differences reflect the differing design goals of the two buses: USB was designed for simplicity and low cost, while FireWire was designed for high performance, particularly in time-sensitive applications such as audio and video. Although similar in theoretical maximum transfer rate, FireWire 400 is faster than USB 2.0 Hi-Bandwidth in real use, especially in high-bandwidth use such as external hard drives. The newer FireWire 800 standard is twice as fast as FireWire 400 and faster than USB 2.0 Hi-Bandwidth both theoretically and practically. However, FireWire's speed advantages rely on low-level techniques such as direct memory access (DMA), which in turn have created opportunities for security exploits such as the DMA attack.
The chipset and drivers used to implement USB and FireWire have a crucial impact on how much of the bandwidth prescribed by the specification is achieved in the real world, along with compatibility with peripherals.
Ethernet
The IEEE 802.3af Power over Ethernet (PoE) standard specifies a more elaborate power negotiation scheme than powered USB. It operates at 48 V DC and can supply more power (up to 12.95 W, PoE+ 25.5 W) over a cable up to 100 meters compared to USB 2.0, which provides 2.5 W with a maximum cable length of 5 meters. This has made PoE popular for VoIP telephones, security cameras, wireless access points and other networked devices within buildings. However, USB is cheaper than PoE provided that the distance is short, and power demand is low.
Ethernet standards require electrical isolation between the networked device (computer, phone, etc.) and the network cable, rated to withstand a high-voltage test for 60 seconds. USB has no such requirement as it was designed for peripherals closely associated with a host computer, and in fact it connects the peripheral and host grounds. This gives Ethernet a significant safety advantage over USB with peripherals such as cable and DSL modems connected to external wiring that can assume hazardous voltages under certain fault conditions.
MIDI
Digital musical instruments are another example where USB is competitive for low-cost devices. However, Power over Ethernet and the MIDI plug standard have an advantage in high-end devices that may have long cables. USB can cause ground loop problems between equipment, because it connects the ground references on both transceivers. By contrast, the MIDI plug standard and Ethernet have built-in galvanic isolation.
eSATA/eSATAp
The eSATA connector is a more robust SATA connector, intended for connection to external hard drives and SSDs. eSATA's transfer rate (up to 6 Gbit/s) is similar to that of USB 3.0 (up to 5 Gbit/s on current devices; 10 Gbit/s speeds via USB 3.1, announced on 31 July 2013). A device connected by eSATA appears as an ordinary SATA device, giving both full performance and full compatibility associated with internal drives.
eSATA does not supply power to external devices. This is an increasing disadvantage compared to USB. Even though USB 3.0's 4.5 W is sometimes insufficient to power external hard drives, technology is advancing and external drives gradually need less power, diminishing the eSATA advantage. eSATAp (power over eSATA; aka ESATA/USB) is a connector introduced in 2009 that supplies power to attached devices using a new, backward compatible, connector. On a notebook eSATAp usually supplies only 5 V to power a 2.5-inch HDD/SSD; on a desktop workstation it can additionally supply 12 V to power larger devices including 3.5-inch HDD/SSD and 5.25-inch optical drives.
eSATAp support can be added to a desktop machine in the form of a bracket connecting to motherboard SATA, power, and USB resources.
eSATA, like USB, supports hot plugging, although this might be limited by OS drivers and device firmware.
Thunderbolt
Thunderbolt combines PCI Express and Mini DisplayPort into a new serial data interface. Original Thunderbolt implementations have two channels, each with a transfer speed of 10 Gbit/s, resulting in an aggregate unidirectional bandwidth of 20 Gbit/s.
Thunderbolt 2 uses link aggregation to combine the two 10 Gbit/s channels into one bi-directional 20 Gbit/s channel.
Thunderbolt 3 uses USB Type-C connectors and provides one 40 Gbit/s channel.
Interoperability
Various protocol converters are available that convert USB data signals to and from other communications standards.
Related standards
thumb|right|The Wireless USB logo
The USB Implementers Forum is working on a wireless networking standard based on the USB protocol. Wireless USB is a cable-replacement technology, and uses ultra-wideband wireless technology for data rates of up to 480 Mbit/s.
USB 2.0 High-Speed Inter-Chip (HSIC) is a chip-to-chip variant of USB 2.0 that eliminates the conventional analog transceivers found in normal USB. It was adopted as a standard by the USB Implementers Forum in 2007. The HSIC physical layer uses about 50% less power and 75% less board area compared to traditional USB 2.0. HSIC uses two signals at 1.2 V and has a throughput of 480 Mbit/s. Maximum PCB trace length for HSIC is 10 cm. It does not have low enough latency to support RAM memory sharing between two chips.
The USB 3.0 successor of HSIC is called SuperSpeed Inter-Chip (SSIC).
See also
DockPort
Easy Transfer Cable
Extensible Host Controller Interface (XHCI)
LIO Target
List of device bit rates#Peripheral
Media Transfer Protocol
Mobile High-Definition Link
References
External links
Muller, Henk. "How To Create And Program USB Devices," Electronic Design, July 2012
An Analysis of Throughput Characteristics of Universal Serial Bus, June 1996, by John Garney
USB 2.0 Protocol Engine, October 2010, by Razi Hershenhoren and Omer Reznik
IEC International standard: IEC62680 Universal serial bus interfaces for data and power:
IEC62680-1-1:2015 - Part 1-1: Common components - USB Battery Charging Specification, revision 1.2
IEC62680-1-2:2016 - Part 1-2: Common components - USB Power Delivery specification revision 1.0
IEC62680-1-3:2016 - Part 1-3: Universal Serial Bus interfaces - Common components, revision 1.0
IEC62680-2-1:2015 - Part 2-1: Universal Serial Bus Specification, Revision 2.0
IEC62680-2-2:2015 - Part 2-2: Micro-USB Cables and Connectors Specification, Revision 1.01
IEC62680-2-3:2015 - Part 2-3: Universal Serial Bus Cables and Connectors Class Document Revision 2.0
Category:1996 introductions
Category:American inventions
Category:Computer connectors
Category:Serial buses
Transistor
thumb|upright|Assorted discrete transistors. Packages in order from top to bottom: TO-3, TO-126, TO-92, SOT-23.
A transistor is a semiconductor device used to amplify or switch electronic signals and electrical power. It is composed of semiconductor material usually with at least three terminals for connection to an external circuit. A voltage or current applied to one pair of the transistor's terminals controls the current through another pair of terminals. Because the controlled (output) power can be higher than the controlling (input) power, a transistor can amplify a signal. Today, some transistors are packaged individually, but many more are found embedded in integrated circuits.
The transistor is the fundamental building block of modern electronic devices, and is ubiquitous in modern electronic systems. Julius Lilienfeld patented a field-effect transistor in 1926 but it was not possible to actually construct a working device at that time. The first practically implemented device was a point-contact transistor invented in 1947 by American physicists John Bardeen, Walter Brattain, and William Shockley. The transistor revolutionized the field of electronics, and paved the way for smaller and cheaper radios, calculators, and computers, among other things. The transistor is on the list of IEEE milestones in electronics, and Bardeen, Brattain, and Shockley shared the 1956 Nobel Prize in Physics for their achievement.
History
thumb|A replica of the first working transistor.|316x316px
The thermionic triode, a vacuum tube invented in 1907, enabled amplified radio technology and long-distance telephony. The triode, however, was a fragile device that consumed a substantial amount of power. Physicist Julius Edgar Lilienfeld filed a patent for a field-effect transistor (FET) in Canada in 1925, which was intended to be a solid-state replacement for the triode.Vardalas, John (May 2003) Twists and Turns in the Development of the Transistor IEEE-USA Today's Engineer.Lilienfeld, Julius Edgar, "Method and apparatus for controlling electric current" January 28, 1930 (filed in Canada 1925-10-22, in US October 8, 1926). Lilienfeld also filed identical patents in the United States in 1926 and 1928. However, Lilienfeld did not publish any research articles about his devices nor did his patents cite any specific examples of a working prototype. Because the production of high-quality semiconductor materials was still decades away, Lilienfeld's solid-state amplifier ideas would not have found practical use in the 1920s and 1930s, even if such a device had been built. In 1934, German inventor Oskar Heil patented a similar device in Europe.Heil, Oskar, "Improvements in or relating to electrical amplifiers and other control arrangements and devices", Patent No. GB439457, European Patent Office, filed in Great Britain 1934-03-02, published December 6, 1935 (originally filed in Germany March 2, 1934).
thumb|left|John Bardeen, William Shockley and Walter Brattain at Bell Labs, 1948.
From November 17, 1947 to December 23, 1947, John Bardeen and Walter Brattain at AT&T's Bell Labs in the United States performed experiments and observed that when two gold point contacts were applied to a crystal of germanium, a signal was produced with the output power greater than the input. Solid State Physics Group leader William Shockley saw the potential in this, and over the next few months worked to greatly expand the knowledge of semiconductors. The term transistor was coined by John R. Pierce as a contraction of the term transresistance. According to Lillian Hoddeson and Vicki Daitch, authors of a biography of John Bardeen, Shockley had proposed that Bell Labs' first patent for a transistor should be based on the field-effect and that he be named as the inventor. Having unearthed Lilienfeld’s patents that went into obscurity years earlier, lawyers at Bell Labs advised against Shockley's proposal because the idea of a field-effect transistor that used an electric field as a "grid" was not new. Instead, what Bardeen, Brattain, and Shockley invented in 1947 was the first point-contact transistor. In acknowledgement of this accomplishment, Shockley, Bardeen, and Brattain were jointly awarded the 1956 Nobel Prize in Physics "for their researches on semiconductors and their discovery of the transistor effect".
thumb|left|Herbert F. Mataré (1950)
In 1948, the point-contact transistor was independently invented by German physicists Herbert Mataré and Heinrich Welker while working at the Compagnie des Freins et Signaux, a Westinghouse subsidiary located in Paris. Mataré had previous experience in developing crystal rectifiers from silicon and germanium in the German radar effort during World War II. Using this knowledge, he began researching the phenomenon of "interference" in 1947. By June 1948, witnessing currents flowing through point-contacts, Mataré produced consistent results using samples of germanium produced by Welker, similar to what Bardeen and Brattain had accomplished earlier in December 1947. Realizing that Bell Labs' scientists had already invented the transistor before them, the company rushed to get its "transistron" into production for amplified use in France's telephone network.
thumb|Philco surface-barrier transistor developed and produced in 1953|435x435px
The first high-frequency transistor was the surface-barrier germanium transistor developed by Philco in 1953, capable of operating up to 60 MHz. These were made by etching depressions into an N-type germanium base from both sides with jets of indium(III) sulfate until it was a few ten-thousandths of an inch thick. Indium electroplated into the depressions formed the collector and emitter.Wall Street Journal, December 4, 1953, page 4, Article "Philco Claims Its Transistor Outperforms Others Now In Use"Electronics magazine, January 1954, Article "Electroplated Transistors Announced"
The first "prototype" pocket transistor radio was shown by INTERMETALL (a company founded by Herbert Mataré in 1952) at the Internationale Funkausstellung Düsseldorf between August 29, 1953 and September 9, 1953.
The first "production" all-transistor car radio was produced in 1955 by Chrysler and Philco, had used surface-barrier transistors in its circuitry and which were also first suitable for high-speed computers.Wall Street Journal, "Chrysler Promises Car Radio With Transistors Instead of Tubes in '56", April 28, 1955, page 1Los Angeles Times, May 8, 1955, page A20, Article: "Chrysler Announces New Transistor Radio"Philco TechRep Division Bulletin, May–June 1955, Volume 5 Number 3, page 28
The first working silicon transistor was developed at Bell Labs on January 26, 1954 by Morris Tanenbaum. The first commercial silicon transistor was produced by Texas Instruments in 1954. This was the work of Gordon Teal, an expert in growing crystals of high purity, who had previously worked at Bell Labs.Chelikowski, J. (2004) "Introduction: Silicon in all its Forms", p. 1 in Silicon: evolution and future of a technology. P. Siffert and E. F. Krimmel (eds.). Springer, ISBN 3-540-40546-1.McFarland, Grant (2006) Microprocessor design: a practical guide from design planning to manufacturing. McGraw-Hill Professional. p. 10. ISBN 0-07-145951-0. The first MOSFET actually built was by Kahng and Atalla at Bell Labs in 1960.Heywang, W. and Zaininger, K. H. (2004) "Silicon: The Semiconductor Material", p. 36 in Silicon: evolution and future of a technology. P. Siffert and E. F. Krimmel (eds.). Springer, 2004 ISBN 3-540-40546-1.
Importance
thumb|A Darlington transistor opened up so the actual transistor chip (the small square) can be seen inside. A Darlington transistor is effectively two transistors on the same chip. One transistor is much larger than the other, but both are large in comparison to transistors in large-scale integration because this particular example is intended for power applications.|268x268px
The transistor is the key active component in practically all modern electronics. Many consider it to be one of the greatest inventions of the 20th century. Its importance in today's society rests on its ability to be mass-produced using a highly automated process (semiconductor device fabrication) that achieves astonishingly low per-transistor costs. The invention of the first transistor at Bell Labs was named an IEEE Milestone in 2009.
Although several companies each produce over a billion individually packaged (known as discrete) transistors every year,FETs/MOSFETs: Smaller apps push up surface-mount supply. globalsources.com (April 18, 2007)
the vast majority of transistors are now produced in integrated circuits (often shortened to IC, microchips or simply chips), along with diodes, resistors, capacitors and other electronic components, to produce complete electronic circuits. A logic gate consists of up to about twenty transistors whereas an advanced microprocessor, as of 2009, can use as many as 3 billion transistors (MOSFETs)."ATI and Nvidia face off." CNET (October 7, 2009). Retrieved on February 2, 2011.
"About 60 million transistors were built in 2002… for [each] man, woman, and child on Earth."Turley, Jim (December 18, 2002).
"The Two Percent Solution". embedded.com
The transistor's low cost, flexibility, and reliability have made it a ubiquitous device. Transistorized mechatronic circuits have replaced electromechanical devices in controlling appliances and machinery. It is often easier and cheaper to use a standard microcontroller and write a computer program to carry out a control function than to design an equivalent mechanical system to control that same function.
Simplified operation
thumb|A simple circuit diagram to show the labels of a n–p–n bipolar transistor.|312x312px
The essential usefulness of a transistor comes from its ability to use a small signal applied between one pair of its terminals to control a much larger signal at another pair of terminals. This property is called gain. It can produce a stronger output signal, a voltage or current, which is proportional to a weaker input signal; that is, it can act as an amplifier. Alternatively, the transistor can be used to turn current on or off in a circuit as an electrically controlled switch, where the amount of current is determined by other circuit elements.
There are two types of transistors, which have slight differences in how they are used in a circuit. A bipolar transistor has terminals labeled base, collector, and emitter. A small current at the base terminal (that is, flowing between the base and the emitter) can control or switch a much larger current between the collector and emitter terminals. For a field-effect transistor, the terminals are labeled gate, source, and drain, and a voltage at the gate can control a current between source and drain.
The image represents a typical bipolar transistor in a circuit. Charge will flow between emitter and collector terminals depending on the current in the base. Because internally the base and emitter connections behave like a semiconductor diode, a voltage drop develops between base and emitter while the base current exists. The amount of this voltage depends on the material the transistor is made from, and is referred to as VBE.
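As a numeric illustration of that relationship, the sketch below feeds the base through a resistor and scales the resulting base current by the current gain; the component values and the gain of 100 are assumed example figures, not data for any particular device.

```python
# Illustrative bias arithmetic for the n-p-n circuit described above.
V_IN = 5.0        # volts applied to the base resistor (assumed)
R_B = 47e3        # ohms, base resistor (assumed)
V_BE = 0.65       # volts, typical silicon base-emitter drop
BETA = 100        # current gain, device-dependent (assumed)

i_b = (V_IN - V_BE) / R_B     # base current, roughly 93 microamps here
i_c = BETA * i_b              # collector current in the active region, roughly 9.3 mA
print(f"I_B = {i_b * 1e6:.1f} uA, I_C = {i_c * 1e3:.2f} mA")
```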
Transistor as a switch
thumb|300x300px|BJT used as an electronic switch, in grounded-emitter configuration.
Transistors are commonly used in digital circuits as electronic switches which can be either in an "on" or "off" state, both for high-power applications such as switched-mode power supplies and for low-power applications such as logic gates. Important parameters for this application include the current switched, the voltage handled, and the switching speed, characterised by the rise and fall times.
In a grounded-emitter transistor circuit, such as the light-switch circuit shown, as the base voltage rises, the emitter and collector currents rise exponentially. The collector voltage drops because of reduced resistance from collector to emitter. If the voltage difference between the collector and emitter were zero (or near zero), the collector current would be limited only by the load resistance (light bulb) and the supply voltage. This is called saturation because current is flowing from collector to emitter freely. When saturated, the switch is said to be on.
Providing sufficient base drive current is a key problem in the use of bipolar transistors as switches. The transistor provides current gain, allowing a relatively large current in the collector to be switched by a much smaller current into the base terminal. The ratio of these currents varies depending on the type of transistor, and even for a particular type, varies depending on the collector current. In the example light-switch circuit shown, the resistor is chosen to provide enough base current to ensure the transistor will be saturated.
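A back-of-the-envelope sketch of that base-resistor choice follows; the supply voltage, load current, worst-case gain and overdrive factor are all assumed example values.

```python
# Sizing the base resistor so the switch saturates even at worst-case gain.
V_SUPPLY = 12.0      # volts across the load circuit (assumed)
I_LOAD = 0.083       # amps drawn by the light bulb when on (assumed, ~1 W at 12 V)
BETA_MIN = 50        # worst-case current gain (assumed)
OVERDRIVE = 5        # drive the base several times harder than the minimum
V_DRIVE = 5.0        # voltage driving the base resistor (assumed)
V_BE_SAT = 0.7       # base-emitter drop when saturated

i_b_required = OVERDRIVE * I_LOAD / BETA_MIN
r_b = (V_DRIVE - V_BE_SAT) / i_b_required
print(f"base drive {i_b_required * 1e3:.1f} mA -> R_B about {r_b:.0f} ohms")
```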
In a switching circuit, the idea is to simulate, as near as possible, the ideal switch having the properties of open circuit when off, short circuit when on, and an instantaneous transition between the two states. Parameters are chosen such that the "off" output is limited to leakage currents too small to affect connected circuitry; the resistance of the transistor in the "on" state is too small to affect circuitry; and the transition between the two states is fast enough not to have a detrimental effect.
Transistor as an amplifier
thumb|366x366px|Amplifier circuit, common-emitter configuration with a voltage-divider bias circuit.
The common-emitter amplifier is designed so that a small change in voltage (Vin) changes the small current through the base of the transistor; the transistor's current amplification combined with the properties of the circuit means that small swings in Vin produce large changes in Vout.
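A rough estimate of the resulting voltage gain can be made with the usual small-signal rule of thumb, gain roughly -RC/re with re = VT/IC; the bias current and collector resistor below are assumed example values, and loading and emitter degeneration are ignored.

```python
# Rule-of-thumb gain estimate for a common-emitter stage (illustrative only).
V_T = 0.025      # thermal voltage, about 25 mV at room temperature
I_C = 1e-3       # collector bias current, 1 mA (assumed)
R_C = 4.7e3      # collector resistor, 4.7 kilohms (assumed)

r_e = V_T / I_C                 # intrinsic emitter resistance, about 25 ohms
voltage_gain = -R_C / r_e       # about -188 for these values
print(f"approximate small-signal voltage gain: {voltage_gain:.0f}")
```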
Various configurations of single transistor amplifier are possible, with some providing current gain, some voltage gain, and some both.
From mobile phones to televisions, vast numbers of products include amplifiers for sound reproduction, radio transmission, and signal processing. The first discrete-transistor audio amplifiers barely supplied a few hundred milliwatts, but power and audio fidelity gradually increased as better transistors became available and amplifier architecture evolved.
Modern transistor audio amplifiers of up to a few hundred watts are common and relatively inexpensive.
Comparison with vacuum tubes
Before transistors were developed, vacuum (electron) tubes (or in the UK "thermionic valves" or just "valves") were the main active components in electronic equipment.
Advantages
The key advantages that have allowed transistors to replace vacuum tubes in most applications are
no cathode heater (which produces the characteristic orange glow of tubes), reducing power consumption, eliminating delay as tube heaters warm up, and immune from cathode poisoning and depletion;
very small size and weight, reducing equipment size;
large numbers of extremely small transistors can be manufactured as a single integrated circuit;
low operating voltages compatible with batteries of only a few cells;
circuits with greater energy efficiency are usually possible. For low-power applications (e.g., voltage amplification) in particular, energy consumption can be very much less than for tubes;
inherent reliability and very long life; tubes always degrade and fail over time. Some transistorized devices have been in service for more than 50 years;
complementary devices available, providing design flexibility including complementary-symmetry circuits, not possible with vacuum tubes;
very low sensitivity to mechanical shock and vibration, providing physical ruggedness and virtually eliminating shock-induced spurious signals (e.g., microphonics in audio applications);
not susceptible to breakage of a glass envelope, leakage, outgassing, and other physical damage.
Limitations
Transistors have the following limitations:
silicon transistors can age and fail;John Keane and Chris H. Kim, "Transistor Aging," IEEE Spectrum (web feature), April 25, 2011.
high-power, high-frequency operation, such as that used in over-the-air television broadcasting, is better achieved in vacuum tubes due to improved electron mobility in a vacuum;
solid-state devices are susceptible to damage from very brief electrical and thermal events, including electrostatic discharge in handling; vacuum tubes are electrically much more rugged;
sensitivity to radiation and cosmic rays (special radiation-hardened chips are used for spacecraft devices);
vacuum tubes in audio applications create significant lower-harmonic distortion, the so-called tube sound, which some people prefer.
Types
|- style="text-align:center;"
|80px||PNP||80px||P-channel
|- style="text-align:center;"
|80px||NPN||80px||N-channel
|- style="text-align:center;"
|BJT||||JFET||
|- style="text-align:center;"
|80px||80px||80px||80px||P-channel
|- style="text-align:center;"
|80px||80px||80px||80px||N-channel
|- style="text-align:center;"
|JFET||colspan="2"|MOSFET enh||MOSFET dep
Transistors are categorized by
semiconductor material: the metalloids germanium (first used in 1947) and silicon (first used in 1954)—in amorphous, polycrystalline and monocrystalline form—, the compounds gallium arsenide (1966) and silicon carbide (1997), the alloy silicon-germanium (1989), the allotrope of carbon graphene (research ongoing since 2004), etc. (see Semiconductor material);
structure: BJT, JFET, IGFET (MOSFET), insulated-gate bipolar transistor, "other types";
electrical polarity (positive and negative): n–p–n, p–n–p (BJTs), n-channel, p-channel (FETs);
maximum power rating: low, medium, high;
maximum operating frequency: low, medium, high, radio (RF), microwave frequency (the maximum effective frequency of a transistor in a common-emitter or common-source circuit is denoted by the term fT, an abbreviation for transition frequency—the frequency of transition is the frequency at which the transistor yields unity voltage gain)
application: switch, general purpose, audio, high voltage, super-beta, matched pair;
physical packaging: through-hole metal, through-hole plastic, surface mount, ball grid array, power modules (see Packaging);
amplification factor hFE, βF (transistor beta) 071003 bcae1.com or gm (transconductance).
Hence, a particular transistor may be described as silicon, surface-mount, BJT, n–p–n, low-power, high-frequency switch.
A popular way to remember which symbol represents which type of transistor is to look at the arrow and how it is arranged. Within an NPN transistor symbol, the arrow will Not Point iN. Conversely, within the PNP symbol you see that the arrow Points iN Proudly.
Bipolar junction transistor (BJT)
Bipolar transistors are so named because they conduct by using both majority and minority carriers. The bipolar junction transistor, the first type of transistor to be mass-produced, is a combination of two junction diodes, and is formed of either a thin layer of p-type semiconductor sandwiched between two n-type semiconductors (an n–p–n transistor), or a thin layer of n-type semiconductor sandwiched between two p-type semiconductors (a p–n–p transistor). This construction produces two p–n junctions: a base–emitter junction and a base–collector junction, separated by a thin region of semiconductor known as the base region (two junction diodes wired together without sharing an intervening semiconducting region will not make a transistor).
BJTs have three terminals, corresponding to the three layers of semiconductor—an emitter, a base, and a collector. They are useful in amplifiers because the currents at the emitter and collector are controllable by a relatively small base current. In an n–p–n transistor operating in the active region, the emitter–base junction is forward biased (electrons and holes recombine at the junction), and electrons are injected into the base region. Because the base is narrow, most of these electrons will diffuse into the reverse-biased (electrons and holes are formed at, and move away from the junction) base–collector junction and be swept into the collector; perhaps one-hundredth of the electrons will recombine in the base, which is the dominant mechanism in the base current. By controlling the number of electrons that can leave the base, the number of electrons entering the collector can be controlled. Collector current is approximately β (common-emitter current gain) times the base current. It is typically greater than 100 for small-signal transistors but can be smaller in transistors designed for high-power applications.
Unlike the field-effect transistor (see below), the BJT is a low-input-impedance device. Also, as the base–emitter voltage (VBE) is increased the base–emitter current and hence the collector–emitter current (ICE) increase exponentially according to the Shockley diode model and the Ebers-Moll model. Because of this exponential relationship, the BJT has a higher transconductance than the FET.
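The exponential law can be sketched numerically as below, using a simplified Shockley form IC = IS·exp(VBE/VT) and ignoring the -1 term and the Early effect; the saturation current IS is an assumed, device-dependent constant.

```python
import math

I_S = 1e-14      # saturation current in amperes (assumed, device-dependent)
V_T = 0.025      # thermal voltage at room temperature, about 25 mV

def collector_current(v_be: float) -> float:
    """Simplified Shockley relation: exponential in the base-emitter voltage."""
    return I_S * math.exp(v_be / V_T)

i_c = collector_current(0.65)    # about 2 mA for these assumed values
g_m = i_c / V_T                  # transconductance implied by the exponential law
print(f"I_C = {i_c * 1e3:.2f} mA, g_m = {g_m * 1e3:.0f} mS")
```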
Bipolar transistors can be made to conduct by exposure to light, because absorption of photons in the base region generates a photocurrent that acts as a base current; the collector current is approximately β times the photocurrent. Devices designed for this purpose have a transparent window in the package and are called phototransistors.
Field-effect transistor (FET)
thumb|right|400px|Operation of a FET and its Id-Vg curve. At first, when no gate voltage is applied, there are no inversion electrons in the channel and the device is off. As the gate voltage increases, the inversion-electron density in the channel increases, the current increases, and the device turns on.
The field-effect transistor, sometimes called a unipolar transistor, uses either electrons (in n-channel FET) or holes (in p-channel FET) for conduction. The four terminals of the FET are named source, gate, drain, and body (substrate). On most FETs, the body is connected to the source inside the package, and this will be assumed for the following description.
In a FET, the drain-to-source current flows via a conducting channel that connects the source region to the drain region. The conductivity is varied by the electric field that is produced when a voltage is applied between the gate and source terminals; hence the current flowing between the drain and source is controlled by the voltage applied between the gate and source. As the gate–source voltage (VGS) is increased, the drain–source current (IDS) increases exponentially for VGS below threshold, and then at a roughly quadratic rate (IDS ∝ (VGS − VT)², where VT is the threshold voltage at which drain current begins) in the "space-charge-limited" region above threshold. A quadratic behavior is not observed in modern devices, for example, at the 65 nm technology node.
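A sketch of that above-threshold square-law behaviour follows; K and the threshold voltage are hypothetical device constants, and the sub-threshold exponential region and the short-channel effects mentioned above are deliberately ignored.

```python
K = 2e-3      # transconductance parameter in A/V^2 (assumed)
V_TH = 2.0    # threshold voltage in volts (assumed)

def drain_current(v_gs: float) -> float:
    """Idealized square law: IDS proportional to (VGS - VT)^2 above threshold."""
    if v_gs <= V_TH:
        return 0.0                     # sub-threshold leakage neglected
    return K * (v_gs - V_TH) ** 2

for v_gs in (1.5, 2.5, 3.5, 4.5):
    print(v_gs, drain_current(v_gs))
```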
For low noise at narrow bandwidth the higher input resistance of the FET is advantageous.
FETs are divided into two families: junction FET (JFET) and insulated gate FET (IGFET). The IGFET is more commonly known as a metal–oxide–semiconductor FET (MOSFET), reflecting its original construction from layers of metal (the gate), oxide (the insulation), and semiconductor. Unlike IGFETs, the JFET gate forms a p–n diode with the channel which lies between the source and drain. Functionally, this makes the n-channel JFET the solid-state equivalent of the vacuum tube triode which, similarly, forms a diode between its grid and cathode. Also, both devices operate in the depletion mode, they both have a high input impedance, and they both conduct current under the control of an input voltage.
Metal–semiconductor FETs (MESFETs) are JFETs in which the reverse biased p–n junction is replaced by a metal–semiconductor junction. These, and the HEMTs (high-electron-mobility transistors, or HFETs), in which a two-dimensional electron gas with very high carrier mobility is used for charge transport, are especially suitable for use at very high frequencies (microwave frequencies; several GHz).
FETs are further divided into depletion-mode and enhancement-mode types, depending on whether the channel is turned on or off with zero gate-to-source voltage. For enhancement mode, the channel is off at zero bias, and a gate potential can "enhance" the conduction. For the depletion mode, the channel is on at zero bias, and a gate potential (of the opposite polarity) can "deplete" the channel, reducing conduction. For either mode, a more positive gate voltage corresponds to a higher current for n-channel devices and a lower current for p-channel devices. Nearly all JFETs are depletion-mode because the diode junctions would forward bias and conduct if they were enhancement-mode devices;
most IGFETs are enhancement-mode types.
Usage of bipolar and field-effect transistors
The bipolar junction transistor (BJT) was the most commonly used transistor in the 1960s and 70s. Even after MOSFETs became widely available, the BJT remained the transistor of choice for many analog circuits such as amplifiers because of their greater linearity and ease of manufacture. In integrated circuits, the desirable properties of MOSFETs allowed them to capture nearly all market share for digital circuits. Discrete MOSFETs can be applied in transistor applications, including analog circuits, voltage regulators, amplifiers, power transmitters and motor drivers.
Other transistor types
thumb|right|270px|Transistor symbol created on Portuguese pavement in the University of Aveiro.
Bipolar junction transistor (BJT):
heterojunction bipolar transistor, up to several hundred GHz, common in modern ultrafast and RF circuits;
Schottky transistor;
avalanche transistor:
Darlington transistors are two BJTs connected together to provide a high current gain equal to the product of the current gains of the two transistors;
insulated-gate bipolar transistors (IGBTs) use a medium-power IGFET, similarly connected to a power BJT, to give a high input impedance. Power diodes are often connected between certain terminals depending on specific use. IGBTs are particularly suitable for heavy-duty industrial applications. The ASEA Brown Boveri (ABB) 5SNA2400E170100 illustrates just how far power semiconductor technology has advanced. Intended for three-phase power supplies, this device houses three n–p–n IGBTs in a case measuring 38 by 140 by 190 mm and weighing 1.5 kg. Each IGBT is rated at 1,700 volts and can handle 2,400 amperes;
phototransistor;
multiple-emitter transistor, used in transistor–transistor logic and integrated current mirrors;
multiple-base transistor, used to amplify very-low-level signals in noisy environments such as the pickup of a record player or radio front ends. Effectively, it is a very large number of transistors in parallel where, at the output, the signal is added constructively, but random noise is added only stochastically.Zhong Yuan Chang, Willy M. C. Sansen, Low-Noise Wide-Band Amplifiers in Bipolar and CMOS Technologies, page 31, Springer, 1991 ISBN 0792390962.
Field-effect transistor (FET):
carbon nanotube field-effect transistor (CNFET), where the channel material is replaced by a carbon nanotube;
junction gate field-effect transistor (JFET), where the gate is insulated by a reverse-biased p–n junction;
metal–semiconductor field-effect transistor (MESFET), similar to JFET with a Schottky junction instead of a p–n junction;
high-electron-mobility transistor (HEMT);
metal–oxide–semiconductor field-effect transistor (MOSFET), where the gate is insulated by a shallow layer of insulator;
inverted-T field-effect transistor (ITFET);
fin field-effect transistor (FinFET), source/drain region shapes fins on the silicon surface;
fast-reverse epitaxial diode field-effect transistor (FREDFET);
thin-film transistor, in LCDs;
organic field-effect transistor (OFET), in which the semiconductor is an organic compound;
ballistic transistor;
floating-gate transistor, for non-volatile storage;
FETs used to sense environment;
ion-sensitive field-effect transistor (ISFET), to measure ion concentrations in solution,
electrolyte–oxide–semiconductor field-effect transistor (EOSFET), neurochip,
deoxyribonucleic acid field-effect transistor (DNAFET).
Tunnel field-effect transistor, which switches by modulating quantum tunnelling through a barrier.
Diffusion transistor, formed by diffusing dopants into the semiconductor substrate; it may be either a BJT or a FET.
Unijunction transistors can be used as simple pulse generators. They comprise a main body of either P-type or N-type semiconductor with ohmic contacts at each end (terminals Base1 and Base2). A junction with the opposite semiconductor type is formed at a point along the length of the body for the third terminal (Emitter).
Single-electron transistors (SETs) consist of a gate island between two tunnelling junctions. The tunnelling current is controlled by a voltage applied to the gate through a capacitor.
Nanofluidic transistor, controls the movement of ions through sub-microscopic, water-filled channels.
Multigate devices:
tetrode transistor;
pentode transistor;
trigate transistor (prototype by Intel);
dual-gate field-effect transistors have a single channel with two gates in cascode; a configuration optimized for high-frequency amplifiers, mixers, and oscillators.
Junctionless nanowire transistor (JNT), uses a simple nanowire of silicon surrounded by an electrically isolated "wedding ring" that acts to gate the flow of electrons through the wire.
Vacuum-channel transistor: in 2012, NASA and the National Nanofab Center in South Korea were reported to have built a prototype vacuum-channel transistor only 150 nanometers in size; it can be manufactured cheaply using standard silicon semiconductor processing, can operate at high speeds even in hostile environments, and could consume just as much power as a standard transistor.
Organic electrochemical transistor.
Part numbering standards/specifications
The types of some transistors can be parsed from the part number. There are three major semiconductor naming standards; in each the alphanumeric prefix provides clues to type of the device.
Japanese Industrial Standard (JIS)
JIS transistor prefix table (prefix: type of transistor):
2SA: high-frequency p–n–p BJTs
2SB: audio-frequency p–n–p BJTs
2SC: high-frequency n–p–n BJTs
2SD: audio-frequency n–p–n BJTs
2SJ: P-channel FETs (both JFETs and MOSFETs)
2SK: N-channel FETs (both JFETs and MOSFETs)
The JIS-C-7012 specification for transistor part numbers starts with "2S", e.g. 2SD965, but sometimes the "2S" prefix is not marked on the package – a 2SD965 might only be marked "D965"; a 2SC1815 might be listed by a supplier as simply "C1815". This series sometimes has suffixes (such as "R", "O", "BL", standing for "red", "orange", "blue", etc.) to denote variants, such as tighter hFE (gain) groupings.
European Electronic Component Manufacturers Association (EECA)
The Pro Electron standard, the European Electronic Component Manufacturers Association part numbering scheme, begins with two letters: the first gives the semiconductor type (A for germanium, B for silicon, and C for materials like GaAs); the second letter denotes the intended use (A for diode, C for general-purpose transistor, etc.). A 3-digit sequence number (or one letter then 2 digits, for industrial types) follows. With early devices this indicated the case type. Suffixes may be used, with a letter (e.g. "C" often means high hFE, such as in: BC549C) or other codes may follow to show gain (e.g. BC327-25) or voltage rating (e.g. BUK854-800A). The more common prefixes are:
Pro Electron / EECA transistor prefix table (prefix: type and usage, with example and equivalent):
AC: germanium small-signal AF transistor (example AC126; equivalent NTE102A)
AD: germanium AF power transistor (example AD133; equivalent NTE179)
AF: germanium small-signal RF transistor (example AF117; equivalent NTE160)
AL: germanium RF power transistor (example ALZ10; equivalent NTE100)
AS: germanium switching transistor (example ASY28; equivalent NTE101)
AU: germanium power switching transistor (example AU103; equivalent NTE127)
BC: silicon small-signal ("general purpose") transistor (example BC548; equivalent 2N3904)
BD: silicon power transistor (example BD139; equivalent NTE375)
BF: silicon RF (high-frequency) BJT or FET (example BF245; equivalent NTE133)
BS: silicon switching transistor, BJT or MOSFET (example BS170; equivalent 2N7000)
BL: silicon high-frequency, high-power transistor, for transmitters (example BLW60; equivalent NTE325)
BU: silicon high-voltage transistor, for CRT horizontal deflection circuits (example BU2520A; equivalent NTE2354)
CF: gallium arsenide small-signal microwave transistor (MESFET) (example CF739)
CL: gallium arsenide microwave power transistor (FET) (example CLY10)
Joint Electron Devices Engineering Council (JEDEC)
The JEDEC EIA370 transistor device numbers usually start with "2N", indicating a three-terminal device (dual-gate field-effect transistors are four-terminal devices, so begin with 3N), then a 2, 3 or 4-digit sequential number with no significance as to device properties (although early devices with low numbers tend to be germanium). For example, 2N3055 is a silicon n–p–n power transistor, 2N1301 is a p–n–p germanium switching transistor. A letter suffix (such as "A") is sometimes used to indicate a newer variant, but rarely gain groupings.
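A small sketch of prefix-based classification under the three schemes described above is given below; it only inspects prefixes, treats everything else as proprietary, and the mapping strings are simplifications of the tables in this section.

```python
# Classify a transistor part number by prefix (JIS, JEDEC, Pro Electron); illustrative only.
JIS_PREFIXES = {
    "2SA": "high-frequency p-n-p BJT", "2SB": "audio-frequency p-n-p BJT",
    "2SC": "high-frequency n-p-n BJT", "2SD": "audio-frequency n-p-n BJT",
    "2SJ": "P-channel FET", "2SK": "N-channel FET",
}

def classify(part: str) -> str:
    part = part.upper()
    for prefix, kind in JIS_PREFIXES.items():
        if part.startswith(prefix):
            return f"JIS: {kind}"
    if part.startswith(("2N", "3N")):
        return "JEDEC-registered device"
    if len(part) >= 2 and part[0] in "ABC" and part[1].isalpha():
        return "Pro Electron (first letter = material, second letter = use)"
    return "proprietary or unknown scheme"

for p in ("2SC1815", "2N3055", "BC548", "MPF102"):
    print(p, "->", classify(p))
```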
Proprietary
Manufacturers of devices may have their own proprietary numbering system, for example CK722. Since devices are second-sourced, a manufacturer's prefix (like "MPF" in MPF102, which originally would denote a Motorola FET) now is an unreliable indicator of who made the device. Some proprietary naming schemes adopt parts of other naming schemes, for example a PN2222A is a (possibly Fairchild Semiconductor) 2N2222A in a plastic case (but a PN108 is a plastic version of a BC108, not a 2N108, while the PN100 is unrelated to other xx100 devices).
Military part numbers sometimes are assigned their own codes, such as the British Military CV Naming System.
Manufacturers buying large numbers of similar parts may have them supplied with "house numbers", identifying a particular purchasing specification and not necessarily a device with a standardized registered number. For example, an HP part 1854,0053 is a (JEDEC) 2N2218 transistor which is also assigned the CV number: CV7763
Naming problems
With so many independent naming schemes, and the abbreviation of part numbers when printed on the devices, ambiguity sometimes occurs. For example, two different devices may be marked "J176" (one the J176 low-power JFET, the other the higher-powered MOSFET 2SJ176).
As older "through-hole" transistors are given surface-mount packaged counterparts, they tend to be assigned many different part numbers because manufacturers have their own systems to cope with the variety in pinout arrangements and options for dual or matched n–p–n+p–n–p devices in one pack. So even when the original device (such as a 2N3904) may have been assigned by a standards authority, and well known by engineers over the years, the new versions are far from standardized in their naming.
Construction
Semiconductor material
Semiconductor material characteristics (junction forward voltage in V at 25 °C; electron and hole mobility in m²/(V·s) at 25 °C; maximum junction temperature in °C):
Ge: forward voltage 0.27; electron mobility 0.39; hole mobility 0.19; max. junction temperature 70 to 100
Si: forward voltage 0.71; electron mobility 0.14; hole mobility 0.05; max. junction temperature 150 to 200
GaAs: forward voltage 1.03; electron mobility 0.85; hole mobility 0.05; max. junction temperature 150 to 200
Al–Si junction: forward voltage 0.3; max. junction temperature 150 to 200
The first BJTs were made from germanium (Ge). Silicon (Si) types currently predominate but certain advanced microwave and high-performance versions now employ the compound semiconductor material gallium arsenide (GaAs) and the semiconductor alloy silicon germanium (SiGe). Single element semiconductor material (Ge and Si) is described as elemental.
Rough parameters for the most common semiconductor materials used to make transistors are given in the adjacent table; these parameters will vary with increase in temperature, electric field, impurity level, strain, and sundry other factors.
The junction forward voltage is the voltage applied to the emitter–base junction of a BJT in order to make the base conduct a specified current. The current increases exponentially as the junction forward voltage is increased. The values given in the table are typical for a current of 1 mA (the same values apply to semiconductor diodes). The lower the junction forward voltage the better, as this means that less power is required to "drive" the transistor. The junction forward voltage for a given current decreases with increase in temperature. For a typical silicon junction the change is −2.1 mV/°C. In some circuits special compensating elements (sensistors) must be used to compensate for such changes.
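The quoted -2.1 mV/°C coefficient can be applied directly, as in the sketch below; the starting voltage is the silicon value from the table, and the linear extrapolation is only a first-order approximation.

```python
TEMPCO = -2.1e-3      # volts per degree Celsius, typical silicon junction (from the text)
V_BE_25C = 0.71       # silicon junction forward voltage at 25 degC (from the table)

def junction_forward_voltage(temp_c: float) -> float:
    """First-order estimate of forward voltage versus temperature."""
    return V_BE_25C + TEMPCO * (temp_c - 25.0)

print(junction_forward_voltage(75.0))   # about 0.605 V at 75 degC
```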
The density of mobile carriers in the channel of a MOSFET is a function of the electric field forming the channel and of various other phenomena such as the impurity level in the channel. Some impurities, called dopants, are introduced deliberately in making a MOSFET, to control the MOSFET electrical behavior.
The electron mobility and hole mobility columns show the average speed at which electrons and holes drift through the semiconductor material with an electric field of 1 volt per meter applied across the material. In general, the higher the electron mobility the faster the transistor can operate. The table indicates that Ge is a better material than Si in this respect. However, Ge has four major shortcomings compared to silicon and gallium arsenide:
Its maximum temperature is limited;
it has relatively high leakage current;
it cannot withstand high voltages;
it is less suitable for fabricating integrated circuits.
Because the electron mobility is higher than the hole mobility for all semiconductor materials, a given bipolar n–p–n transistor tends to be swifter than an equivalent p–n–p transistor. GaAs has the highest electron mobility of the three semiconductors. It is for this reason that GaAs is used in high-frequency applications. A relatively recent FET development, the high-electron-mobility transistor (HEMT), has a heterostructure (junction between different semiconductor materials) of aluminium gallium arsenide (AlGaAs)-gallium arsenide (GaAs) which has twice the electron mobility of a GaAs-metal barrier junction. Because of their high speed and low noise, HEMTs are used in satellite receivers working at frequencies around 12 GHz. HEMTs based on gallium nitride and aluminium gallium nitride (AlGaN/GaN HEMTs) provide a still higher electron mobility and are being developed for various applications.
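The mobility figures translate into drift velocity through the low-field relation v = μE; the sketch below applies it to the silicon and GaAs electron mobilities from the table, at an arbitrary example field.

```python
MU_ELECTRON_SI = 0.14      # m^2/(V.s), silicon electrons (from the table above)
MU_ELECTRON_GAAS = 0.85    # m^2/(V.s), gallium arsenide electrons (from the table above)
E_FIELD = 1_000.0          # volts per metre, an arbitrary example field

def drift_velocity(mobility: float, field: float) -> float:
    """Idealized low-field drift velocity: mobility times electric field."""
    return mobility * field

print(drift_velocity(MU_ELECTRON_SI, E_FIELD))     # 140 m/s in silicon
print(drift_velocity(MU_ELECTRON_GAAS, E_FIELD))   # 850 m/s in GaAs
```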
Max. junction temperature values represent a cross section taken from various manufacturers' data sheets. This temperature should not be exceeded or the transistor may be damaged.
Al–Si junction refers to the high-speed (aluminum–silicon) metal–semiconductor barrier diode, commonly known as a Schottky diode. This is included in the table because some silicon power IGFETs have a parasitic reverse Schottky diode formed between the source and drain as part of the fabrication process. This diode can be a nuisance, but sometimes it is used in the circuit.
Packaging
thumb|Assorted discrete transistors.
thumb|Soviet KT315b transistors.
Discrete transistors are individually packaged transistors. Transistors come in many different semiconductor packages (see image). The two main categories are through-hole (or leaded), and surface-mount, also known as surface-mount device (SMD). The ball grid array (BGA) is the latest surface-mount package (currently only for large integrated circuits). It has solder "balls" on the underside in place of leads. Because they are smaller and have shorter interconnections, SMDs have better high-frequency characteristics but lower power rating.
Transistor packages are made of glass, metal, ceramic, or plastic. The package often dictates the power rating and frequency characteristics. Power transistors have larger packages that can be clamped to heat sinks for enhanced cooling. Additionally, most power transistors have the collector or drain physically connected to the metal enclosure. At the other extreme, some surface-mount microwave transistors are as small as grains of sand.
Often a given transistor type is available in several packages. Transistor packages are mainly standardized, but the assignment of a transistor's functions to the terminals is not: other transistor types can assign other functions to the package's terminals. Even for the same transistor type the terminal assignment can vary (normally indicated by a suffix letter to the part number, e.g. BC212L and BC212K).
Nowadays most transistors come in a wide range of SMT packages; by comparison, the list of available through-hole packages is relatively small. A short list of the most common through-hole transistor packages, in alphabetical order, follows:
ATV, E-line, MRT, HRT, SC-43, SC-72, TO-3, TO-18, TO-39, TO-92, TO-126, TO-220, TO-247, TO-251, TO-262, ZTX851
Flexible transistors
Researchers have made several kinds of flexible transistors, including organic field-effect transistors. Flexible transistors are useful in some kinds of flexible displays and other flexible electronics.
See also
Band gap
Digital electronics
Moore's law
Semiconductor device modeling
Transistor count
Transistor model
Transresistance
Very-large-scale integration
Directory of external websites with datasheets
2N3904/2N3906, BC182/BC212 and BC546/BC556: Ubiquitous, BJT, general-purpose, low-power, complementary pairs. They have plastic cases and cost roughly ten cents U.S. in small quantities, making them popular with hobbyists.
AF107: Germanium, 0.5 watt, 250 MHz p–n–p BJT.
BFP183: Low-power, 8 GHz microwave n–p–n BJT.
LM394: "supermatch pair", with two n–p–n BJTs on a single substrate.
2N2219A/2N2905A: BJT, general purpose, medium power, complementary pair. With metal cases they are rated at about one watt.
2N3055/MJ2955: For years, the n–p–n 2N3055 has been the "standard" power transistor. Its complement, the p–n–p MJ2955 arrived later. These 1 MHz, 15 A, 60 V, 115 W BJTs are used in audio-power amplifiers, power supplies, and control.
2SC3281/2SA1302: Made by Toshiba, these BJTs have low-distortion characteristics and are used in high-power audio amplifiers. They have been widely counterfeited.
BU508: n–p–n, 1500 V power BJT. Designed for television horizontal deflection, its high voltage capability also makes it suitable for use in ignition systems.
MJ11012/MJ11015: 30 A, 120 V, 200 W, high power Darlington complementary pair BJTs. Used in audio amplifiers, control, and power switching.
2N5457/2N5460: JFET (depletion mode), general purpose, low power, complementary pair.
BSP296/BSP171: IGFET (enhancement mode), medium power, near complementary pair. Used for logic level conversion and driving power transistors in amplifiers.
IRF3710/IRF5210: IGFET (enhancement mode), 40 A, 100 V, 200 W, near complementary pair. For high-power amplifiers and power switches, especially in automobiles.
References
Further reading
The invention of the transistor & the birth of the information age
External links
[http://www.ck722museum.com/ The CK722 Museum]. Website devoted to the "classic" hobbyist germanium transistor
The Transistor Educational content from Nobelprize.org
BBC: Building the digital age photo history of transistors
The Bell Systems Memorial on Transistors
IEEE Global History Network, The Transistor and Portable Electronics. All about the history of transistors and integrated circuits.
[http://www.pbs.org/transistor/ Transistorized]. Historical and technical information from the Public Broadcasting Service
[http://www.aps.org/publications/apsnews/200011/history.cfm This Month in Physics History: November 17 to December 23, 1947: Invention of the First Transistor]. From the American Physical Society
[http://www.sciencefriday.com/pages/1997/Dec/hour1_121297.html 50 Years of the Transistor]. From Science Friday, December 12, 1997
Pinouts
Common transistor pinouts
Datasheets
Charts showing many characteristics and links to most datasheets for 2N, 2SA, 2SB. 2SC, 2SD, 2SH-K, and other numbers.
Discrete Databook (Historical 1978), National Semiconductor (now Texas Instruments)
Discrete Databook (Historical 1982), SGS (now STMicroelectronics)
Small-Signal Transistor Databook (Historical 1984), Motorola
Discrete Databook (Historical 1985), Fairchild
Category:Electrical components
Category:Semiconductor devices
Category:American inventions
Category:1947 in computer science
Category:1947 in technology
Category:Computer-related introductions in 1947
Category:Bell Labs
Tuvalu
Tuvalu, formerly known as the Ellice Islands, is a Polynesian island nation located in the Pacific Ocean, midway between Hawaii and Australia, lying east-northeast of the Santa Cruz Islands (belonging to the Solomons), southeast of Nauru, south of Kiribati, west of Tokelau, northwest of Samoa and Wallis and Futuna and north of Fiji. It comprises three reef islands and six true atolls spread out between the latitude of 5° to 10° south and longitude of 176° to 180°, west of the International Date Line. Tuvalu has a population of 10,640 (2012 census). The total land area of the islands of Tuvalu is about 26 square kilometres.
The first inhabitants of Tuvalu were Polynesians. The pattern of settlement that is believed to have occurred is that the Polynesians spread out from Samoa and Tonga into the Tuvaluan atolls, with Tuvalu providing a stepping stone to migration into the Polynesian Outlier communities in Melanesia and Micronesia.
In 1568, Spanish navigator Álvaro de Mendaña was the first European to sail through the archipelago, sighting the island of Nui during his expedition in search of Terra Australis. In 1819 the island of Funafuti was named Ellice's Island; the name Ellice was applied to all nine islands after the work of English hydrographer Alexander George Findlay. The islands came under Britain's sphere of influence in the late 19th century, when each of the Ellice Islands was declared a British Protectorate by Captain Gibson of between 9 and 16 October 1892. The Ellice Islands were administered as British protectorate by a Resident Commissioner from 1892 to 1916 as part of the British Western Pacific Territories (BWPT), and then as part of the Gilbert and Ellice Islands colony from 1916 to 1974.
A referendum was held in December 1974 to determine whether the Gilbert Islands and Ellice Islands should each have their own administration. As a consequence of the referendum, the Gilbert and Ellice Islands colony ceased to exist on 1 January 1976 and the separate British colonies of Kiribati and Tuvalu came into existence. Tuvalu became fully independent within the Commonwealth on 1 October 1978. On 5 September 2000 Tuvalu became the 189th member of the United Nations.
thumb|right|300px|Lat. and Long. (Funafuti)
History
Pre-history
The origins of the people of Tuvalu are addressed in the theories regarding migration into the Pacific that began about 3000 years ago. During pre-European-contact times there was frequent canoe voyaging between the nearer islands including Samoa and Tonga. Eight of the nine islands of Tuvalu were inhabited; thus the name, Tuvalu, means "eight standing together" in Tuvaluan (compare to *walo meaning "eight" in Proto-Austronesian). Possible evidence of fire in the Caves of Nanumanga may indicate human occupation for thousands of years.
An important creation myth of the islands of Tuvalu is the story of te Pusi mo te Ali (the Eel and the Flounder), who created the islands of Tuvalu; te Ali (the flounder) is believed to be the origin of the flat atolls of Tuvalu and te Pusi (the eel) is the model for the coconut palms that are important in the lives of Tuvaluans. The stories as to the ancestors of the Tuvaluans vary from island to island. On Niutao, Funafuti and Vaitupu the founding ancestor is described as being from Samoa;O'Brien, Talakatoa in Tuvalu: A History, Chapter 1, Genesis whereas on Nanumea the founding ancestor is described as being from Tonga.
Early contacts with other cultures
thumb|A Tuvaluan man in traditional costume drawn by Alfred Agate in 1841 during the United States Exploring Expedition.
Tuvalu was first sighted by Europeans on 16 January 1568 during the voyage of Álvaro de Mendaña from Spain who sailed past Nui and charted it as Isla de Jesús (Spanish for "Island of Jesus") because the previous day was the feast of the Holy Name. Mendaña made contact with the islanders but was unable to land.Maude, H.E. Spanish discoveries in the Central Pacific. A study in identification Journal of the Polynesian Society, Wellington, LXVIII, (1959), p.299,303. During Mendaña's second voyage across the Pacific he passed Niulakita on 29 August 1595, which he named La Solitaria.
Captain John Byron passed through the islands of Tuvalu in 1764 during his circumnavigation of the globe and charted the atolls as Lagoon Islands. Keith S. Chambers and Doug Munro (1980) identified Niutao as the island that Francisco Mourelle de la Rúa sailed past on 5 May 1781, thus solving what Europeans had called The Mystery of Gran Cocal.Kofe, Laumua; Palagi and Pastors in Tuvalu: A History, Ch. 15 Mourelle's map and journal named the island El Gran Cocal ('The Great Coconut Plantation'); however, the latitude and longitude were uncertain. Longitude could only be reckoned crudely, as accurate chronometers were unavailable until the late 18th century.
The next European to visit was Arent Schuyler de Peyster, of New York, captain of the armed brigantine or privateer Rebecca, sailing under British colours,The De Peysters. corbett-family-history.com which passed through the southern Tuvaluan waters in May 1819; de Peyster sighted Nukufetau and Funafuti, which he named Ellice's Island after an English politician, Edward Ellice, the Member of Parliament for Coventry and the owner of the Rebecca's cargo. The name Ellice was applied to all nine islands after the work of English hydrographer Alexander George Findlay.A Directory for the Navigation of the Pacific Ocean: With Description of Its Coasts, Islands, Etc. from the Strait of Magalhaens to the Arctic Sea (1851)
In 1820 the Russian explorer Mikhail Lazarev visited Nukufetau as commander of the Mirny. Louis Isidore Duperrey, captain of La Coquille, sailed past Nanumanga in May 1824 during a circumnavigation of the earth (1822–1825). A Dutch expedition (the frigate Maria Reigersberg) found Nui on the morning of 14 June 1825 and named the main island (Fenua Tapu) as Nederlandsch Eiland.Pieter Troost: Aanteekeningen gehouden op eene reis om de wereld: met het fregat de Maria Reigersberg en de ... (1829) Digitalisat
Whalers began roving the Pacific, although visiting Tuvalu only infrequently because of the difficulties of landing on the atolls. Captain George Barrett of the Nantucket whaler Independence II has been identified as the first whaler to hunt the waters around Tuvalu. In November 1821 he bartered coconuts from the people of Nukulaelae and also visited Niulakita. A shore camp was established on Sakalua islet of Nukufetau, where coal was used to melt down the whale blubber.
For less than a year in 1862–63, Peruvian ships engaged in what came to be called the "blackbirding" trade combed the smaller islands of Polynesia from Easter Island in the eastern Pacific to Tuvalu and the southern atolls of the Gilbert Islands (now Kiribati), seeking recruits to fill the extreme labour shortage in Peru.Maude, H.E. (1981) Slavers in Paradise, Stanford University Press, ISBN 0804711062. While some islanders were voluntary recruits, the "blackbirders" were notorious for enticing islanders onto ships with tricks, such as pretending to be Christian missionaries, as well as kidnapping islanders at gunpoint. The Rev. A. W. Murray,Murray A.W. (1876). Forty Years' Mission Work. London Nisbet the earliest European missionary in Tuvalu, reported that in 1863 about 170 people were taken from Funafuti and about 250 from Nukulaelae, since fewer than 100 of the 300 people recorded as living on Nukulaelae in 1861 remained.
Christianity came to Tuvalu in 1861 when Elekana, a deacon of a Congregational church in Manihiki, Cook Islands, was caught in a storm and drifted for eight weeks before landing at Nukulaelae on 10 May 1861. Elekana began proselytising Christianity. He was trained at Malua Theological College, a London Missionary Society (LMS) school in Samoa, before beginning his work in establishing the Church of Tuvalu. In 1865 the Rev. A. W. Murray of the LMS – a Protestant congregationalist missionary society – arrived as the first European missionary; he too proselytised among the inhabitants of Tuvalu. By 1878 Protestantism was well established, with preachers on each island. In the later 19th and early 20th centuries the ministers of what became the Church of Tuvalu (Te Ekalesia Kelisiano Tuvalu) were predominantly Samoans, who influenced the development of the Tuvaluan language and the music of Tuvalu.
The islands came under Britain's sphere of influence in the late 19th century, when each of the Ellice Islands was declared a British Protectorate by Captain Gibson between 9 and 16 October 1892.
Trading firms and traders
thumb|right|300px|Islands of Tuvalu
Trading companies became active in Tuvalu in the mid-19th century; the trading companies engaged palagi traders who lived on the islands. John (also known as Jack) O'Brien was the first European to settle in Tuvalu; he became a trader on Funafuti in the 1850s. He married Salai, the daughter of the paramount chief of Funafuti. Louis Becke, who later found success as a writer, was a trader on Nanumanga from April 1880 until the trading station was destroyed later that year in a cyclone. He then became a trader on Nukufetau.
In 1892 Captain Davis reported on trading activities and traders on each of the islands visited. Captain Davis identified the following traders in the Ellice Group: Edmund Duffy (Nanumea); Jack Buckland (Niutao); Harry Nitz (Vaitupu); Jack O'Brien (Funafuti); Alfred Restieaux and Emile Fenisot (Nukufetau); and Martin Kleis (Nui). During this time, the greatest number of palagi traders lived on the atolls, acting as agents for the trading companies. Some islands would have competing traders, while drier islands might only have a single trader.
In the later 1890s and into the first decade of the 20th century, structural changes occurred in the operation of the Pacific trading companies; they moved from a practice of having traders resident on each island to instead becoming a business operation where the supercargo (the cargo manager of a trading ship) would deal directly with the islanders when a ship visited an island. From 1900 the number of palagi traders in Tuvalu declined, and the last of the palagi traders were Fred Whibley on Niutao, Alfred Restieaux on Nukufetau, and Martin Kleis on Nui. By 1909 there were no more resident palagi traders representing the trading companies, although both Whibley and Restieaux remained in the islands until their deaths.
Scientific expeditions and travellers
thumb|left|A man from the Nukufetau atoll, drawn by Alfred Agate 1841.
The United States Exploring Expedition under Charles Wilkes visited Funafuti, Nukufetau and Vaitupu in 1841.Tyler, David B. – 1968 The Wilkes Expedition. The First United States Exploring Expedition (1838–42). Philadelphia: American Philosophical Society During this expedition Alfred Thomas Agate, engraver and illustrator, recorded the dress and tattoo patterns of the men of Nukufetau.
In 1885 or 1886 the New Zealand photographer Thomas Andrew visited Funafuti and Nui.
In 1890 Robert Louis Stevenson, his wife Fanny Vandegrift Stevenson and her son Lloyd Osbourne sailed on the Janet Nicoll, a trading steamer owned by Henderson and Macfarlane of Auckland, New Zealand, which operated between Sydney and Auckland and into the central Pacific.The Circular Saw Shipping Line. Anthony G. Flude. 1993. (Chapter 7) The Janet Nicoll visited three of the Ellice Islands; while Fanny records that they made landfall at Funafuti, Niutao and Nanumea, Jane Resture suggests that it was more likely they landed at Nukufetau rather than Funafuti. An account of this voyage was written by Fanny Stevenson and published under the title The Cruise of the Janet Nichol (Fanny mis-names the ship as the Janet Nicol in her account of the 1890 voyage; Janet Nicoll is the correct spelling),Stevenson, Fanny Van de Grift (1914) The Cruise of the Janet Nichol among the South Sea Islands, republished in 2003, Roslyn Jolly (ed.), U. of Washington Press/U. of New South Wales Press, ISBN 0868406066 together with photographs taken by Robert Louis Stevenson and Lloyd Osbourne.
In 1894 Count Rudolf Festetics de Tolna, his wife Eila (née Haggin) and her daughter Blanche Haggin visited Funafuti aboard the yacht Le Tolna.Festetics De Tolna, Comte Rodolphe (1903) Chez les cannibales: huit ans de croisière dans l'océan Pacifique à bord du, Paris: Plon-Nourrit The Count spent several days photographing men and women on Funafuti.
thumb|upright=1.0|alt=1900, Woman on Funafuti, Tuvalu, then known as Ellice Islands| Woman on Funafuti, taken by Harry Clifford Fassett (1900)
The boreholes on Funafuti, at the site now called Darwin's Drill, are the result of drilling conducted by the Royal Society of London for the purpose of investigating the formation of coral reefs, to determine whether traces of shallow-water organisms could be found at depth in the coral of Pacific atolls. This investigation followed the work on The Structure and Distribution of Coral Reefs conducted by Charles Darwin in the Pacific. Drilling occurred in 1896, 1897 and 1898. Professor Edgeworth David of the University of Sydney was a member of the 1896 "Funafuti Coral Reef Boring Expedition of the Royal Society" under Professor William Sollas, and led the expedition in 1897.David, Mrs Edgeworth, Funafuti or Three Months on a Coral Atoll: an unscientific account of a scientific expedition, London: John Murray, 1899 Photographers on these trips recorded people, communities, and scenes at Funafuti.
Charles Hedley, a naturalist at the Australian Museum, accompanied the 1896 expedition and during his stay on Funafuti collected invertebrate and ethnological objects. The descriptions of these were published in Memoir III of the Australian Museum Sydney between 1896 and 1900. Hedley also wrote the General Account of the Atoll of Funafuti, The Ethnology of Funafuti, and The Mollusca of Funafuti.Fairfax, Denis (1983) "Hedley, Charles (1862–1926)", pp. 252–253 in Australian Dictionary of Biography, Volume 9, Melbourne University Press. Retrieved 5 May 2013 Edgar Waite was also part of the 1896 expedition and published The mammals, reptiles, and fishes of Funafuti. William Rainbow described the spiders and insects collected at Funafuti in The insect fauna of Funafuti.
Harry Clifford Fassett, captain's clerk and photographer, recorded people, communities and scenes at Funafuti in 1900 during a visit of USFC Albatross when the United States Fish Commission was investigating the formation of coral reefs on Pacific atolls.
Colonial administration
From 1892 to 1916 the Ellice Islands were administered as a British Protectorate, as part of the British Western Pacific Territories (BWPT), by a Resident Commissioner based in the Gilbert Islands. In 1916 the administration of the BWPT ended and the Gilbert and Ellice Islands Colony was established, which existed from 1916 to 1974.
Second World War
During the Pacific War Funafuti was used as a base to prepare for the subsequent seaborne attacks on the Gilbert Islands (Kiribati) that were occupied by Japanese forces. The United States Marine Corps landed on Funafuti on 2 October 1942 and on Nanumea and Nukufetau in August 1943. The Japanese had already occupied Tarawa and other islands in what is now Kiribati, but were delayed by the losses at the Battle of the Coral Sea. The islanders assisted the American forces to build airfields on Funafuti, Nanumea and Nukufetau and to unload supplies from ships. On Funafuti the islanders shifted to the smaller islets so as to allow the American forces to build the airfield, naval bases and port facilities on Fongafale. A Naval Construction Battalion (Seabees) built a seaplane ramp on the lagoon side of Fongafale islet for operations by both short- and long-range seaplanes, and a compacted coral runway was constructed on Fongafale, with further runways constructed to create Nanumea Airfield and Nukufetau Airfield. USN Patrol Torpedo Boats (PTs) were based at Funafuti from 2 November 1942 to 11 May 1944.
The atolls of Tuvalu acted as staging posts during the preparation for the Battle of Tarawa and the Battle of Makin, which commenced on 20 November 1943 as part of "Operation Galvanic". After the war the military airfield on Funafuti was developed into Funafuti International Airport.
Post-World War II – transition to independence
The formation of the United Nations after World War II resulted in the United Nations Special Committee on Decolonization committing to a process of decolonization; as a consequence the British colonies in the Pacific started on a path to self-determination.
In 1974 ministerial government was introduced to the Gilbert and Ellice Islands Colony through a change to the Constitution. In that year a general election was held; and a referendum was held in December 1974 to determine whether the Gilbert Islands and Ellice Islands should each have their own administration.Nohlen, D, Grotz, F & Hartmann, C (2001) Elections in Asia: A data handbook, Volume II, p831 ISBN 0-19-924959-8 As a consequence of the referendum, separation occurred in two stages. The Tuvaluan Order 1975, which took effect on 1 October 1975, recognised Tuvalu as a separate British dependency with its own government. The second stage occurred on 1 January 1976 when separate administrations were created out of the civil service of the Gilbert and Ellice Islands Colony.
Elections to the House of Assembly of the British Colony of Tuvalu were held on 27 August 1977; with Toaripi Lauti being appointed Chief Minister in the House of Assembly of the Colony of Tuvalu on 1 October 1977. The House of Assembly was dissolved in July 1978 with the government of Toaripi Lauti continuing as a caretaker government until the 1981 elections were held. Toaripi Lauti became the first Prime Minister on 1 October 1978 when Tuvalu became an independent nation.
Tuvalu became fully independent within the Commonwealth on 1 October 1978. On 5 September 2000 Tuvalu became the 189th member of the United Nations.
Government
thumb|Government office building
Parliamentary democracy
The Constitution of Tuvalu states that it is "the supreme law of Tuvalu" and that "all other laws shall be interpreted and applied subject to this Constitution"; it sets out the Principles of the Bill of Rights and the Protection of the Fundamental Rights and Freedoms.
Tuvalu is a parliamentary democracy and Commonwealth realm with Queen Elizabeth II serving as the country's head of state and bearing the title Queen of Tuvalu. Since the Queen does not reside in the islands, she is represented in Tuvalu by a Governor General appointed by the Queen upon the advice of the Prime Minister of Tuvalu. In 1986 and 2008, referenda confirmed the monarchy.
From 1974 (the creation of the British colony of Tuvalu) until independence, the legislative body of Tuvalu was called the House of the Assembly or Fale I Fono. Following independence in October 1978 the House of the Assembly was renamed the Parliament of Tuvalu or Palamene o Tuvalu. The unicameral Parliament has 15 members with elections held every four years. The members of parliament select the Prime Minister (who is the head of government) and the Speaker of Parliament. The ministers that form the Cabinet are appointed by the Governor General on the advice of the Prime Minister.
There are no formal political parties and election campaigns are largely based on personal/family ties and reputations.
The Tuvalu National Library and Archives holds "vital documentation on the cultural, social and political heritage of Tuvalu", including surviving records from the colonial administration, as well as Tuvalu government archives."Tuvalu National Archives major project", British Library
Legal system
There are eight Island Courts and Lands Courts; appeals in relation to land disputes are made to the Lands Courts Appeal Panel. Appeals from the Island Courts and the Lands Courts Appeal Panel are made to the Magistrates Court, which has jurisdiction to hear civil cases involving up to $T10,000. The superior court is the High Court of Tuvalu as it has unlimited original jurisdiction to determine the Law of Tuvalu and to hear appeals from the lower courts. Rulings of the High Court can be appealed to the Court of Appeal of Tuvalu. From the Court of Appeal there is a right of appeal to Her Majesty in Council, i.e., the Privy Council in London.
The Law of Tuvalu comprises the Acts voted into law by the Parliament of Tuvalu and statutory instruments that become law; certain Acts passed by the Parliament of the United Kingdom (during the time Tuvalu was either a British protectorate or British colony); the common law; and customary law (particularly in relation to the ownership of land). The land tenure system is largely based on kaitasi (extended family ownership).
Foreign relations
Tuvalu participates in the work of the Secretariat of the Pacific Community (SPC, sometimes Pacific Community) and is a member of the Pacific Islands Forum, the Commonwealth of Nations and the United Nations. Tuvalu has maintained a mission at the United Nations in New York City since 2000. Tuvalu is a member of the World Bank and the Asian Development Bank. On 18 February 2016 Tuvalu signed the Pacific Islands Development Forum Charter and formally joined the Pacific Islands Development Forum (PIDF).
Tuvalu maintains close relations with Fiji, New Zealand, Australia, Japan, South Korea, Taiwan, the United States of America, the United Kingdom and the European Union. It has diplomatic relations with Taiwan, which maintains the only resident embassy in Tuvalu and has a large assistance programme in the islands.
A major international priority for Tuvalu in the UN, at the 2002 Earth Summit in Johannesburg, South Africa and in other international fora, is promoting concern about global warming and possible sea level rise. Tuvalu advocates ratification and implementation of the Kyoto Protocol. In December 2009 the islands stalled talks on climate change at the United Nations Climate Change Conference in Copenhagen, fearing some other developing countries were not committing fully to binding deals on a reduction in carbon emissions. Their chief negotiator stated, "Tuvalu is one of the most vulnerable countries in the world to climate change and our future rests on the outcome of this meeting."
Tuvalu participates in the Alliance of Small Island States (AOSIS), which is a coalition of small island and low-lying coastal countries that have concerns about their vulnerability to the adverse effects of global climate change. Under the Majuro Declaration, which was signed on 5 September 2013, Tuvalu committed to generating 100% of its electricity from renewable energy (between 2013 and 2020), which is proposed to be implemented using solar PV (95% of demand) and biodiesel (5% of demand). The feasibility of wind power generation will be considered. Tuvalu participates in the operations of the Pacific Islands Applied Geoscience Commission (SOPAC) and the Secretariat of the Pacific Regional Environment Programme (SPREP).
Tuvalu is party to a treaty of friendship with the United States, signed soon after independence and ratified by the US Senate in 1983, under which the United States renounced prior territorial claims to four Tuvaluan islands (Funafuti, Nukufetau, Nukulaelae and Niulakita) under the Guano Islands Act of 1856.
Tuvalu participates in the operations of the Pacific Island Forum Fisheries Agency (FFA) and the Western and Central Pacific Fisheries Commission (WCPFC). The Tuvaluan government, the US government, and the governments of other Pacific islands are parties to the South Pacific Tuna Treaty (SPTT), which entered into force in 1988. Tuvalu is also a member of the Nauru Agreement, which addresses the management of tuna purse seine fishing in the tropical western Pacific. In May 2013 representatives from the United States and the Pacific Islands countries agreed to sign interim arrangement documents to extend the Multilateral Fisheries Treaty (which encompasses the South Pacific Tuna Treaty) to confirm access to the fisheries in the Western and Central Pacific for US tuna boats for 18 months. Tuvalu and the other members of the Pacific Island Forum Fisheries Agency (FFA) and the United States settled a tuna fishing deal for 2015; a longer-term deal will be negotiated. The treaty is an extension of the Nauru Agreement and provides for US-flagged purse seine vessels to fish 8,300 days in the region in return for a payment of US$90 million, made up of tuna fishing industry and US Government contributions. In 2015 Tuvalu refused to sell fishing days to certain nations and fleets that had blocked Tuvaluan initiatives to develop and sustain their own fishery. In 2016 the Minister of Natural Resources drew attention to Article 30 of the WCPF Convention, which describes the collective obligation of members to consider the disproportionate burden that management measures might place on small-island developing states.
In July 2013 Tuvalu signed the Memorandum of Understanding (MOU) to establish the Pacific Regional Trade and Development Facility, which originated in 2006 in the context of negotiations for an Economic Partnership Agreement (EPA) between Pacific ACP States and the European Union. The rationale for the creation of the Facility was to improve the delivery of aid to Pacific island countries in support of their Aid-for-Trade (AfT) requirements. The Pacific ACP States are the countries in the Pacific that are signatories to the Cotonou Agreement with the European Union.
Defence and law enforcement
Tuvalu has no regular military forces and spends no money on the military. Its national police force, the Tuvalu Police Force, headquartered in Funafuti, includes a maritime surveillance unit, customs, prisons and immigration. Police officers wear British-style uniforms.
The police have a Pacific-class patrol boat (HMTSS Te Mataili) provided by Australia in October 1994 under the Pacific Patrol Boat Programme for use in maritime surveillance and fishery patrol and for search-and-rescue missions. ("HMTSS" stands for His/Her Majesty's Tuvaluan State Ship or for His/Her Majesty's Tuvalu Surveillance Ship.)
Crime in Tuvalu is not a significant social problem, due to an effective criminal justice system, the influence of the Falekaupule (the traditional assembly of elders of each island) and the central role of religious institutions in the Tuvaluan community.
Administrative divisions
Tuvalu consists of six atolls and three reef islands. The smallest, Niulakita, is administered as part of Niutao.
Each island has its own high-chief, or ulu-aliki, and several sub-chiefs (alikis). The community council is the Falekaupule (the traditional assembly of elders) or te sina o fenua (literally: "grey-hairs of the land"). In the past, another caste, the priests (tofuga), were also amongst the decision-makers. The ulu-aliki and aliki exercise informal authority at the local level. Ulu-aliki are always chosen based on ancestry. Under the Falekaupule Act (1997), the powers and functions of the Falekaupule are now shared with the pule o kaupule (elected village presidents; one on each atoll).
thumb|250px|A map of Tuvalu.
Local government districts consisting of more than one islet:
Funafuti
Nanumea
Nui
Nukufetau
Nukulaelae
Vaitupu
Local government districts consisting of only one island:
Nanumanga
Niulakita
Niutao
Society
Demographics
thumb|right|Population distribution of Tuvalu by age group (2014).
The population at the 2002 census was 9,561, and the population at the 2012 census was 10,640. The 2015 estimate of the population is 10,869. The population of Tuvalu is primarily of Polynesian ethnicity with approximately 5.6% of the population being Micronesian.
Life expectancy for women in Tuvalu is 68.41 years and 64.01 years for men (2015 est.). The country's population growth rate is 0.82% (2015 est.), and the net migration rate is estimated at −6.81 migrant(s)/1,000 population (2015 est.). The threat of global warming in Tuvalu is not yet a dominant motivation for migration, as Tuvaluans appear to prefer to continue living on the islands for reasons of lifestyle, culture and identity.
From 1947 to 1983 a number of Tuvaluans from Vaitupu migrated to Kioa, an island in Fiji. The settlers from Tuvalu were granted Fijian citizenship in 2005. In recent years New Zealand and Australia have been the primary destinations for migration or seasonal work.
In 2014 attention was drawn to an appeal to the New Zealand Immigration and Protection Tribunal against the deportation of a Tuvaluan family on the basis that they were "climate change refugees" who would suffer hardship resulting from the environmental degradation of Tuvalu. However, the subsequent grant of residence permits to the family was made on grounds unrelated to the refugee claim. The family was successful in their appeal because, under the relevant immigration legislation, there were "exceptional circumstances of a humanitarian nature" that justified the grant of resident permits, as the family was integrated into New Zealand society with a sizeable extended family which had effectively relocated to New Zealand. Indeed, in 2013 a claim by a Kiribati man to be a "climate change refugee" under the Convention relating to the Status of Refugees (1951) was determined by the New Zealand High Court to be untenable, as there was no persecution or serious harm related to any of the five stipulated Refugee Convention grounds. Permanent migration to Australia and New Zealand, such as for family reunification, requires compliance with the immigration legislation of those countries.
New Zealand has an annual quota of 75 Tuvaluans granted work permits under the Pacific Access Category, as announced in 2001. The applicants register for the Pacific Access Category (PAC) ballots; the primary criterion is that the principal applicant must have a job offer from a New Zealand employer. Tuvaluans also have access to seasonal employment in the horticulture and viticulture industries in New Zealand under the Recognised Seasonal Employer (RSE) Work Policy, introduced in 2007, allowing for employment of up to 5,000 workers from Tuvalu and other Pacific islands. Tuvaluans can participate in the Australian Pacific Seasonal Worker Program, which allows Pacific Islanders to obtain seasonal employment in the Australian agriculture industry, in particular cotton and cane operations; the fishing industry, in particular aquaculture; and with accommodation providers in the tourism industry.
Languages
The Tuvaluan language and English are the national languages of Tuvalu. Tuvaluan is of the Ellicean group of Polynesian languages, distantly related to all other Polynesian languages such as Hawaiian, Māori, Tahitian, Samoan and Tongan. It is most closely related to the languages spoken on the Polynesian outliers in Micronesia and northern and central Melanesia. The Tuvaluan language has borrowed from the Samoan language, as a consequence of Christian missionaries in the late 19th and early 20th centuries being predominantly Samoan.
The Tuvaluan language is spoken by virtually everyone, while a language very similar to Gilbertese is spoken on Nui. English is also an official language but is not spoken in daily use. Parliament and official functions are conducted in the Tuvaluan language.
There are about 13,000 Tuvaluan speakers worldwide.Besnier, Niko (2000). Tuvaluan: A Polynesian Language of the Central Pacific. London: Routledge, ISBN 0-203-02712-4.Jackson, Geoff and Jackson, Jenny (1999). An introduction to Tuvaluan. Suva: Oceania Printers, ISBN 982-9027-02-3. Radio Tuvalu transmits Tuvaluan language programming.
Religion
The Congregational Christian Church of Tuvalu, which is part of the Reformed tradition, is the state church of Tuvalu, although in practice this status merely entitles it to "the privilege of performing special services on major national events"."2010 Report on International Religious Freedom – Tuvalu", United States Department of State Its adherents comprise about 97% of the 10,837 (2012 census) inhabitants of the archipelago. The Constitution of Tuvalu guarantees freedom of religion, including the freedom to practice, the freedom to change religion, the right not to receive religious instruction at school or to attend religious ceremonies at school, and the right not to "take an oath or make an affirmation that is contrary to his religion or belief".Constitution of Tuvalu, article 23.
The Roman Catholic community is served by the Mission Sui Iuris of Funafuti. The other religions practised in Tuvalu include Seventh-day Adventist (1.4%), Bahá'í (1%) and the Ahmadiyya Muslim Community (0.4%).Ahmadiyya Muslim Mosques Around the World, Ahmadiyya Muslim Community, USA, 2008, p. 344 ISBN 1-882494-51-2
The introduction of Christianity ended the worship of ancestral spirits and other deities (animism),Hedley, pp. 46–52 along with the power of the vaka-atua (the priests of the old religions). Laumua Kofe describes the objects of worship as varying from island to island, although ancestor worship is described by Rev. D.J. Whitmee in 1870 as being common practice.Kofe, Laumua "Old Time Religion" in Tuvalu: A History Tuvaluans continue to respect their ancestors within the context of a strong Christian faith.
Health
The Princess Margaret Hospital on Funafuti is the only hospital in Tuvalu. The Tuvaluan medical staff at PMH in 2011 comprised the Director of Health & Surgeon, the Chief Medical Officer Public Health, an anaesthetist, a paediatric medical officer and an obstetrics and gynaecology medical officer. Allied health staff include two radiographers, two pharmacists, three laboratory technicians, two dieticians and 13 nurses with specialised training in fields including surgical nursing, anaesthesia nursing/ICU, paediatric nursing and midwifery. PMH also employs a dentist. The Department of Health also employs nine or ten nurses on the outer islands to provide general nursing and midwifery services.
As in many South Pacific island countries, obesity is a major health issue in Tuvalu, where 65% of men and 71% of women are overweight.
Education
thumb|right|200px| Children on Niutao
Education in Tuvalu is free and compulsory between the ages of 6 and 15 years. Each island has a primary school. Motufoua Secondary School is located on Vaitupu. Students board at the school during the school term, returning to their home islands each school vacation. Fetuvalu High School, a day school operated by the Church of Tuvalu, is on Funafuti.
Fetuvalu offers the Cambridge syllabus. Motufoua offers the Fiji Junior Certificate (FJC) at year 10, Tuvaluan Certificate at Year 11 and the Pacific Senior Secondary Certificate (PSSC) at Year 12, set by the Fiji-based exam board SPBEA. Sixth form students who pass their PSSC go on to the Augmented Foundation Programme, funded by the government of Tuvalu. This program is required for tertiary education programmes outside of Tuvalu and is available at the University of the South Pacific (USP) Extension Centre in Funafuti.
Required attendance at school is 10 years for males and 11 years for females (2001). The adult literacy rate is 99.0% (2002). In 2010, there were 1,918 students who were taught by 109 teachers (98 certified and 11 uncertified). The teacher-pupil ratio for primary schools in Tuvalu is around 1:18 for all schools, with the exception of Nauti School, which has a teacher-student ratio of 1:27. Nauti School on Funafuti is the largest primary school in Tuvalu, with more than 900 students (45 per cent of the total primary school enrolment). The pupil-teacher ratio for Tuvalu is low compared to the Pacific region (ratio of 1:29).
Community Training Centres (CTCs) have been established within the primary schools on each atoll. The CTCs provide vocational training to students who do not progress beyond Class 8 because they failed the entry qualifications for secondary education. The CTCs offer training in basic carpentry, gardening and farming, sewing and cooking. At the end of their studies the graduates can apply to continue studies either at Motufoua Secondary School or the Tuvalu Maritime Training Institute (TMTI). Adults can also attend courses at the CTCs.
The Tuvaluan Employment Ordinance of 1966 sets the minimum age for paid employment at 14 years and prohibits children under the age of 15 from performing hazardous work."Tuvalu". 2009 Findings on the Worst Forms of Child Labor. Bureau of International Labor Affairs, U.S. Department of Labor (2002). This article incorporates text from this source, which is in the public domain.
Culture
thumb|right|300px|Interior of a maneapa on Funafuti, Tuvalu
Architecture
The traditional buildings of Tuvalu used plants and trees from the native broadleaf forest,Hedley, pp. 40–41 including timber from Pouka (Hernandia peltata), Ngia or Ingia bush (Pemphis acidula), Miro (Thespesia populnea), Tonga (Rhizophora mucronata), and Fau or Fo fafini, the woman's fibre tree (Hibiscus tiliaceus); and fibre from coconut, Ferra, the native fig (Ficus aspem), and Fala, the screw pine or Pandanus. The buildings were constructed without nails and were lashed and tied together with a plaited sennit rope that was handmade from dried coconut fibre.
Following contact with Europeans, iron products were used, including nails and corrugated iron roofing material. Modern buildings in Tuvalu are constructed from imported building materials, including imported timber and concrete.
Church and community buildings (maneapa) are usually coated with white paint that is known as lase, which is made by burning a large amount of dead coral with firewood. The resulting whitish powder is mixed with water and painted on the buildings.
thumb|upright|A Tuvaluan dancer at Auckland's Pasifika Festival
Art of Tuvalu
The women of Tuvalu use cowrie and other shells in traditional handicrafts. The artistic traditions of Tuvalu have traditionally been expressed in the design of clothing and traditional handicrafts such as the decoration of mats and fans. Crochet (kolose) is one of the art forms practiced by Tuvaluan women. The material culture of Tuvalu uses traditional design elements in artefacts used in everyday life such as the design of canoes and fish hooks made from traditional materials. The design of women's skirts (titi), tops (teuga saka), headbands, armbands, and wristbands, which continue to be used in performances of the traditional dance songs of Tuvalu, represents contemporary Tuvaluan art and design.
In 2015 an exhibition was held on Funafuti of the art of Tuvalu, with works that addressed climate change through the eyes of artists and the display of Kope ote olaga (possessions of life), which was a display of the various artefacts of Tuvalu culture.
Dance and music
The traditional music of Tuvalu consists of a number of dances, including fatele, fakanau and fakaseasea. The fatele, in its modern form, is performed at community events and to celebrate leaders and other prominent individuals, such as the visit of the Duke and Duchess of Cambridge in September 2012. The Tuvaluan style can be described "as a musical microcosm of Polynesia, where contemporary and older styles co-exist".
Cuisine
The cuisine of Tuvalu is based on the staple of coconut and the many species of fish found in the ocean and lagoons of the atolls. Desserts made on the islands use coconut and coconut milk instead of animal milk. The traditional foods eaten in Tuvalu are pulaka, taro, bananas, breadfruit and coconut.Hedley, pp. 60–63 Tuvaluans also eat seafood, including coconut crab and fish from the lagoon and ocean. A traditional food source is seabirds (taketake or black noddy and akiaki or white tern), with pork being eaten mostly at fateles (or parties with dancing to celebrate events).
Pulaka is the main source for carbohydrates. Seafood provides protein. Bananas and breadfruit are supplemental crops. Coconut is used for its juice, to make other beverages and to improve the taste of some dishes.
A 1560-square-metre pond was built in 1996 on Vaitupu to sustain aquaculture in Tuvalu.
Flying fish are caught both as a source of food and as an exciting activity, using a boat, a butterfly net and a spotlight to attract them.
thumb|upright|Canoe carving on Nanumea
Heritage
The traditional community system still survives to a large extent on Tuvalu. Each family has its own task, or salanga, to perform for the community, such as fishing, house building or defence. The skills of a family are passed on from parents to children.
Most islands have their own fusi, community-owned shops similar to convenience stores, where canned foods and bags of rice can be purchased. Goods at the fusi are cheaper, and the fusi gives better prices for local produce.
Another important building is the falekaupule or maneapa, the traditional island meeting hall, where important matters are discussed and which is also used for wedding celebrations and community activities such as a fatele involving music, singing and dancing. Falekaupule is also used as the name of the council of elders – the traditional decision-making body on each island. Under the Falekaupule Act, Falekaupule means "traditional assembly in each island...composed in accordance with the Aganu of each island". Aganu means traditional customs and culture.
Sport and leisure
A traditional sport played in Tuvalu is kilikiti, which is similar to cricket. A popular sport specific to Tuvalu is Ano, which is played with two round balls. Ano is a localised version of volleyball, in which the two hard balls made from pandanus leaves are volleyed at great speed, with the team members trying to stop the Ano hitting the ground. Traditional sports in the late 19th century were foot racing, lance throwing, quarterstaff fencing and wrestling, although the Christian missionaries disapproved of these activities.Hedley, p. 56
The popular sports in Tuvalu include kilikiti, Ano, football, futsal, volleyball, handball, basketball and rugby union. Tuvalu has sports organisations for athletics, badminton, tennis, table tennis, volleyball, football, basketball, rugby union, weightlifting and powerlifting. At the 2013 Pacific Mini Games, Tuau Lapua Lapua won Tuvalu's first gold medal in an international competition in the weightlifting 62 kilogram male snatch. (He also won bronze in the clean and jerk, and obtained the silver medal overall for the combined event.) In 2015 Telupe Iosefa received the first gold medal won by Tuvalu at the Pacific Games in the powerlifting 120 kg male division.
thumb|right|Tuvalu national football team (2011)
Football in Tuvalu is played at club and national team level. The Tuvalu national football team trains at the Tuvalu Sports Ground in Funafuti and competes in the Pacific Games. The Tuvalu National Football Association is an associate member of the Oceania Football Confederation (OFC) and is seeking membership in FIFA. The Tuvalu national futsal team participates in the Oceanian Futsal Championship.
A major sporting event is the "Independence Day Sports Festival" held annually on 1 October. The most important sports event within the country is arguably the Tuvalu Games, which have been held yearly since 2008. Tuvalu first participated in the Pacific Games in 1978 and in the Commonwealth Games in 1998, when a weightlifter attended the games held at Kuala Lumpur, Malaysia. Two table tennis players attended the 2002 Commonwealth Games in Manchester, England; Tuvalu entered competitors in shooting, table tennis and weightlifting at the 2006 Commonwealth Games in Melbourne, Australia; three athletes participated in the 2010 Commonwealth Games in Delhi, India, entering the discus, shot put and weightlifting events; and a team of 3 weightlifters and 2 table tennis players attended the 2014 Commonwealth Games in Glasgow. Tuvaluan athletes have also participated in the men's and women's 100 metre sprints at the World Championships in Athletics from 2009.
The Tuvalu Association of Sports and National Olympic Committee (TASNOC) was recognised as a National Olympic Committee in July 2007. Tuvalu entered the Olympic Games for the first time at the 2008 Summer Games in Beijing, China, with a weightlifter and two athletes in the men's and women's 100-metre sprints. A team with athletes in the same events represented Tuvalu at the 2012 Summer Olympics. Etimoni Timuani was the sole representative of Tuvalu at the 2016 Summer Olympics in the 100 m event.
Economy and government services
Economy
thumb|National Bank of Tuvalu
From 1996 to 2002, Tuvalu was one of the best-performing Pacific Island economies and achieved an average real gross domestic product (GDP) growth rate of 5.6% per annum. Since 2002 economic growth has slowed, with GDP growth of 1.5% in 2008. Tuvalu was exposed to rapid rises in world prices of fuel and food in 2008, with the level of inflation peaking at 13.4%. The International Monetary Fund 2010 Report on Tuvalu estimates that Tuvalu experienced zero growth in its 2010 GDP, after the economy contracted by about 2% in 2009. On 5 August 2012, the Executive Board of the International Monetary Fund (IMF) concluded the Article IV consultation with Tuvalu, and assessed the economy of Tuvalu: "A slow recovery is underway in Tuvalu, but there are important risks. GDP grew in 2011 for the first time since the global financial crisis, led by the private retail sector and education spending. We expect growth to rise slowly". The IMF 2014 Country Report noted that real GDP growth in Tuvalu had been volatile averaging only 1 percent in the past decade. The 2014 Country Report describes economic growth prospects as generally positive as the result of large revenues from fishing licenses, together with substantial foreign aid. While a budget deficit of A$0.4 million was projected for 2015, the Asian Development Bank (ADB) assessed the budget as being A$14.3m in surplus as the result of high tuna fish license fees. The ADB predicted that the 2% growth rate for 2015 would continue into 2016. Nonetheless, Tuvalu has the smallest GDP of any sovereign nation in the world.
Banking services are provided by the National Bank of Tuvalu. Public sector workers make up about 65% of those formally employed. Remittances from Tuvaluans living in Australia and New Zealand, and remittances from Tuvaluan sailors employed on overseas ships are important sources of income for Tuvaluans. Approximately 15% of adult males work as seamen on foreign-flagged merchant ships. Agriculture in Tuvalu is focused on coconut trees and growing pulaka in large pits of composted soil below the water table. Tuvaluans are otherwise involved in traditional subsistence agriculture and fishing.
Tuvaluans are well known for their seafaring skills, with the Tuvalu Maritime Training Institute on Amatuku motu (island), Funafuti, providing training to approximately 120 marine cadets each year so that they have the skills necessary for employment as seafarers on merchant shipping. The Tuvalu Overseas Seamen's Union (TOSU) is the only registered trade union in Tuvalu. It represents workers on foreign ships. The Asian Development Bank (ADB) estimates that 800 Tuvaluan men are trained, certified and active as seafarers. The ADB estimates that, at any one time, about 15% of the adult male population works abroad as seafarers. Job opportunities also exist as observers on tuna boats where the role is to monitor compliance with the boat's tuna fishing licence.
Government revenues largely come from sales of fishing licenses, income from the Tuvalu Trust Fund, and from the lease of its highly fortuitous .tv Internet Top Level Domain (TLD). In 1998, Tuvalu began deriving revenue from the use of its area code for premium-rate telephone numbers and from the commercialisation of its ".tv" Internet domain name, which is now managed by Verisign until 2021. The ".tv" domain name generates around $2.2 million each year from royalties, which is about ten per cent of the government's total revenue. Domain name income paid most of the cost of paving the streets of Funafuti and installing street lighting in mid-2002. Tuvalu also generates income from stamps by the Tuvalu Philatelic Bureau and income from the Tuvalu Ship Registry.
The Tuvalu Trust Fund was established in 1987 by the United Kingdom, Australia, and New Zealand. The value of the Tuvalu Trust Fund is approximately $100 million. Financial support to Tuvalu is also provided by Japan, South Korea and the European Union. Australia and New Zealand continue to contribute capital to the Tuvalu Trust Fund and provide other forms of development assistance.
The US government is also a major revenue source for Tuvalu. In 1999 the payment from the South Pacific Tuna Treaty (SPTT) was about $9 million, with the value increasing in the following years. In May 2013 representatives from the United States and the Pacific Islands countries agreed to sign interim arrangement documents to extend the Multilateral Fisheries Treaty (which encompasses the South Pacific Tuna Treaty) for 18 months.
The United Nations designates Tuvalu as a least developed country (LDC) because of its limited potential for economic development, absence of exploitable resources and its small size and vulnerability to external economic and environmental shocks. Tuvalu participates in the Enhanced Integrated Framework for Trade-Related Technical Assistance to Least Developed Countries (EIF), which was established in October 1997 under the auspices of the World Trade Organisation. In 2013 Tuvalu deferred its graduation from least developed country (LDC) status to a developing country until 2015. Prime Minister Enele Sopoaga said that this deferral was necessary to maintain access by Tuvalu to the funds provided by the United Nations' National Adaptation Programme of Action (NAPA), as "Once Tuvalu graduates to a developed country, it will not be considered for funding assistance for climate change adaptation programmes like NAPA, which only goes to LDCs". Tuvalu had met the targets required to graduate from LDC status. Prime Minister Enele Sopoaga wants the United Nations to reconsider its criteria for graduation from LDC status, as not enough weight is given to the environmental plight of small island states like Tuvalu in the application of the Environmental Vulnerability Index (EVI).
Tourism
thumb|right|300px|Funafuti lagoon (Te Namo)
Due to the country's remoteness, tourism is not significant. Visitors totalled 1,684 in 2010: 65% were on business (development officials or technical consultants), 20% were tourists (360 people), and 11% were expatriates returning to visit family.
The main island of Funafuti is the focus of travellers, since the only airport in Tuvalu is the Funafuti International Airport and the island has hotel facilities.Tuvalu's official Tourism web site. Timelesstuvalu.com. Retrieved on 14 July 2013. Ecotourism is a motivation of travellers to Tuvalu. The Funafuti Conservation Area consists of ocean, reef, lagoon, channel and six uninhabited islets.
The outer atolls can be visited on the two passenger-cargo ships, Nivaga II and Manú Folau, which provide round-trip visits to the outer islands every three or four weeks. There is guesthouse accommodation on many of the outer islands.
Telecommunications and media
The Tuvalu Media Department of the Government of Tuvalu operates Radio Tuvalu, which broadcasts from Funafuti. In 2011 the Japanese government provided financial support to construct a new AM broadcast studio. The installation of upgraded transmission equipment allows Radio Tuvalu to be heard on all nine islands of Tuvalu. The new AM radio transmitter on Funafuti replaced the FM radio service to the outer islands and freed up satellite bandwidth for mobile services. Fenui – news from Tuvalu is a free digital publication of the Tuvalu Media Department that is emailed to subscribers; its Facebook page publishes news about government activities and Tuvaluan events, such as a special edition covering the results of the 2015 general election.
The Tuvalu Telecommunications Corporation (TTC), a state-owned enterprise, provides fixed line telephone communications to subscribers on each island, mobile phone services on Funafuti, Vaitupu and Nukulaelae and is a distributor of the Fiji Television service (Sky Pacific satellite television service).
Communications in Tuvalu rely on satellite dishes for telephone and internet access. The available bandwidth is only 512 kbit/s uplink and 1.5 Mbit/s downlink. Throughout Tuvalu there are more than 900 subscribers who want to use the satellite service, and this demand slows down the speed of the system.
Transport
thumb|right|Manu Folau off Vaitupu Harbour
Transport services in Tuvalu are limited, with only a small network of roads. The streets of Funafuti were paved in mid-2002, but other roads are unpaved. Tuvalu is among the few countries that do not have railroads.
Funafuti is the only port but there is a deep-water berth in the harbour at Nukufetau. The merchant marine fleet consists of two passenger/cargo ships Nivaga III and Manu Folau. These ships carry cargo and passengers between the main atolls and travel between Suva, Fiji and Funafuti 3 to 4 times a year. The Nivaga III and Manu Folau provide round trip visits to the outer islands every three or four weeks. The Manu Folau is a 50-metre vessel that was a gift from Japan to the people of Tuvalu. In 2015 the United Nations Development Program (UNDP) assisted the government of Tuvalu to acquire MV Talamoana, a 30-metre vessel that will be used to implement Tuvalu's National Adaptation Programme of Action (NAPA) to transport government officials and project personnel to the outer islands. In 2015 the Nivaga III was donated by the government of Japan; it replaced the Nivaga II, which had serviced Tuvalu from 1989.
The single airport is Funafuti International Airport, which has a tarred strip. Fiji Airways, the owner of Fiji Airlines (trading as Fiji Link), operates services three times a week (Tuesday, Thursday and Saturday) between Suva (originating from Nadi) and Funafuti with an ATR 72–600, a 68-seat plane.
Geography and environment
Geography
thumb|right|A beach at Funafuti atoll.
Tuvalu consists of three reef islands and six true atolls. Its small, scattered group of atolls has poor soil and a very small total land area, making Tuvalu the fourth smallest country in the world. The islets that form the atolls are very low lying. Nanumanga, Niutao and Niulakita are reef islands, and the six true atolls are Funafuti, Nanumea, Nui, Nukufetau, Nukulaelae and Vaitupu. Tuvalu's Exclusive Economic Zone (EEZ) covers an oceanic area of approximately 900,000 km2.
Funafuti is the largest atoll of the nine low reef islands and atolls that form the Tuvalu volcanic island chain. It comprises numerous islets around a central lagoon centred on 179°7'E and 8°30'S. On the atolls, an annular reef rim surrounds the lagoon, with several natural reef channels. Surveys of the reef habitats of Nanumea, Nukulaelae and Funafuti were carried out in May 2010, and a total of 317 fish species were recorded during this Tuvalu Marine Life study. The surveys identified 66 species that had not previously been recorded in Tuvalu, which brings the total number of identified species to 607.
Climate
thumb|250px| Tuvalu Meteorological Service, Fongafale, Funafuti atoll
Tuvalu experiences two distinct seasons, a wet season from November to April and a dry season from May to October. Westerly gales and heavy rain are the predominant weather conditions from October to March, the period that is known as Tau-o-lalo, with tropical temperatures moderated by easterly winds from April to November.
Tuvalu experiences the effects of El Niño and La Niña, caused by changes in ocean temperatures in the equatorial and central Pacific. El Niño effects increase the chances of tropical storms and cyclones, while La Niña effects increase the chances of drought. The islands of Tuvalu typically receive substantial rainfall every month; however, in 2011 a weak La Niña effect caused a drought by cooling the surface of the sea around Tuvalu. A state of emergency was declared on 28 September 2011, with rationing of fresh water on the islands of Funafuti and Nukulaelae. Households on Funafuti and Nukulaelae were restricted to two buckets of fresh water per day (40 litres).
The governments of Australia and New Zealand responded to the 2011 fresh-water crisis by supplying temporary desalination plants, and assisted in the repair of the existing desalination unit that was donated by Japan in 2006. In response to the 2011 drought, Japan funded the purchase of a 100 m3/d desalination plant and two portable 10 m3/d plants as part of its Pacific Environment Community (PEC) program. Aid programs from the European Union and Australia also provided water tanks as part of the longer term solution for the storage of available fresh water. The La Niña event that caused the drought ended in April–May 2012. The central Pacific Ocean experiences changes from periods of La Niña to periods of El Niño; in June 2015 the Tuvalu Meteorological Service announced that an El Niño event had arrived in Tuvalu.
Environmental pressures
thumb|right|A wharf and beach at Funafuti atoll
The eastern shoreline of Funafuti Lagoon was modified during World War II when the airfield (what is now Funafuti International Airport) was constructed. The coral base of the atoll was used as fill to create the runway. The resulting borrow pits impacted the fresh-water aquifer. In the low areas of Funafuti the sea water can be seen bubbling up through the porous coral rock to form pools with each high tide. Since 1994 a project has been in development to assess the environmental impact of transporting sand from the lagoon to fill all the borrow pits and low-lying areas on Fongafale. In 2014 the Tuvalu Borrow Pits Remediation (BPR) project was approved in order to fill 10 borrow pits, leaving Tafua Pond, which is a natural pond. The New Zealand Government funded the BPR project. The project was carried out in 2015, with 365,000 square metres of sand being dredged from the lagoon to fill the holes and improve living conditions on the island. This project increased the usable land space on Fongafale by eight per cent.
During World War II several piers were also constructed on Fongafale in the Funafuti Lagoon; beach areas were filled and deep-water access channels were excavated. These alterations to the reef and shoreline resulted in changes to wave patterns, with less sand accumulating to form the beaches than in former times, and the shoreline is now exposed to wave action. Several attempts to stabilise the shoreline have not achieved the desired effect.
The reefs at Funafuti have suffered damage, with 80 per cent of the coral becoming bleached as a consequence of the increase in ocean temperatures and ocean acidification. The coral bleaching, which includes staghorn corals, is attributed to the increase in water temperature during the El Niño events of 1998–2000 and 2000–2001. A reef restoration project has investigated reef restoration techniques, and researchers from Japan have investigated rebuilding the coral reefs through the introduction of foraminifera. The project of the Japan International Cooperation Agency is designed to increase the resilience of the Tuvalu coast against sea level rise through ecosystem rehabilitation and regeneration and through support for sand production.
The rising population has resulted in an increased demand on fish stocks, which are under stress; although the creation of the Funafuti Conservation Area has provided a fishing exclusion area to help sustain the fish population across the Funafuti lagoon. Population pressure on the resources of Funafuti and inadequate sanitation systems have resulted in pollution. The Waste Operations and Services Act of 2009 provides the legal framework for waste management and pollution control projects funded by the European Union directed at organic waste composting in eco-sanitation systems. The Environment Protection (Litter and Waste Control) Regulation 2013 is intended to improve the management of the importation of non-biodegradable materials. In Tuvalu plastic waste is a problem as much imported food and other commodities are supplied in plastic containers or packaging.
Water and sanitation
Rainwater harvesting is the principal source of freshwater in Tuvalu. Nukufetau, Vaitupu and Nanumea are the only islands with sustainable groundwater supplies. The effectiveness of rainwater harvesting is diminished because of poor maintenance of roofs, gutters and pipes.Kingston, P A (2004). Surveillance of Drinking Water Quality in the Pacific Islands: Situation Analysis and Needs Assessment, Country Reports. WHO. Retrieved 25 March 2010 Aid programmes of Australia and the European Union have been directed to improving the storage capacity on Funafuti and in the outer islands.
Reverse osmosis (R/O) desalination units supplement rainwater harvesting on Funafuti. The 65 m3 desalination plant operates at a real production level of around 40 m3 per day. R/O water is only intended to be produced when storage falls below 30%; however, demand to replenish household storage supplies with tanker-delivered water means that the R/O desalination units are continually operating. Water is delivered at a cost of A$3.50 per m3, while the cost of production and delivery has been estimated at A$6 per m3, with the difference subsidised by the government.
In July 2012 a United Nations Special Rapporteur called on the Tuvalu Government to develop a national water strategy to improve access to safe drinking water and sanitation. In 2012, Tuvalu developed a National Water Resources Policy under the Integrated Water Resource Management (IWRM) Project and the Pacific Adaptation to Climate Change (PACC) Project, which are sponsored by the Global Environment Fund/SOPAC. Government water planning has established a target of between 50 and 100 litres of water per person per day, accounting for drinking water, cleaning, community and cultural activities.
Tuvalu is working with the South Pacific Applied Geoscience Commission (SOPAC) to implement composting toilets and to improve the treatment of sewage sludge from septic tanks on Fongafale, as the septic tanks are leaking into the freshwater lens in the sub-surface of the atoll, as well as into the ocean and lagoon. Composting toilets reduce water use by up to 30%.
Cyclones and king tides
Cyclones
thumb|right|Ocean side of Funafuti atoll showing the storm dunes, the highest point on the atoll.
Because of the low elevation, the islands that make up this nation are vulnerable to the effects of tropical cyclones and to the threat of current and future sea level rise. A warning system, which uses the Iridium satellite network, was introduced in 2016 to allow outlying islands to better prepare for natural disasters.
The highest elevation, on Niulakita, gives Tuvalu the second-lowest maximum elevation of any country (after the Maldives). The highest elevations are typically in narrow storm dunes on the ocean side of the islands, which are prone to overtopping in tropical cyclones, as occurred with Cyclone Bebe, a very early-season storm that passed through the Tuvaluan atolls in October 1972.Bureau of Meteorology (1975) Tropical Cyclones in the Northern Australian Regions 1971–1972 Australian Government Publishing Service Cyclone Bebe submerged Funafuti, eliminating 90% of structures on the island. Sources of drinking water were contaminated as a result of the system's storm surge and the flooding of the sources of fresh water.
George Westbrook, a trader on Funafuti, recorded a cyclone that struck Funafuti in 1883. A cyclone caused severe damage to the islands in 1894.
Cyclone Bebe in 1972 caused severe damage to Funafuti. Funafuti's Tepuka Vili Vili islet was devastated by Cyclone Meli in 1979, with all its vegetation and most of its sand swept away during the cyclone. Along with a tropical depression that affected the islands a few days later, Severe Tropical Cyclone Ofa had a major impact on Tuvalu, with most islands reporting damage to vegetation and crops. Cyclone Gavin was first identified on 2 March 1997 and was the first of three tropical cyclones to affect Tuvalu during the 1996–97 cyclone season, with Cyclones Hina and Keli following later in the season.
In March 2015, the winds and storm surge created by Cyclone Pam resulted in large waves breaking over the reefs of the outer islands, causing damage to houses, crops and infrastructure. On Nui the sources of fresh water were destroyed or contaminated. The flooding in Nui and Nukufetau caused many families to shelter in evacuation centres or with other families. Nui suffered the most damage of the three central islands (Nui, Nukufetau and Vaitupu), with both Nui and Nukufetau suffering the loss of 90% of their crops. Of the three northern islands (Nanumanga, Niutao, Nanumea), Nanumanga suffered the most damage, with 60–100 houses flooded and the waves also causing damage to the health facility. Vasafua islet, part of the Funafuti Conservation Area, was severely damaged by Cyclone Pam. The coconut palms were washed away, leaving the islet as a sand bar.
The Tuvalu Government carried out assessments of the damage caused by Cyclone Pam to the islands and provided medical aid and food, as well as assistance with the clearing of storm debris. Government and non-government organisations provided technical, funding and material support to Tuvalu to assist with recovery, including WHO, UNICEF, UNDP, OCHA, the World Bank, DFAT, New Zealand Red Cross and IFRC, Fiji National University, and the governments of New Zealand, the Netherlands, the UAE, Taiwan and the United States.
King tides
Tuvalu is also affected by perigean spring tide events, which raise the sea level higher than a normal high tide. The highest peak tides recorded by the Tuvalu Meteorological Service occurred on 24 February 2006 and again on 19 February 2015. As a result of historical sea level rise, the king tide events lead to flooding of low-lying areas, which is compounded when sea levels are further raised by La Niña effects or local storms and waves.
Impact of climate change
Challenges Tuvalu faces as a result of climate change
As low-lying islands lacking a surrounding shallow shelf, the communities of Tuvalu are especially susceptible to changes in sea level and to storms whose energy is not dissipated before reaching the shore. At its highest point, Tuvalu is only a few metres above sea level, and Tuvaluan leaders have been concerned about the effects of rising sea levels for a number of years. It is estimated that a sea level rise of 20–40 centimetres (8–16 inches) in the next 100 years could make Tuvalu uninhabitable.Hunter, J. A. (2002). Note on Relative Sea Level Change at Funafuti, Tuvalu. Retrieved 13 May 2006.
Whether there are measurable changes in the sea level relative to the islands of Tuvalu is a contentious issue. There were problems associated with the pre-1993 sea level records from Funafuti, which resulted in improvements in the recording technology to provide more reliable data for analysis. The degree of uncertainty as to estimates of sea level change relative to the islands of Tuvalu was reflected in the conclusions made in 2002 from the available data. The 2011 report of the Pacific Climate Change Science Program published by the Australian Government concludes that the sea-level rise near Tuvalu measured by satellite altimeters since 1993 is about 5 mm per year.
Tuvalu has adopted a national plan of action, as the observable changes over the last ten to fifteen years have shown Tuvaluans that there have been changes to the sea levels. These include sea water bubbling up through the porous coral rock to form pools at high tide, and the flooding of low-lying areas, including the airport, during spring tides and king tides.
The atolls have shown resilience to gradual sea-level rise, with atolls and reef islands being able to grow under current climate conditions by generating sufficient sand and coral debris that accumulates and is deposited on the islands during cyclones. Gradual sea-level rise also allows coral polyp activity to build up the reefs. However, if the increase in sea level occurs at a faster rate than coral growth, or if polyp activity is damaged by ocean acidification, then the resilience of the atolls and reef islands is less certain. The 2011 report of the Pacific Climate Change Science Program of Australia reaches the following conclusions in relation to Tuvalu over the course of the 21st century:
Surface air temperatures and sea‑surface temperatures are projected to continually increase (very high confidence).
Annual and seasonal mean rainfalls are projected to increase (high confidence).
The intensity and frequency of extreme heat days are projected to increase (very high confidence).
The intensity and frequency of extreme rainfall days are projected to increase (high confidence).
The incidence of drought is projected to decrease (moderate confidence).
Tropical cyclone numbers are projected to decline in the south-east Pacific Ocean basin (0–40°S, 170°E–130°W) (moderate confidence).
Ocean acidification is projected to continue (very high confidence).
Mean sea-level rise is projected to continue (very high confidence).
The South Pacific Applied Geoscience Commission (SOPAC) suggests that, while Tuvalu is vulnerable to climate change, environmental problems such as population growth and poor coastal management also affect sustainable development. SOPAC ranks the country as extremely vulnerable using the Environmental Vulnerability Index.SOPAC. 2005. Tuvalu – Environmental Vulnerability Index. Retrieved 13 May 2006.
While some commentators have called for the relocation of Tuvalu's population to Australia, New Zealand or Kioa in Fiji, in 2006 Maatia Toafa (Prime Minister from 2004–2006) said his government did not regard rising sea levels as such a threat that the entire population would need to be evacuated.Political Parties Cautious On Tuvalu-Kioa Plan, Pacific Magazine, 21 February 2006.Kioa relocation not priority: Tuvalu PM, Tuvalu Online, 21 February 2006. In 2013 Enele Sopoaga, the prime minister of Tuvalu, said that relocating Tuvaluans to avoid the impact of sea level rise "should never be an option because it is self defeating in itself. For Tuvalu I think we really need to mobilise public opinion in the Pacific as well as in the [rest of] world to really talk to their lawmakers to please have some sort of moral obligation and things like that to do the right thing."
2015 United Nations Climate Change Conference (COP21)
Prime Minister Enele Sopoaga said at the 2015 United Nations Climate Change Conference (COP21) that the goal for COP21 should be a global temperature goal of below 1.5 degrees Celsius relative to pre-industrial levels, which is the position of the Alliance of Small Island States.
Prime Minister Sopoaga addressed the meeting of heads of state and government, and concluded his speech with a plea for urgent action on climate change.
The participating countries agreed to reduce their carbon output "as soon as possible" and to do their best to keep global warming "to well below 2 degrees C". Enele Sopoaga described the important outcomes of COP21 as including the stand-alone provision for assistance to small island states and some of the least developed countries for loss and damage resulting from climate change and the ambition of limiting temperature rise to 1.5 degrees by the end of the century.
Filmography and bibliography
Filmography
Documentary films about Tuvalu:
Tu Toko Tasi (Stand by Yourself) (2000) Conrad Mill, a Secretariat of the Pacific Community (SPC) production
Paradise Domain – Tuvalu (Director: Joost De Haas, Bullfrog Films/TVE 2001) 25:52 minutes – YouTube video
Tuvalu island tales (A Tale of two Islands) (Director: Michel Lippitsch) 34 minutes – YouTube video
The Disappearing of Tuvalu: Trouble in Paradise (2004) by Christopher Horner and Gilliane Le Gallic
Paradise Drowned: Tuvalu, the Disappearing Nation (2004) Written and produced by Wayne Tourell. Directed by Mike O'Connor, Savana Jones-Middleton and Wayne Tourell
Going Under (2004) by Franny Armstrong, Spanner Films
Before the Flood: Tuvalu (2005) by Paul Lindsay
Time and Tide (2005) by Julie Bayer and Josh Salzman
Tuvalu: That Sinking Feeling (2005) by Elizabeth Pollock from PBS Rough Cut
Atlantis Approaching (2006) by Elizabeth Pollock
King Tide | The Sinking of Tuvalu (2007) by Juriaan Booij
Tuvalu (Director: Aaron Smith, ‘Hungry Beast’ program, ABC June 2011) 6:40 minutes – YouTube video
Tuvalu: Renewable Energy in the Pacific Islands Series (2012) a production of the Global Environment Facility (GEF), United Nations Development Programme (UNDP) and SPREP
Mission Tuvalu (Missie Tuvalu) (2013) feature documentary directed by Jeroen van den Kroonenberg
ThuleTuvalu (2014) by Matthias von Gunten, HesseGreutert Film/OdysseyFilm
Bibliography
Bibliography of Tuvalu
Further reading
Lonely Planet Guide: South Pacific & Micronesia, by various
Chalkley, John, (1999) Vaitupu – An Account of Life on a Remote Polynesian Atoll, Matuku Publications.
Ells, Philip, (2008) Where the Hell is Tuvalu? Virgin Books.
Watling, Dick, (2003) A Guide to the Birds of Fiji and Western Polynesia: Including American Samoa, Niue, Samoa, Tokelau, Tonga, Tuvalu and Wallis and Futuna, Environmental Consultants (Fiji) Ltd; 2nd edition.
Culture, Customs and Traditions
Barkås, Sandra Iren, Alofa – Expressions of Love: Change and Continuity in Tuvalu (2013)
Brady, Ivan, (1972) Kinship Reciprocity in the Ellice Islands, Journal of Polynesian Society 81:3, 290–316
Brady, Ivan, (1974) Land Tenure in the Ellice Islands, in Henry P. Lundsaarde (ed). Land Tenure in Oceania, Honolulu, University Press of Hawaii.
Chambers, Keith & Anne Chambers, (January 2001) Unity of Heart: Culture and Change in a Polynesian Atoll Society, Waveland Pr Inc. ISBN 1577661664 ISBN 978-1577661665
Koch, Gerd, (1961) Die Materielle Kulture der Ellice-Inseln, Berlin: Museum fur Volkerkunde; The English translation by Guy Slatter, was published as The Material Culture of Tuvalu, University of the South Pacific in Suva (1981) ASIN B0000EE805.
History
Tuvalu: A History (1983) Isala, Tito and Larcy, Hugh (eds.), Institute of Pacific Studies, University of the South Pacific and Government of Tuvalu.
Bedford, R., Macdonald, B., & Munro, D., (1980) Population estimates for Kiribati and Tuvalu, 1850–1900: Review and speculation, Journal of the Polynesian Society, 89, 199–246.
Bollard, AE., (1981) The financial adventures of J. C. Godeffroy and Son in the Pacific, Journal of Pacific History, 16: 3–19.
Firth, S., (1973) German firms in the Western Pacific Islands, 1857–1914, Journal of Pacific History, 8: 10–28.
Geddes, W. H., Chambers, A., Sewell, B., Lawrence, R., & Watters, R. (1982) Islands on the Line, team report. Atoll economy: Social change in Kiribati and Tuvalu, No. 1, Canberra: Australian National University, Development Studies Centre.
Goodall, N. (1954) A history of the London Missionary Society 1895–1945, London: Oxford University Press.
Macdonald, Barrie, (1971) Local government in the Gilbert and Ellice Islands 1892–1969 – part 1, Journal of Administration Overseas, 10, 280–293.
Macdonald, Barrie, (1972) Local government in the Gilbert and Ellice Islands 1892–1969 – part 2, Journal of Administration Overseas, 11, 11–27.
Macdonald, Barrie, (2001) Cinderellas of the Empire: towards a history of Kiribati and Tuvalu, Institute of Pacific Studies, University of the South Pacific, Suva, Fiji. ISBN 982-02-0335-X (Australian National University Press, first published 1982).
Munro, D, Firth, S., (1986) Towards colonial protectorates: the case of the Gilbert and Ellice Islands, Australian Journal of Politics and History, 32: 63–71.
Maude, H. E., (1949) The Co-operative Movement in the Gilbert and Ellice Islands (Technical Paper No. 1), South Pacific Commission, Sydney.
Suamalie N.T. Iosefa, Doug Munro, Niko Besnier, (1991) Tala O Niuoku, Te: the German Plantation on Nukulaelae Atoll 1865–1890, Published by the Institute of Pacific Studies. ISBN 9820200733.
Pulekai A. Sogivalu, (1992) A Brief History of Niutao, Published by the Institute of Pacific Studies. ISBN 982020058X.
Language
Vaiaso ote Gana, Tuvalu Language Week Education Resource 2016 (New Zealand Ministry for Pacific Peoples)
Besnier, Niko, (1995) Literacy, Emotion and Authority: Reading and Writing on a Polynesian Atoll, Cambridge University Press.
Besnier, Niko, (2000) Tuvaluan: A Polynesian Language of the Central Pacific. (Descriptive Grammars) Routledge ISBN 0415024560 ISBN 978-0415024563.
Jackson, Geoff W. & Jenny Jackson, (1999) An Introduction to Tuvaluan.
Jackson, Geoff W., (1994) Te Tikisionale O Te Gana Tuvalu, A Tuvaluan-English Dictionary, Suva, Fiji, Oceania Printers.
Music and Dance
Christensen, Dieter, (1964) Old Musical Styles in the Ellice Islands, Western Polynesia, Ethnomusicology, 8:1, 34–40.
Christensen, Dieter and Gerd Koch, (1964) Die Musik der Ellice-Inseln, Berlin: Museum fur Volkerkunde.
Koch, Gerd, (2000) Songs of Tuvalu (translated by Guy Slatter), Institute of Pacific Studies, University of the South Pacific.
External links
Tuvalu from UCB Libraries GovPubs
Tuvalu profile from the BBC News
References
Category:1978 establishments in Oceania
Category:Archipelagoes of the Pacific Ocean
Category:Countries in Polynesia
Category:English-speaking countries and territories
Category:Island countries
Category:Least developed countries
Category:Liberal democracies
Category:Member states of the Commonwealth of Nations
Category:Member states of the United Nations
Category:Small Island Developing States
Category:States and territories established in 1978
Category:World War II sites | 30,227 | 2017-01 |
Somerset | {{Infobox English county
| official_name = Somerset
| other_name =
| image_main =
| image_caption =
| flag_image = border|160px
| flag_link = Flag of Somerset
| arms_image =
| arms_link =
| motto = Sumorsǣte ealle (All The People of Somerset)
| locator_map = 200px|Somerset within England
| coordinates =
| region = South West England
| established_date = Historic
| established_by =
| preceded_by =
| origin =
| lord_lieutenant_office =
| lord_lieutenant_name =
| high_sheriff_office =
| high_sheriff_name =
| area_total_km2 = 4171
| area_total_rank = 7th
| ethnicity = 98.5% White
| county_council = File:Somerset county coat of arms.png
| unitary_council =
| unitary_council1 =
| government =
| joint_committees =
| admin_hq = Taunton
| area_council_km2 = 3451
| area_council_rank = 12th
| iso_code = GB-SOM
| ons_code = 40
| gss_code =
| nuts_code = UKK23
| districts_map = File:Somerset Ceremonial Numbered2.gif
| districts_key =
| districts_list = #South Somerset
Taunton Deane (Borough)
West Somerset
Sedgemoor
Mendip
Bath and North East Somerset (Unitary)
North Somerset (Unitary)
| MPs = *Rebecca Pow (C)
Ben Howlett (C)
Liam Fox (C)
David Warburton (C)
Marcus Fysh (C)
Ian Liddell-Grainger (C)
James Heappey (C)
Jacob Rees-Mogg (C)
John Penrose (C)
| police =
| website =
}}
Somerset is a county in South West England which borders Gloucestershire and Bristol to the north, Wiltshire to the east, Dorset to the south-east and Devon to the south-west. It is bounded to the north and west by the Severn Estuary and the Bristol Channel, its coastline facing southeastern Wales. Its traditional border with Gloucestershire is the River Avon. Somerset's county town is Taunton.
Somerset is a rural county of rolling hills such as the Blackdown Hills, Mendip Hills, Quantock Hills and Exmoor National Park, and large flat expanses of land including the Somerset Levels. There is evidence of human occupation from Paleolithic times, and of subsequent settlement in the Roman and Anglo-Saxon periods. The county played a significant part in the consolidation of power and rise of King Alfred the Great, and later in the English Civil War and the Monmouth Rebellion. The city of Bath is famous for its substantial Georgian architecture and is a UNESCO World Heritage Site.
Toponymy
Somerset's name derives from Old English Sumorsǣte, short for Sumortūnsǣte, meaning "the people living at or dependent on Sumortūn" (Somerton). The first known use of Somersæte is in the law code of King Ine, who was the Saxon King of Wessex from 688 to 726, making Somerset, along with Hampshire, Wiltshire and Dorset, one of the oldest extant units of local government in the world. An alternative suggestion is that the name derives from Seo-mere-saetan, meaning "settlers by the sea lakes".
The Old English name is used in the motto of the county, Sumorsǣte ealle, meaning "all the people of Somerset". Adopted as the motto in 1911, the phrase is taken from the Anglo-Saxon Chronicle. Somerset was a part of the Anglo-Saxon kingdom of Wessex, and the phrase refers to the wholehearted support the people of Somerset gave to King Alfred in his struggle to save Wessex from Viking invaders.
Somerset settlement names are mostly Anglo-Saxon in origin, but some hill names include Brittonic Celtic elements. For example, an Anglo-Saxon charter of 682 refers to Creechborough Hill as "the hill the British call Cructan and we call Crychbeorh" ("we" being the Anglo-Saxons). Some modern names are Brythonic in origin, such as Tarnock, while others have both Saxon and Brythonic elements, such as Pen Hill.
History
thumb|left|A map of the county in 1646, author unknown
The caves of the Mendip Hills were settled during the Palaeolithic period, and contain extensive archaeological sites such as those at Cheddar Gorge. Bones from Gough's Cave have been dated to 12,000 BC, and a complete skeleton, known as Cheddar Man, dates from 7150 BC. Examples of cave art have been found in Aveline's Hole. Some caves continued to be occupied until modern times, including Wookey Hole.
The Somerset Levels—specifically dry points at Glastonbury and Brent Knoll— also have a long history of settlement, and are known to have been settled by Mesolithic hunters. Travel in the area was facilitated by the construction of one of the world's oldest known engineered roadways, the Sweet Track, which dates from 3807 BC or 3806 BC.Brunning, Richard (2001). "The Somerset Levels." In: Current Archaeology, Vol. XV, (No. 4), Issue Number 172 (Wetlands Special Issue), (February 2001), Pp 139–143. .
The exact age of the henge monument at Stanton Drew stone circles is unknown, but it is believed to be Neolithic. There are numerous Iron Age hill forts, some of which, like Cadbury Castle and Ham Hill, were later reoccupied in the Early Middle Ages.
On the authority of the future emperor Vespasian, as part of the ongoing expansion of the Roman presence in Britain, the Second Legion Augusta invaded Somerset from the south-east in AD 47. The county remained part of the Roman Empire until around AD 409, when the Roman occupation of Britain came to an end.
A variety of Roman remains have been found, including Pagans Hill Roman temple in Chew Stoke, Low Ham Roman Villa and the Roman Baths that gave their name to the city of Bath.
thumb|right|alt=Yellow/Gray stone bridge with three arches over water which reflects the bridge and the church spire behind. A weir is on the left with other yellow stone buildings behind.|Palladian Pulteney Bridge at Bath
After the Romans left, Britain was invaded by Anglo-Saxon peoples. By AD 600 they had established control over much of what is now England, but Somerset was still in native British hands. The British held back the Saxon advance into the south-west for some time longer, but by the early eighth century King Ine of Wessex had pushed the boundaries of the West Saxon kingdom far enough west to include Somerset. The Saxon royal palace in Cheddar was used several times in the 10th century to host the Witenagemot. After the Norman Conquest, the county was divided into 700 fiefs, and large areas were owned by the crown, with fortifications such as Dunster Castle used for control and defence. Somerset contains HM Prison Shepton Mallet, which was England's oldest prison still in use prior to its closure in 2013, having opened in 1610. In the English Civil War Somerset was largely Parliamentarian, with key engagements being the Sieges of Taunton and the Battle of Langport. In 1685 the Monmouth Rebellion was played out in Somerset and neighbouring Dorset. The rebels landed at Lyme Regis and travelled north, hoping to capture Bristol and Bath, but they were defeated in the Battle of Sedgemoor at Westonzoyland, the last pitched battle fought in England. Arthur Wellesley took his title, Duke of Wellington, from the town of Wellington; he is commemorated on a nearby hill by a large, spotlit obelisk, known as the Wellington Monument.
The Industrial Revolution in the Midlands and Northern England spelled the end for most of Somerset's cottage industries. Farming continued to flourish, however, and the Bath and West of England Society for the Encouragement of Agriculture, Arts, Manufactures and Commerce was founded in 1777 to improve farming methods. Despite this, 20 years later John Billingsley conducted a survey of the county's agriculture in 1795 and found that agricultural methods could still be improved. Coal mining was an important industry in north Somerset during the 18th and 19th centuries, and by 1800 it was prominent in Radstock. The Somerset Coalfield reached its peak production by the 1920s, but all the pits have now been closed, the last in 1973.Cornwell, John (2005). Collieries of Somerset & Bristol. Ashbourne, Derbyshire: Landmark Publishing Ltd. ISBN 1-84306-170-8. Most of the surface buildings have been removed, and apart from a winding wheel outside Radstock Museum, little evidence of their former existence remains. Further west, the Brendon Hills were mined for iron ore in the late 19th century; this was taken by the West Somerset Mineral Railway to Watchet Harbour for shipment to the furnaces at Ebbw Vale.
Many Somerset soldiers died during the First World War, with the Somerset Light Infantry suffering nearly 5,000 casualties. War memorials were put up in most of the county's towns and villages; only nine, described as the Thankful Villages, had none of their residents killed. During the Second World War the county was a base for troops preparing for the D-Day landings. Some of the hospitals which were built for the casualties of the war remain in use. The Taunton Stop Line was set up to repel a potential German invasion. The remains of its pill boxes can still be seen along the coast, and south through Ilminster and Chard.
A number of decoy towns were constructed in Somerset in World War II to protect Bristol and other towns, at night. They were designed to mimic the geometry of "blacked out" streets, railway lines, and Bristol Temple Meads railway station, to encourage bombers away from these targets.Brown, Donald (1999). Somerset v Hitler: Secret Operations in the Mendips 1939–1945. Newbury: Countryside Books. ISBN 1-85306-590-0. One, on the radio beam flight path to Bristol, was constructed on Beacon Batch. It was laid out by Shepperton Studios, based on aerial photographs of the city's railway marshalling yards. The decoys were fitted with dim red lights, simulating activities like the stoking of steam locomotives. Burning bales of straw soaked in creosote were used to simulate the effects of incendiary bombs dropped by the first wave of Pathfinder night bombers; meanwhile, incendiary bombs dropped on the correct location were quickly smothered, wherever possible. Drums of oil were also ignited to simulate the effect of a blazing city or town, with the aim of fooling subsequent waves of bombers into dropping their bombs on the wrong location. The Chew Magna decoy town was hit by half-a-dozen bombs on 2 December 1940, and over a thousand incendiaries on 3 January 1941. The following night the Uphill decoy town, protecting Weston-super-Mare's airfield, was bombed; a herd of dairy cows was hit, killing some and severely injuring others.
Human geography
Boundaries
thumb|right|The Avon Gorge, the historic boundary between Gloucestershire and Somerset, and also Mercia and Wessex; Somerset is to the left.
The boundaries of Somerset are largely unaltered from medieval times. The River Avon formed much of the border with Gloucestershire, except that the hundred of Bath Forum, which straddles the Avon, formed part of Somerset. Bristol began as a town on the Gloucestershire side of the Avon; however, as it grew it extended across the river into Somerset. In 1373 Edward III proclaimed "that the town of Bristol with its suburbs and precincts shall henceforth be separate from the counties of Gloucester and Somerset ... and that it should be a county by itself".
The present-day northern border of Somerset (adjoining the counties of Bristol and Gloucestershire) runs along the southern bank of the Avon from the Bristol Channel, then follows around the southern edge of the Bristol built-up area, before continuing upstream along the Avon, and then diverges from the river to include Bath and its historic hinterland to the north of the Avon, before meeting Wiltshire at the Three Shire Stones on the Fosse Way at Batheaston.Ordnance Survey mapping
Cities and towns
Somerton took over from Ilchester as the county town in the late thirteenth century, but it declined in importance and the status of county town transferred to Taunton about 1366. The county has two cities, Bath and Wells, and 30 towns (including the county town of Taunton, which has no town council but instead is the chief settlement of the county's only borough). The largest urban areas in terms of population are Bath, Weston-super-Mare, Taunton, Yeovil and Bridgwater. Many settlements developed because of their strategic importance in relation to geographical features, such as river crossings or valleys in ranges of hills. Examples include Axbridge on the River Axe, Castle Cary on the River Cary, North Petherton on the River Parrett, and Ilminster, where there was a crossing point on the River Isle. Midsomer Norton lies on the River Somer, while the Wellow Brook and the Fosse Way Roman road run through Radstock. Chard is the most southerly town in Somerset, and it is also the highest above sea level.
Physical geography
Geology
Much of the landscape of Somerset falls into types determined by the underlying geology. These landscapes are the limestone karst and lias of the north, the clay vales and wetlands of the centre, the oolites of the east and south, and the Devonian sandstone of the west.
thumb|alt=Long straight water filled channel, with occasional trees on the left hand bank and grass on the right hand bank.|The River Brue in an artificial channel draining farmland near Glastonbury
To the north-east of the Somerset Levels, the Mendip Hills are moderately high limestone hills. The central and western Mendip Hills were designated an Area of Outstanding Natural Beauty in 1972, covering 198 square kilometres. The main habitat on these hills is calcareous grassland, with some arable agriculture. To the south-west of the Somerset Levels are the Quantock Hills, England's first Area of Outstanding Natural Beauty, designated in 1956; they are covered in heathland, oak woodland and ancient parkland with conifer plantations, and cover 99 square kilometres. The Somerset Coalfield is part of a larger coalfield which stretches into Gloucestershire. To the north of the Mendip Hills is the Chew Valley and to the south, on the clay substrate, are broad valleys which support dairy farming and drain into the Somerset Levels.
Caves and rivers
There is an extensive network of caves, including Wookey Hole, underground rivers, and gorges, including the Cheddar Gorge and Ebbor Gorge. The county has many rivers, including the Axe, Brue, Cary, Parrett, Sheppey, Tone and Yeo. These both feed and drain the flat levels and moors of mid and west Somerset. In the north of the county the River Chew flows into the Bristol Avon. The Parrett is tidal almost to Langport, where there is evidence of two Roman wharfs.Hadfield, Charles (1999). Canals of Southern England. London: Phoenix House Ltd. At the same site during the reign of King Charles I, river tolls were levied on boats to pay for the maintenance of the bridge.
Levels and moors
thumb|left|The town of Glastonbury looking west from the top of Glastonbury Tor. The fields in the distance are the Somerset Levels.
The Somerset Levels (or Somerset Levels and Moors, as they are less commonly but more correctly known) are a sparsely populated wetland area of central Somerset, between the Quantock and Mendip hills. They consist of marine clay levels along the coast, and the inland, often peat-based, moors. The Levels are divided into two by the Polden Hills; land to the south is drained by the River Parrett, while land to the north is drained by the River Axe and the River Brue. The total area of the Levels amounts to about 650 square kilometres and broadly corresponds to the administrative district of Sedgemoor, but also includes the south-west of Mendip district. Approximately 70% of the area is grassland and 30% is arable.
Stretching well inland, this expanse of flat land barely rises above sea level. Before it was drained, much of the land was under a shallow brackish sea in winter and was marsh land in summer. Drainage began with the Romans, and was restarted at various times: by the Anglo-Saxons; in the Middle Ages by Glastonbury Abbey, during 1400–1770; and during the Second World War, with the construction of the Huntspill River. Pumping and management of water levels still continue.Williams, Michael (1970). The Draining of the Somerset Levels. Cambridge: Cambridge University Press. ISBN 0-521-07486-X.
right|alt=Three small brown horses on grassy area. In the distance are hills.|thumb|The Exmoor landscape with the native Exmoor Pony.
The North Somerset Levels basin, north of the Mendips, covers a smaller geographical area than the Somerset Levels; and forms a coastal area around Avonmouth. It too was reclaimed by draining.Rippon, Stephen (1997). The Severn Estuary: Landscape Evolution and Wetland Reclamation. London: Leicester University. ISBN 0-7185-0069-5 It is mirrored, across the Severn Estuary, in Wales, by a similar low-lying area: the Caldicot and Wentloog Levels.
In the far west of the county, running into Devon, is Exmoor, a high Devonian sandstone moor, which was designated as a national park in 1954, under the 1949 National Parks and Access to the Countryside Act.
The highest point in Somerset is Dunkery Beacon on Exmoor, with an altitude of 519 metres.
Over 100 sites in Somerset have been designated as Sites of Special Scientific Interest.
Coastline
thumb|alt=Green covered rocky land in expanse of sea. Hills behind.|left|Brean Down from Steep Holm
upright|alt=small boats lined up in harbour. Crane in the background & metal walkway in the foreground.|thumb|The marina in Watchet
The coastline of the Bristol Channel and Severn Estuary forms part of the northern border of Somerset.
The Bristol Channel has the second largest tidal range in the world. At Burnham-on-Sea, for example, the tidal range of a spring tide is more than 12 metres.
Proposals for the construction of a Severn Barrage aim to harness this energy. The island of Steep Holm in the Bristol Channel is within the ceremonial county and is now administered by North Somerset Council.
The main coastal towns are, from the west to the north-east, Minehead, Watchet, Burnham-on-Sea, Weston-super-Mare, Clevedon and Portishead. The coastal area between Minehead and the eastern extreme of the administrative county's coastline at Brean Down is known as Bridgwater Bay, and is a National Nature Reserve.
North of that, the coast forms Weston Bay and Sand Bay, whose northern tip, Sand Point, marks the lower limit of the Severn Estuary. In the mid and north of the county the coastline is low, where the wetlands of the Levels meet the sea. In the west, the coastline is high and dramatic where the plateau of Exmoor meets the sea, with high cliffs and waterfalls.
Climate
Along with the rest of South West England, Somerset has a temperate climate which is generally wetter and milder than the rest of the country. The annual mean temperature is approximately 10 °C (50 °F). Seasonal temperature variation is less extreme than in most of the United Kingdom because of the adjacent sea temperatures. The summer months of July and August are the warmest, with mean daily maxima of approximately 21 °C (70 °F). In winter mean minimum temperatures of 1 or 2 °C are common. In the summer the Azores high pressure affects the south-west of England, but convective cloud sometimes forms inland, reducing the number of hours of sunshine. Annual sunshine rates are slightly less than the regional average of 1,600 hours. In December 1998 there were 20 days without sun recorded at Yeovilton. Most of the rainfall in the south-west is caused by Atlantic depressions or by convection. Most of the rainfall in autumn and winter is caused by the Atlantic depressions, which is when they are most active. In summer, a large proportion of the rainfall is caused by the sun heating the ground, leading to convection and to showers and thunderstorms. Average rainfall is around 700 mm (28 in). About 8–15 days of snowfall is typical. November to March have the highest mean wind speeds, and June to August have the lightest winds. The predominant wind direction is from the south-west.
Economy and industry
thumb|alt=A small single-story building with a pyramid shaped roof, to the side of a road lined with buildings. Some private small cars visible. Trees in the distance with the skyline of Dunster Castle.|right|The Dunster Yarn Market was built in 1609 for the trading of local cloth
Somerset has few industrial centres, but it does have a variety of light industry and high technology businesses, along with traditional agriculture and an increasingly important tourism sector, resulting in an unemployment rate of 2.5%. Unemployment is lower than the national average; the largest employment sectors are retail, manufacturing, tourism, and health and social care. Population growth in the county is higher than the national average.
Bridgwater was developed during the Industrial Revolution as the area's leading port. The River Parrett was navigable by large ships as far as Bridgwater. Cargoes were then loaded onto smaller boats at Langport Quay, next to the Bridgwater Bridge, to be carried further up river to Langport;Lawrence, J.F. (2005). A History of Bridgwater. (revised and compiled by J.C. Lawrence) Chichester: Phillimore & Co. ISBN 1-86077-363-X. or they could turn off at Burrowbridge and then travel via the River Tone to Taunton. The Parrett is now only navigable as far as Dunball Wharf. Bridgwater, in the 19th and 20th centuries, was a centre for the manufacture of bricks and clay roof tiles, and later cellophane, but those industries have now stopped. With its good links to the motorway system, Bridgwater has developed as a distribution hub for companies such as Argos, Toolstation, Morrisons and Gerber Juice. AgustaWestland manufactures helicopters in Yeovil, and Normalair Garratt, builder of aircraft oxygen systems, is also based in the town. Many towns have encouraged small-scale light industries, such as Crewkerne's Ariel Motor Company, one of the UK's smallest car manufacturers.
Somerset is an important supplier of defence equipment and technology. A Royal Ordnance Factory, ROF Bridgwater was built at the start of the Second World War, between the villages of Puriton and Woolavington, to manufacture explosives. The site was decommissioned and closed in July 2008. Templecombe has Thales Underwater Systems, and Taunton presently has the United Kingdom Hydrographic Office and Avimo, which became part of Thales Optics. It has been announced twice, in 2006 and 2007, that manufacturing is to end at Thales Optics' Taunton site, but the trade unions and Taunton Deane District Council are working to reverse or mitigate these decisions. Other high-technology companies include the optics company Gooch and Housego, at Ilminster. There are Ministry of Defence offices in Bath, and Norton Fitzwarren is the home of 40 Commando Royal Marines. The Royal Naval Air Station in Yeovilton, is one of Britain's two active Fleet Air Arm bases and is home to the Royal Navy's Lynx helicopters and the Royal Marines Commando Westland Sea Kings. Around 1,675 service and 2,000 civilian personnel are stationed at Yeovilton and key activities include training of aircrew and engineers and the Royal Navy's Fighter Controllers and surface-based aircraft controllers.
thumb|left|A traditional cider apple orchard at Over Stratton, with sheep grazing
Agriculture and food and drink production continue to be major industries in the county, employing over 15,000 people. Apple orchards were once plentiful, and Somerset is still a major producer of cider. The towns of Taunton and Shepton Mallet are involved with the production of cider, especially Blackthorn Cider, which is sold nationwide, and there are specialist producers such as Burrow Hill Cider Farm and Thatchers Cider. Gerber Products Company in Bridgwater is the largest producer of fruit juices in Europe, producing brands such as "Sunny Delight" and "Ocean Spray." Development of the milk-based industries, such as Ilchester Cheese Company and Yeo Valley Organic, has resulted in the production of ranges of desserts, yoghurts and cheeses, including Cheddar cheese, some of which has the West Country Farmhouse Cheddar Protected Designation of Origin (PDO).
Traditional willow growing and weaving (such as basket weaving) is not as extensive as it used to be but is still carried out on the Somerset Levels, and is commemorated at the Willows and Wetlands Visitor Centre. Fragments of willow basket were found near the Glastonbury Lake Village, and willow was also used in the construction of several Iron Age causeways. The willow was harvested using a traditional method of pollarding, where a tree would be cut back to the main stem. During the 1930s willow was grown commercially on a large scale on the Levels. Largely due to the displacement of baskets by plastic bags and cardboard boxes, the industry has severely declined since the 1950s, and by the end of the 20th century only a small area was still grown commercially, near the villages of Burrowbridge, Westonzoyland and North Curry. The Somerset Levels is now the only area in the UK where basket willow is grown commercially.
Towns such as Castle Cary and Frome grew around the medieval weaving industry. Street developed as a centre for the production of woollen slippers and, later, boots and shoes, with C. & J. Clark establishing its headquarters in the town. C&J Clark's shoes are no longer manufactured there, as the work was transferred to lower-wage areas, such as China and other parts of Asia. Instead, in 1993, redundant factory buildings were converted to form Clarks Village, the first purpose-built factory outlet in the UK. C&J Clark also had shoe factories, at one time, at Bridgwater, Minehead, Westfield and Weston-super-Mare to provide employment outside the main summer tourist season, but those satellite sites were closed in the late 1980s, before the main site at Street. Dr. Martens shoes were also made in Somerset, by the Northampton-based R. Griggs Group, using redundant skilled shoemakers from C&J Clark; that work has also been transferred to Asia.
thumb|alt=Large expanse of exposed grey rock. Fence in the foreground.|right|Stone quarries are still a major employer in Somerset
The county has a long tradition of supplying freestone and building stone. Quarries at Doulting supplied freestone used in the construction of Wells Cathedral. Bath stone is also widely used. Ralph Allen promoted its use in the early 18th century, as did Hans Price in the 19th century, but it was used long before then. It was mined underground at Combe Down and Bathampton Down Mines, and as a result of cutting the Box Tunnel, at locations in Wiltshire such as Box.Hudson (1971). The Fashionable Stone. Bath: Adams & Dart. ISBN 0-239-00066-8Bezzant, Norman (1980). Out of the Rock... London: William Heinemann Ltd. ISBN 0-434-06900-0Perkins, J.W., Brooks, A.T. and McR. Pearce, A.E. (1979). Bath Stone: a quarry history. Cardiff: Department of Extra-mural Studies, University College Cardiff. ISBN 0-906230-26-8 Bath stone is still used on a reduced scale today, but more often as a cladding rather than a structural material. Further south, Hamstone is the colloquial name given to stone from Ham Hill, which is also widely used in the construction industry. Blue Lias has been used locally as a building stone and as a raw material for lime mortar and Portland cement. Until the 1960s, Puriton had Blue Lias stone quarries, as did several other Polden villages. Its quarries also supplied a cement factory at Dunball, adjacent to the King's Sedgemoor Drain. The factory's derelict, early 20th-century remains were removed when the M5 motorway was constructed in the mid-1970s.(n/a)(1998). Images of England: Bridgwater (Compiled from the collections at Admiral Blake Museum). Stroud: Tempus Publishing. ISBN 0-7524-1049-0 Since the 1920s, the county has supplied aggregates. Foster Yeoman is Europe's largest supplier of limestone aggregates, with quarries at Merehead Quarry. It has a dedicated railway operation, Mendip Rail, which is used to transport aggregates by rail from a group of Mendip quarries.Shannon, Paul (2007). "Mendip Stone," In: Railway Magazine, Vol. 153, No. 1,277, pp 22–26. (September 2007). .
Tourism is a major industry, estimated in 2001 to support around 23,000 people. Attractions include the coastal towns, part of the Exmoor National Park, the West Somerset Railway (a heritage railway), and the museum of the Fleet Air Arm at RNAS Yeovilton. The town of Glastonbury has mythical associations, including legends of a visit by the young Jesus of Nazareth and Joseph of Arimathea, with links to the Holy Grail, King Arthur, and Camelot, identified by some as Cadbury Castle, an Iron Age hill fort. Glastonbury also gives its name to an annual open-air rock festival held in nearby Pilton. There are show caves open to visitors in the Cheddar Gorge, as well as its locally produced cheese, although there is now only one remaining cheese maker in the village of Cheddar.
In November 2008, a public sector inward investment organisation called Into Somerset was launched, with the intention of growing the county's economy by promoting it to businesses that may wish to relocate from other parts of the UK (especially London) and the world.Somerset – Where you and your business can grow – Into Somerset official website
Nuclear electricity
Hinkley Point C nuclear power station is a project to construct a 3,200 MW two-reactor nuclear power station. On 18 October 2010, the British government announced that Hinkley Point – already the site of the disused Hinkley Point A and the still operational Hinkley Point B power stations – was one of the eight sites it considered suitable for future nuclear power stations. NNB Generation Company, a subsidiary of EDF, submitted an application for development consent to the Infrastructure Planning Commission on 31 October 2011. A protest group, Stop Hinkley, was formed to campaign for the closure of Hinkley Point B and to oppose any expansion at the Hinkley Point site. In December 2013, the European Commission opened an investigation to assess whether the project broke state-aid rules. On 8 October 2014 it was announced that the European Commission had approved the project by an overwhelming majority, with only four commissioners voting against the decision.
Demography
Somerset Compared (UK Census 2001)
                    Somerset C.C.  North Somerset UA  BANES UA   South West England  England
Total population    498,093        188,564            169,040    4,928,434           49,138,831
Foreign born        7.6%           9.5%               11.2%      9.4%                9.2%
White               98.8%          97.1%              97.3%      97.7%               91%
Asian               0.3%           1.7%               0.5%       0.7%                4.6%
Black               0.2%           0.9%               0.5%       0.4%                2.3%
Christian           76.7%          75.0%              71.0%      74.0%               72%
Muslim              0.2%           0.2%               0.4%       0.5%                3.1%
Hindu               0.1%           0.1%               0.2%       0.2%                1.1%
No religion         14.9%          16.6%              19.5%      16.8%               15%
Over 75 years old   9.6%           9.9%               8.9%       9.3%                7.5%
Unemployed          2.5%           2.1%               2.0%       2.6%                3.3%
In the 2001 census the population of the Somerset County Council area was 498,093, with 169,040 in Bath and North East Somerset and 188,564 in North Somerset, giving a total for the ceremonial county of 855,697.
Population growth is higher than the national average, with a 6.4% increase in the Somerset County Council area since 1991, and a 17% increase since 1981. The population density is 1.4 persons per hectare, compared with 2.07 persons per hectare for the South West region. Within the county, population density ranges from 0.5 persons per hectare in West Somerset to 2.2 in Taunton Deane. The percentage of the population who are economically active is higher than the regional and national averages, and the unemployment rate is lower than the regional and national averages.
Somerset has a high indigenous British population, with 98.8% registering as white British and 92.4% of these born in the United Kingdom. Chinese people form the largest minority ethnic group, while the black and minority ethnic proportion of the total population is 2.9%. Over 25% of Somerset's population is concentrated in Taunton, Bridgwater and Yeovil. The rest of the county is rural and sparsely populated. Over 9 million tourist nights are spent in Somerset each year, which significantly increases the population at peak times.
Population since 1801
Year   Somerset CC area   BANES     North Somerset   Total
1801   187,266            57,188    16,670           261,124
1851   276,684            96,992    33,774           407,450
1901   277,563            107,637   60,066           445,266
1911   280,215            113,732   68,410           462,357
1921   282,411            113,351   75,276           471,038
1931   284,740            112,972   82,833           479,758
1941   305,244            123,185   91,967           520,396
1951   327,505            134,346   102,119          563,970
1961   355,292            144,950   119,509          619,751
1971   385,698            156,421   139,924          682,043
1981   417,450            154,083   160,353          731,886
1991   468,395            164,737   179,865          812,997
2001   498,093            169,045   188,556          855,694
Politics
thumb|alt=Stone building with colonnaded entrance. Above is a clock tower.|Weston-super-Mare town hall, the administrative headquarters of North Somerset
The county is divided into nine constituencies, each returning one Member of Parliament (MP) to the House of Commons. In the May 2015 general election, all nine constituencies of the county elected Conservative MPs.BBC Election 2015: Constituencies The current constituencies of Somerset are Bridgwater and West Somerset, North East Somerset, North Somerset, Bath, Somerton and Frome, Taunton Deane, Wells, Yeovil, and Weston-super-Mare.
Residents of Somerset also form part of the electorate for the South West England constituency for elections to the European Parliament.
Local government
The ceremonial county of Somerset consists of a two-tier non-metropolitan county, which is administered by Somerset County Council and five district councils, and two unitary authority areas (whose councils combine the functions of a county and a district). The five districts of Somerset are West Somerset, South Somerset, Taunton Deane, Mendip, and Sedgemoor. The two unitary authorities — which were established on 1 April 1996 following the break-up of the short-lived county of Avon — are North Somerset, and Bath & North East Somerset.
These unitary authorities formed part of the administrative county of Somerset before the creation of Avon (a county created to cover Bristol and its environs in north Somerset and south Gloucestershire) in 1974. Bath, however, was a largely independent county borough during the existence of the administrative county of Somerset (from 1889 to 1974).
In 2007, proposals to abolish the five district councils in favour of a unitary authority (covering the existing two-tier county) were rejected following local opposition. West Somerset is the least populous district (except for the two sui generis districts) in England. In September 2016, West Somerset and Taunton Deane councils agreed in principle to merge the districts into one (with one council) subject to consultation.West Somerset Online It is planned to achieve this on 1 April 2019 with the first elections to the new council in May 2019. The new district would not be a unitary authority, with Somerset County Council still performing its functions.Your New Council
Civil parishes
Almost all of the county is covered by the lowest/most local form of English local government, the civil parish, with either a town or parish council (a city council in the instance of Wells) or a parish meeting; some parishes group together, with a single council or meeting for the group. The city of Bath (the area of the former county borough) and much of the town of Taunton are unparished areas.
Emergency services
All of the ceremonial county of Somerset is covered by the Avon and Somerset Constabulary, a police force which also covers Bristol and South Gloucestershire. The police force is governed by the elected Avon and Somerset Police and Crime Commissioner. The Devon and Somerset Fire and Rescue Service was formed in 2007 upon the merger of the Somerset Fire and Rescue Service with its neighbouring Devon service; it covers the area of Somerset County Council as well as the entire ceremonial county of Devon. The unitary districts of North Somerset and Bath & North East Somerset are instead covered by the Avon Fire and Rescue Service, a service which also covers Bristol and South Gloucestershire. The South Western Ambulance Service covers the entire South West of England, including all of Somerset; prior to February 2013 the unitary districts of Somerset came under the Great Western Ambulance Service, which merged into South Western. The Dorset and Somerset Air Ambulance is a charitable organisation based in the county.
Culture
thumb|alt=Large ornate grey stone facade of a building. Symmetrical with towers either side.|right|The west front of Wells Cathedral
Somerset has traditions of art, music and literature. Wordsworth and Coleridge wrote while staying in Coleridge Cottage, Nether Stowey.
The writer Evelyn Waugh spent his last years in the village of Combe Florey. The novelist John Cowper Powys (1872–1963) lived in the Somerset village of Montacute from 1885 until 1894 and his novels Wood and Stone (1915) and A Glastonbury Romance (1932) are set in Somerset.
Traditional folk music, both song and dance, was important in the agricultural communities. Somerset songs were collected by Cecil Sharp and incorporated into works such as Holst's A Somerset Rhapsody. Halsway Manor near Williton is an international centre for folk music. The tradition continues today with groups such as The Wurzels specialising in Scrumpy and Western music.
The Glastonbury Festival of Contemporary Performing Arts takes place most years in Pilton, near Shepton Mallet, attracting over 170,000 music and culture lovers from around the world to see world-famous entertainers.
The Big Green Gathering which grew out of the Green fields at the Glastonbury Festival is held in the Mendip Hills between Charterhouse and Compton Martin each summer.
The annual Bath Literature Festival is one of several local festivals in the county; others include the Frome Festival and the Trowbridge Village Pump Festival, which, despite its name, is held at Farleigh Hungerford in Somerset. The annual circuit of West Country Carnivals is held in a variety of Somerset towns during the autumn, forming a major regional festival, and the largest Festival of Lights in Europe.
thumb|alt=In the distance a small hill with a stone tower on the top. In the foreground flat land with vegetation.|left|Glastonbury Tor
In Arthurian legend, Avalon became associated with Glastonbury Tor when monks at Glastonbury Abbey claimed to have discovered the bones of King Arthur and his queen. What is more certain is that Glastonbury was an important religious centre by 700 and claims to be "the oldest above-ground Christian church in the World" situated "in the mystical land of Avalon." The claim is based on dating the founding of the community of monks at AD 63, the year of the legendary visit of Joseph of Arimathea, who was supposed to have brought the Holy Grail. During the Middle Ages there were also important religious sites at Woodspring Priory and Muchelney Abbey. The present Diocese of Bath and Wells covers Somerset – with the exception of the Parish of Abbots Leigh with Leigh Woods in North Somerset – and a small area of Dorset. The Episcopal seat of the Bishop of Bath and Wells is now in the Cathedral Church of Saint Andrew in the city of Wells, having previously been at Bath Abbey. Before the English Reformation, it was a Roman Catholic diocese; the county now falls within the Roman Catholic Diocese of Clifton. The Benedictine monastery Saint Gregory's Abbey, commonly known as Downside Abbey, is at Stratton-on-the-Fosse, and the ruins of the former Cistercian Cleeve Abbey are near the village of Washford.
thumb|alt=Yellow stone ornate facade of building with lower arched front to the left. In the foreground are flowers in a formal garden.|Tyntesfield
The county has several museums; those at Bath include the American Museum in Britain, the Museum of Bath Architecture, the Herschel Museum of Astronomy, the Jane Austen Centre, and the Roman Baths. Other visitor attractions which reflect the cultural heritage of the county include: Claverton Pumping Station, Dunster Working Watermill, the Fleet Air Arm Museum at Yeovilton, Nunney Castle, The Helicopter Museum in Weston-super-Mare, King John's Hunting Lodge in Axbridge, Blake Museum Bridgwater, Radstock Museum, Museum of Somerset in Taunton, the Somerset Rural Life Museum in Glastonbury, and Westonzoyland Pumping Station Museum.
Somerset has 11,500 listed buildings, 523 scheduled monuments, 192 conservation areas, 41 parks and gardens including those at Barrington Court, Holnicote Estate, Prior Park Landscape Garden and Tintinhull Garden, 36 English Heritage sites and 19 National Trust sites, including Clevedon Court, Fyne Court, Montacute House and Tyntesfield as well as Stembridge Tower Mill, the last remaining thatched windmill in England. Other historic houses in the county which have remained in private ownership or used for other purposes include Halswell House and Marston Bigot. A key contribution of Somerset architecture is its medieval church towers. Jenkins writes, "These structures, with their buttresses, bell-opening tracery and crowns, rank with Nottinghamshire alabaster as England's finest contribution to medieval art."
Bath Rugby play at the Recreation Ground in Bath, and the Somerset County Cricket Club are based at the County Ground in Taunton. The county gained its first Football League club in 2003, when Yeovil Town won promotion to Division Three as Football Conference champions. They had achieved numerous FA Cup victories over Football League sides in the previous 50 years, and since joining the League they won promotion again, as League Two champions in 2005. They came close to yet another promotion in 2007, when they reached the League One playoff final, but lost to Blackpool at the newly reopened Wembley Stadium. Yeovil achieved promotion to the Championship in 2013 after beating Brentford in the playoff final. Horse racing courses are at Taunton and Wincanton.
In addition to English national newspapers the county is served by the regional Western Daily Press and local newspapers including the Weston & Somerset Mercury, the Bath Chronicle, the Chew Valley Gazette, the Somerset County Gazette, the Clevedon Mercury, the Mendip Times, and the West Somerset Free Press. Television and radio are provided by BBC Somerset, Heart West Country, The Breeze (Yeovil & South Somerset), and HTV, now known as ITV Wales & West Ltd but still commonly referred to as HTV.
The Flag of Somerset, representing the ceremonial county, has been registered with the Flag Institute following a competition in July 2013.
Transport
thumb|left|Bristol Airport, which is located in North Somerset.
Somerset has an extensive network of roads. The main arterial routes, which include the M5 motorway, A303, A37, A38, A39, A358 and A361, give good access across the county, but many areas can be reached only via narrow country lanes.
Rail services are provided by the West of England Main Line through Yeovil Junction, the Bristol to Exeter Line, Heart of Wessex Line which runs from Bristol Temple Meads to Weymouth and the Reading to Taunton Line. The key train operator for Somerset is First Great Western, and other services are operated by South West Trains and CrossCountry.
Bristol Airport, located in North Somerset, provides national and international air services.
The Somerset Coal Canal was built in the early 19th century to reduce the cost of transporting coal and other heavy produce. The first section, running from a junction with the Kennet and Avon Canal along the Cam valley to a terminal basin at Paulton, was in use by 1805, together with several tramways. A planned branch to Midford was never built, but in 1815 a tramway was laid along its towing path. In 1871 the tramway was purchased by the Somerset and Dorset Joint Railway (S&DJR),Atthill, Robin (1967). The Somerset & Dorset Railway. Newton Abbot, Devon: David & Charles. ISBN 0-7153-4164-2. and operated until the 1950s.
The 19th century saw improvements to Somerset's roads with the introduction of turnpikes, and the building of canals and railways. Nineteenth-century canals included the Bridgwater and Taunton Canal, Westport Canal, Glastonbury Canal and Chard Canal. The Dorset and Somerset Canal was proposed, but little of it was ever constructed and it was abandoned in 1803.
thumb|right|A steam locomotive and carriages on the West Somerset Railway, a heritage line of notable length, in spring 2015.
The usefulness of the canals was short-lived, though some have now been restored for recreation. The 19th century also saw the construction of railways to and through Somerset. The county was served by five pre-1923 Grouping railway companies: the Great Western Railway (GWR);St John Thomas, David (1960). A Regional history of the railways of Great Britain: Volume 1 – The West Country. London: Phoenix House. a branch of the Midland Railway (MR) to Bath Green Park (and another one to Bristol);Smith, Martin (1992). The Railways of Bristol and Somerset. Shepperton: Ian Allan Ltd. ISBN 0-7110-2063-9. the Somerset and Dorset Joint Railway,Awdry, Christopher (1990). Encyclopaedia of British Railway Companies. Patrick Stephens Ltd. p. 237.Casserley, H.C. (1968). Britain's Joint Lines. London: Ian Allan. ISBN 0-7110-0024-7. and the London and South Western Railway (L&SWR).Williams, R. A. (1968) The London & South Western Railway, v. 1: The formative years, and v. 2: Growth and consolidation. Newton Abbot, Devon: David & Charles, ISBN 0-7153-4188-X; ISBN 0-7153-5940-1 The former main lines of the GWR are still in use today, although many of its branch lines were scrapped under the notorious Beeching Axe. The former lines of the Somerset and Dorset Joint Railway closed completely,Atthill, Robin and Nock, O. S. (1967). The Somerset & Dorset Railway. Newton Abbot, Devon: David & Charles. ISBN 0-7153-4164-2. as has the branch of the Midland Railway to Bath Green Park (and to Bristol St Philips); however, the L&SWR survived as a part of the present West of England Main Line. None of these lines, in Somerset, are electrified. Two branch lines, the West and East Somerset Railways, were rescued and transferred back to private ownership as "heritage" lines. The fifth railway was a short-lived light railway, the Weston, Clevedon and Portishead Light Railway. The West Somerset Mineral Railway carried the iron ore from the Brendon Hills to Watchet.
Until the 1960s the piers at Weston-super-Mare, Clevedon, Portishead and Minehead were served by the paddle steamers of P and A Campbell, which ran regular services to Barry and Cardiff as well as Ilfracombe and Lundy Island. The pier at Burnham-on-Sea, the shortest pier in the UK, was used for commercial goods; one of the reasons for building the Somerset and Dorset Joint Railway was to provide a link between the Bristol Channel and the English Channel.Handley, Chris (2001). Maritime Activities of the Somerset & Dorset Railway. Cleckheaton: Millstream Books. ISBN 0-948975-63-6. In the 1970s the Royal Portbury Dock was constructed to provide extra capacity for the Port of Bristol.
For long-distance holiday traffic travelling to and from Devon and Cornwall, Somerset is often regarded simply as a stage on the journey. North–south traffic moves through the county via the M5 motorway.Charlesworth, George (1984). A History of British Motorways. London: Thomas Telford Limited. ISBN 0-7277-0159-2. Traffic to and from the east travels either via the A303 road or the M4 motorway, which runs east–west, crossing the M5 just beyond the northern limits of the county.
Education
State schools in Somerset are provided by three local education authorities: Bath and North East Somerset, North Somerset, and the larger Somerset County Council. All state schools are comprehensive. In some areas primary, infant and junior schools cater for ages four to eleven, after which the pupils move on to secondary schools. There is a three-tier system of first, middle and upper schools in the Cheddar Valley, and in West Somerset, while most other schools in the county use the two-tier system. Somerset has 30 state and 17 independent secondary schools; Bath and North East Somerset has 13 state and 5 independent secondary schools; and North Somerset has 10 state and 2 independent secondary schools, excluding sixth form colleges.
% of pupils gaining 5 grades A–C including English and Maths in 2006 (average for England is 45.8%):
Bath and North East Somerset (Unitary Authority): 52.0%
West Somerset: 51.0%
Taunton Deane: 49.5%
Mendip: 47.7%
North Somerset (Unitary Authority): 47.4%
South Somerset: 42.3%
Sedgemoor: 41.4%
Some of the county's secondary schools have specialist school status. Some schools have sixth forms and others transfer their sixth formers to colleges. Several schools can trace their origins back many years, such as The Blue School in Wells and Richard Huish College in Taunton. Others have changed their names over the years such as Beechen Cliff School which was started in 1905 as the City of Bath Boys' School and changed to its present name in 1972 when the grammar school was amalgamated with a local secondary modern school, to form a comprehensive school. Many others were established and built since the Second World War. In 2006, 5,900 pupils in Somerset sat GCSE examinations, with 44.5% achieving 5 grades A-C including English and Maths (compared to 45.8% for England).
Sexey's School is a state boarding school in Bruton that also takes day pupils from the surrounding area. The Somerset LEA also provides special schools such as Newbury Manor School, which caters for children aged between 10 and 17 with special educational needs. Provision for pupils with special educational needs is also made by the mainstream schools.
There is also a range of independent or public schools. Many of these are for pupils between 11 and 18 years, such as King's College, Taunton and Taunton School. King's School, Bruton, was founded in 1519 and received royal foundation status around 30 years later in the reign of Edward VI. Millfield is the largest co-educational boarding school. There are also preparatory schools for younger children, such as All Hallows, and Hazlegrove Preparatory School. Chilton Cantelo School offers places both to day pupils and boarders aged 7 to 16. Other schools provide education for children from the age of 3 or 4 years through to 18, such as King Edward's School, Bath, Queen's College, Taunton and Wells Cathedral School which is one of the five established musical schools for school-age children in Britain. Some of these schools have religious affiliations, such as Monkton Combe School, Prior Park College, Sidcot School which is associated with the Religious Society of Friends, Downside School which is a Roman Catholic public school in Stratton-on-the-Fosse, situated next to the Benedictine Downside Abbey, and Kingswood School, which was founded by John Wesley in 1748 in Kingswood near Bristol, originally for the education of the sons of the itinerant ministers (clergy) of the Methodist Church.
Further and higher education
A wide range of adult education and further education courses is available in Somerset, in schools, colleges and other community venues. The colleges include Weston College, Bridgwater College, Bath College, Frome Community College, Richard Huish College, Somerset College of Arts and Technology, Strode College and Yeovil College. Somerset County Council operates Dillington House, a residential adult education college located in Ilminster.
The University of Bath, Bath Spa University and University Centre Weston are higher education establishments in the north of the county. The University of Bath gained its Royal Charter in 1966, although its origins go back to the Bristol Trade School (founded 1856) and Bath School of Pharmacy (founded 1907). It has a purpose-built campus at Claverton on the outskirts of Bath, and has 15,000 students. Bath Spa University, which is based at Newton St Loe, achieved university status in 2005, and has origins including the Bath Academy of Art (founded 1898), Bath Teacher Training College, and the Bath College of Higher Education. It has several campuses and 5,500 students.
See also
Outline of England
List of High Sheriffs of Somerset
List of hills of Somerset
Grade I listed buildings in Somerset
List of tourist attractions in Somerset
Lord Lieutenant of Somerset
West Country English
Healthcare in Somerset
Notes
References
Further reading
Victoria History of the Counties of England – History of the County of Somerset. Oxford: Oxford University Press, for: The Institute of Historical Research.
Note: Volumes I to IX published so far; an on-line version is available for some volumes.
Volume I: Natural History, Prehistory, Domesday
Volume II: Ecclesiastical History, Religious Houses, Political, Maritime, and Social and Economic History, Earthworks, Agriculture, Forestry, Sport.
Volume III: Pitney, Somerton, and Tintinhull hundreds.
Volume IV: Crewkerne, Martock, and South Petherton hundreds.
Volume V: Williton and Freemanors hundred.
Volume VI: Andersfield, Cannington and North Petherton hundreds (Bridgwater and neighbouring parishes).
Volume VII: Bruton, Horethorne and Norton Ferris Hundreds.
Volume VIII: The Poldens and the Levels.
Volume IX: Glastonbury and Street, Baltonsborough, Butleigh, Compton Dundon, Meare, North Wootton, Podimore, Milton, Walton, West Bradley, and West Pennard.
External links
Official Somerset Tourism website
Somerset County Council
Somerset at GENUKI
Category:Somerset
Category:Articles including recorded pronunciations (UK English)
Category:Non-metropolitan counties | 51,763 | 2017-01 |
Renewable energy commercialization | thumb|The Sun, wind, and hydroelectricity are three renewable energy sources.
thumb|New investments globally in renewable energyBloomberg New Energy Finance, UNEP SEFI, Frankfurt School, Global Trends in Renewable Energy Investment 2011
thumb|right|The 150 MW Andasol solar power station is a commercial parabolic trough solar thermal power plant, located in Spain. The Andasol plant uses tanks of molten salt to store solar energy so that it can continue generating electricity even when the sun is not shining.
Renewable energy commercialization involves the deployment of three generations of renewable energy technologies dating back more than 100 years. First-generation technologies, which are already mature and economically competitive, include biomass, hydroelectricity, geothermal power and heat. Second-generation technologies are market-ready and are being deployed at the present time; they include solar heating, photovoltaics, wind power, solar thermal power stations, and modern forms of bioenergy. Third-generation technologies require continued R&D efforts in order to make large contributions on a global scale and include advanced biomass gasification, hot-dry-rock geothermal power, and ocean energy.International Energy Agency (2007). Renewables in global energy supply: An IEA facts sheet (PDF) OECD, 34 pages. As of 2012, renewable energy accounts for about half of new nameplate electrical capacity installed and costs are continuing to fall.
Public policy and political leadership help to "level the playing field" and drive the wider acceptance of renewable energy technologies.Donald W. Aitken. Transitioning to a Renewable Energy Future, International Solar Energy Society, January 2010, p. 3. Countries such as Germany, Denmark, and Spain have led the way in implementing innovative policies, which have driven most of the growth over the past decade. As of 2014, Germany has a commitment to the "Energiewende" transition to a sustainable energy economy, and Denmark has a commitment to 100% renewable energy by 2050. There are now 144 countries with renewable energy policy targets.
Renewable energy continued its rapid growth in 2015, providing multiple benefits. There was a new record set for installed wind and photovoltaic capacity (64 GW and 57 GW) and a new high of US$329 billion for global renewables investment. A key benefit of this investment growth is growth in jobs.Editorial, Green Gold, Nature Energy, 2016. The top countries for investment in recent years were China, Germany, Spain, the United States, Italy, and Brazil.REN21 (2012). Renewables Global Status Report 2012 p. 17. Renewable energy companies include BrightSource Energy, First Solar, Gamesa, GE Energy, Goldwind, Sinovel, Trina Solar, Vestas, and Yingli.Top of the list, Renewable Energy World, 2 January 2006.Keith Johnson, Wind Shear: GE Wins, Vestas Loses in Wind-Power Market Race, Wall Street Journal, 25 March 2009, accessed on 7 January 2010.
Climate change concernsInternational Energy Agency. IEA urges governments to adopt effective policies based on key design principles to accelerate the exploitation of the large potential for renewable energy 29 September 2008.REN21 (2006). Changing climates: The Role of Renewable Energy in a Carbon-constrained World (PDF) p. 2.HM Treasury (2006). Stern Review on the Economics of Climate Change. are also driving increasing growth in the renewable energy industries.New UN report points to power of renewable energy to mitigate carbon emissions UN News Centre, 8 December 2007.Joel Makower, Ron Pernick and Clint Wilder (2008). [http://www.cleanedge.com/reports/pdf/Trends2008.pdf Clean Energy Trends 2008], Clean Edge, p. 2. According to a 2011 projection by the International Energy Agency (IEA), solar power generators may produce most of the world's electricity within 50 years, reducing harmful greenhouse gas emissions.
Overview
thumb|350px|right|alt=refer to caption and image description|Global public support for energy sources, based on a survey by Ipsos (2011).
Rationale for renewables
Climate change, pollution, and energy insecurity are significant problems, and addressing them requires major changes to energy infrastructures. Renewable energy technologies are essential contributors to the energy supply portfolio, as they contribute to world energy security, reduce dependency on fossil fuels, and provide opportunities for mitigating greenhouse gases. Climate-disrupting fossil fuels are being replaced by clean, climate-stabilizing, non-depletable sources of energy:
...the transition from coal, oil, and gas to wind, solar, and geothermal energy is well under way. In the old economy, energy was produced by burning something — oil, coal, or natural gas — leading to the carbon emissions that have come to define our economy. The new energy economy harnesses the energy in wind, the energy coming from the sun, and heat from within the earth itself.Lester R. Brown. Plan B 4.0: Mobilizing to Save Civilization, Earth Policy Institute, 2009, p. 135.
In international public opinion surveys there is strong support for a variety of methods for addressing the problem of energy supply. These methods include promoting renewable sources such as solar power and wind power, requiring utilities to use more renewable energy, and providing tax incentives to encourage the development and use of such technologies. It is expected that renewable energy investments will pay off economically in the long term.
EU member countries have shown support for ambitious renewable energy goals. In 2010, Eurobarometer polled the twenty-seven EU member states about the target "to increase the share of renewable energy in the EU by 20 percent by 2020". Most people in all twenty-seven countries either approved of the target or called for it to go further. Across the EU, 57 percent thought the proposed goal was "about right" and 16 percent thought it was "too modest." In comparison, 19 percent said it was "too ambitious".
As of 2011, new evidence has emerged that there are considerable risks associated with traditional energy sources, and that major changes to the mix of energy technologies are needed:
Several mining tragedies globally have underscored the human toll of the coal supply chain. New EPA initiatives targeting air toxics, coal ash, and effluent releases highlight the environmental impacts of coal and the cost of addressing them with control technologies. The use of fracking in natural gas exploration is coming under scrutiny, with evidence of groundwater contamination and greenhouse gas emissions. Concerns are increasing about the vast amounts of water used at coal-fired and nuclear power plants, particularly in regions of the country facing water shortages. Events at the Fukushima nuclear plant have renewed doubts about the ability to operate large numbers of nuclear plants safely over the long term. Further, cost estimates for "next generation" nuclear units continue to climb, and lenders are unwilling to finance these plants without taxpayer guarantees.
The 2014 REN21 Global Status Report says that renewable energies are no longer just energy sources, but ways to address pressing social, political, economic and environmental problems:
Today, renewables are seen not only as sources of energy, but also as tools to address many other pressing needs, including: improving energy security; reducing the health and environmental impacts associated with fossil and nuclear energy; mitigating greenhouse gas emissions; improving educational opportunities; creating jobs; reducing poverty; and increasing gender equality... Renewables have entered the mainstream.
Growth of renewables
In 2008 for the first time, more renewable energy than conventional power capacity was added in both the European Union and United States, demonstrating a "fundamental transition" of the world's energy markets towards renewables, according to a report released by REN21, a global renewable energy policy network based in Paris. In 2010, renewable power made up about a third of the newly built power generation capacity.UNEP, Bloomberg, Frankfurt School, Global Trends in Renewable Energy Investment 2011, Figure 24.
By the end of 2011, total renewable power capacity worldwide exceeded 1,360 GW, up 8%. Renewables producing electricity accounted for almost half of the 208 GW of capacity added globally during 2011, with wind and solar photovoltaics (PV) making up almost 40% and 30% of the renewable additions, respectively.Renewables 2012 Global status report Executive summary REN21 Based on REN21's 2014 report, renewables contributed 19 percent to our energy consumption and 22 percent to our electricity generation in 2012 and 2013, respectively. This energy consumption breaks down as 9% from traditional biomass, 4.2% as heat energy (non-biomass), 3.8% from hydroelectricity and 2% as electricity from wind, solar, geothermal, and biomass.
During the five-years from the end of 2004 through 2009, worldwide renewable energy capacity grew at rates of 10–60 percent annually for many technologies.REN21 (2010). Renewables 2010 Global Status Report p. 15. In 2011, UN under-secretary general Achim Steiner said: "The continuing growth in this core segment of the green economy is not happening by chance. The combination of government target-setting, policy support and stimulus funds is underpinning the renewable industry's rise and bringing the much needed transformation of our global energy system within reach." He added: "Renewable energies are expanding both in terms of investment, projects and geographical spread. In doing so, they are making an increasing contribution to combating climate change, countering energy poverty and energy insecurity".
According to a 2011 projection by the International Energy Agency, solar power plants may produce most of the world's electricity within 50 years, significantly reducing the emissions of greenhouse gases that harm the environment. The IEA has said: "Photovoltaic and solar-thermal plants may meet most of the world's demand for electricity by 2060 – and half of all energy needs – with wind, hydropower and biomass plants supplying much of the remaining generation". "Photovoltaic and concentrated solar power together can become the major source of electricity".
+Selected renewable energy indicatorsEric Martinot and Janet Sawin. Renewables Global Status Report 2009 Update, Renewable Energy World, 9 September 2009.REN21 (2009). Renewables Global Status Report: 2009 Update p. 9.REN21 (2013). Renewables 2013 Global Status Report, (Paris: REN21 Secretariat), ISBN 978-3-9815934-0-2.
Selected global indicators, by year 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 (units)
Investment in new renewable capacity (annual): 30 38 63 104 130 160 211 257 244 214 270 285 billion USD
Existing renewables power capacity, including large-scale hydro: 895 930 1,020 1,070 1,140 1,230 1,320 1,360 1,470 1,560 1,712 1,849 GWe
Existing renewables power capacity, excluding large hydro: 200 250 312 390 480 560 657 785 GWe
Hydropower capacity (existing): 915 945 970 990 1,000 1,055 1,064 GWe
Wind power capacity (existing): 48 59 74 94 121 159 198 238 283 318 370 433 GWe
Solar PV capacity (grid-connected): 7.6 16 23 40 70 100 139 177 227 GWe
Solar hot water capacity (existing): 77 88 105 120 130 160 185 232 255 326 406 435 GWth
Ethanol production (annual): 30.5 33 39 50 67 76 86 86 83 87 94 98 billion liters
Biodiesel production (annual): 12 17 19 21 22 26 29.7 30 billion liters
Countries with policy targets for renewable energy use: 45 49 68 79 89 98 118 138 144 164 173
In 2013, China led the world in renewable energy production, with a total capacity of 378 GW, mainly from hydroelectric and wind power. As of 2014, China leads the world in the production and use of wind power, solar photovoltaic power and smart grid technologies, generating almost as much water, wind and solar energy as all of France and Germany's power plants combined. China's renewable energy sector is growing faster than its fossil fuels and nuclear power capacity. Since 2005, production of solar cells in China has expanded 100-fold. As Chinese renewable manufacturing has grown, the costs of renewable energy technologies have dropped. Innovation has helped, but the main driver of reduced costs has been market expansion.
For US figures, see Renewable energy in the United States.
Economic trends
thumb|The National Renewable Energy Laboratory projects that the levelized cost of wind power will decline about 25% from 2012 to 2030.E. Lantz, M. Hand, and R. Wiser ( 13–17 May 2012) "The Past and Future Cost of Wind Energy," National Renewable Energy Laboratory conference paper no. 6A20-54526, page 4
Renewable energy technologies are getting cheaper, through technological change and through the benefits of mass production and market competition. A 2011 IEA report said: "A portfolio of renewable energy technologies is becoming cost-competitive in an increasingly broad range of circumstances, in some cases providing investment opportunities without the need for specific economic support," and added that "cost reductions in critical technologies, such as wind and solar, are set to continue." There have been substantial reductions in the cost of solar and wind technologies:
The price of PV modules per MW has fallen by 60 percent since the summer of 2008, according to Bloomberg New Energy Finance estimates, putting solar power for the first time on a competitive footing with the retail price of electricity in a number of sunny countries. Wind turbine prices have also fallen – by 18 percent per MW in the last two years – reflecting, as with solar, fierce competition in the supply chain. Further improvements in the levelised cost of energy for solar, wind and other technologies lie ahead, posing a growing threat to the dominance of fossil fuel generation sources in the next few years.
Hydro-electricity and geothermal electricity produced at favourable sites are now the cheapest way to generate electricity. Renewable energy costs continue to drop, and the levelised cost of electricity (LCOE) is declining for wind power, solar photovoltaic (PV), concentrated solar power (CSP) and some biomass technologies.
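A minimal sketch of what "levelised cost" means may help here; the following is the standard textbook definition rather than a formula taken from the reports cited above:

\[ \mathrm{LCOE} = \frac{\sum_{t=0}^{n} (I_t + O_t + F_t)/(1+r)^t}{\sum_{t=1}^{n} E_t/(1+r)^t} \]

where I_t is investment expenditure in year t, O_t is operation and maintenance cost, F_t is fuel cost (zero for wind, solar and hydro), E_t is the electricity generated, r is the discount rate and n is the plant lifetime. Because wind and solar have no fuel costs, falling capital costs feed directly into a lower LCOE, which is the basis on which the cost comparisons above are made.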
Renewable energy is also the most economic solution for new grid-connected capacity in areas with good resources. As the cost of renewable power falls, the scope of economically viable applications increases. Renewable technologies are now often the most economic solution for new generating capacity. Where "oil-fired generation is the predominant power generation source (e.g. on islands, off-grid and in some countries) a lower-cost renewable solution almost always exists today". As of 2012, renewable power generation technologies accounted for around half of all new power generation capacity additions globally. In 2011, additions included 41 gigawatt (GW) of new wind power capacity, 30 GW of PV, 25 GW of hydro-electricity, 6 GW of biomass, 0.5 GW of CSP, and 0.1 GW of geothermal power.
Three generations of technologies
Renewable energy includes a number of sources and technologies at different stages of commercialization. The International Energy Agency (IEA) has defined three generations of renewable energy technologies, reaching back over 100 years:
"First-generation technologies emerged from the industrial revolution at the end of the 19th century and include hydropower, biomass combustion, geothermal power and heat. These technologies are quite widely used.
Second-generation technologies include solar heating and cooling, wind power, modern forms of bioenergy, and solar photovoltaics. These are now entering markets as a result of research, development and demonstration (RD&D) investments since the 1980s. Initial investment was prompted by energy security concerns linked to the oil crises of the 1970s but the enduring appeal of these technologies is due, at least in part, to environmental benefits. Many of the technologies reflect significant advancements in materials.
Third-generation technologies are still under development and include advanced biomass gasification, biorefinery technologies, concentrating solar thermal power, hot-dry-rock geothermal power, and ocean energy. Advances in nanotechnology may also play a major role". First-generation technologies are well established, second-generation technologies are entering markets, and third-generation technologies heavily depend on long-term research and development commitments, where the public sector has a role to play.
First-generation technologies
thumb|right|Biomass heating plant in Austria. The total heat power is about 1000 kW.
First-generation technologies are widely used in locations with abundant resources. Their future use depends on the exploration of the remaining resource potential, particularly in developing countries, and on overcoming challenges related to the environment and social acceptance.
Biomass
Biomass for heat and power is a fully mature technology which offers a ready disposal mechanism for municipal, agricultural, and industrial organic wastes. However, the industry has remained relatively stagnant over the decade to 2007, even though demand for biomass (mostly wood) continues to grow in many developing countries. One of the problems of biomass is that material directly combusted in cook stoves produces pollutants, leading to severe health and environmental consequences, although improved cook stove programmes are alleviating some of these effects. First-generation biomass technologies can be economically competitive, but may still require deployment support to overcome public acceptance and small-scale issues.
Hydroelectricity
thumb| The 22,500 MW Three Gorges Dam in the People's Republic of China, the largest hydroelectric power station in the world.
Hydroelectricity is the term referring to electricity generated by hydropower: the production of electrical power through the use of the gravitational force of falling or flowing water. In 2015 hydropower generated 16.6% of the world's total electricity and 70% of all renewable electricityhttp://www.ren21.net/wp-content/uploads/2016/06/GSR_2016_Full_Report_REN21.pdf and is expected to increase by about 3.1% each year for the next 25 years. Hydroelectric plants have the advantage of being long-lived; many existing plants have operated for more than 100 years.
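As an illustrative aside, the power available from falling water follows a standard engineering relation (the worked numbers below are assumptions for illustration, not figures from the sources cited here):

\[ P = \eta \, \rho \, g \, Q \, H \]

where \eta is the combined turbine and generator efficiency (around 0.9 for large plants), \rho is the density of water (about 1,000 kg/m3), g is gravitational acceleration, Q is the flow rate and H is the head. A flow of 100 m3/s falling through a 100 m head at 90% efficiency yields roughly 88 MW, which indicates why high-head, high-flow sites dominate the capacity figures below.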
Hydropower is produced in 150 countries, with the Asia-Pacific region generating 32 percent of global hydropower in 2010. China is the largest hydroelectricity producer, with 721 terawatt-hours of production in 2010, representing around 17 percent of domestic electricity use. There are now three hydroelectricity plants larger than 10 GW: the Three Gorges Dam in China, Itaipu Dam across the Brazil/Paraguay border, and Guri Dam in Venezuela. The cost of hydroelectricity is low, making it a competitive source of renewable electricity. The average cost of electricity from a hydro plant larger than 10 megawatts is 3 to 5 U.S. cents per kilowatt-hour.
Geothermal power and heat
thumb|right|One of many power plants at The Geysers, a geothermal power field in northern California, with a total output of over 750 MW
Geothermal power plants can operate 24 hours per day, providing baseload capacity. Estimates for the world potential capacity for geothermal power generation vary widely, ranging from 40 GW by 2020 to as much as 6,000 GW.Bertani, R., 2003, "What is Geothermal Potential?", IGA News, 53, page 1-3.Fridleifsson, I.B., R. Bertani, E. Huenges, J. W. Lund, A. Ragnarsson, and L. Rybach (2008). The possible role and contribution of geothermal energy to the mitigation of climate change . In: O. Hohmeyer and T. Trittin (Eds.), IPCC Scoping Meeting on Renewable Energy Sources, Proceedings, Luebeck, Germany, 20–25 January 2008, p. 59-80.
Geothermal power capacity grew from around 1 GW in 1975 to almost 10 GW in 2008. The United States is the world leader in terms of installed capacity, representing 3.1 GW. Other countries with significant installed capacity include the Philippines (1.9 GW), Indonesia (1.2 GW), Mexico (1.0 GW), Italy (0.8 GW), Iceland (0.6 GW), Japan (0.5 GW), and New Zealand (0.5 GW).Islandsbanki Geothermal Research, United States Geothermal Energy Market Report, October 2009, accessed through website of Islandbanki. In some countries, geothermal power accounts for a significant share of the total electricity supply, such as in the Philippines, where geothermal represented 17 percent of the total power mix at the end of 2008.Leonora Walet. Philippines targets $2.5 billion geothermal development, Reuters, 5 November 2009.
Geothermal (ground source) heat pumps represented an estimated 30 GWth of installed capacity at the end of 2008, with other direct uses of geothermal heat (i.e., for space heating, agricultural drying and other uses) reaching an estimated 15 GWth. At least 76 countries use direct geothermal energy in some form.
Second-generation technologies
Markets for second-generation technologies have been strong and growing over the past decade, and these technologies have gone from being a passion for the dedicated few to a major economic sector in countries such as Germany, Spain, the United States, and Japan. Many large industrial companies and financial institutions are involved and the challenge is to broaden the market base for continued growth worldwide.
Solar heating
thumb|Solar energy technologies, such as solar water heaters, located on or near the buildings which they supply with energy, are a prime example of a soft energy technology.
Solar heating systems are a well known second-generation technology and generally consist of solar thermal collectors, a fluid system to move the heat from the collector to its point of usage, and a reservoir or tank for heat storage. The systems may be used to heat domestic hot water, swimming pools, or homes and businesses.Brian Norton (2011) Solar Water Heaters: A Review of Systems Research and Design Innovation, Green. 1, 189–207, ISSN (Online) 1869–8778 The heat can also be used for industrial process applications or as an energy input for other uses such as cooling equipment.International Energy Agency. Solar assisted air-conditioning of buildings
In many warmer climates, a solar heating system can provide a very high percentage (50 to 75%) of domestic hot water energy. China has 27 million rooftop solar water heaters.Lester R. Brown. Plan B 4.0: Mobilizing to Save Civilization, Earth Policy Institute, 2009, p. 122.
Photovoltaics
thumb|Nellis Solar Power Plant at Nellis Air Force Base. These panels track the sun in one axis.
thumb|President Barack Obama speaks at the DeSoto Next Generation Solar Energy Center.
Photovoltaic (PV) cells, also called solar cells, convert light into electricity. In the 1980s and early 1990s, most photovoltaic modules were used to provide remote-area power supply, but from around 1995, industry efforts have focused increasingly on developing building integrated photovoltaics and photovoltaic power stations for grid connected applications.
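As a minimal sketch of how module ratings relate to sunlight (standard photovoltaic relations; the example numbers are illustrative assumptions, not manufacturer data):

\[ P = \eta \, G \, A \]

where \eta is the conversion efficiency, G the irradiance and A the module area. A 1.6 m2 module at 16% efficiency under the standard test irradiance of 1,000 W/m2 produces about 256 W, so a nominal 100 MW plant requires on the order of 400,000 such modules, before allowing for inverter losses and row spacing.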
Many solar photovoltaic power stations have been built, mainly in Europe.Denis Lenardic. Large-scale photovoltaic power plants ranking 1 – 50 PVresources.com, 2010. As of July 2012, the largest photovoltaic (PV) power plants in the world are the Agua Caliente Solar Project (USA, 247 MW), Charanka Solar Park (India, 214 MW), Golmud Solar Park (China, 200 MW), Perovo Solar Park (Russia 100 MW), Sarnia Photovoltaic Power Plant (Canada, 97 MW), Brandenburg-Briest Solarpark (Germany 91 MW), Solarpark Finow Tower (Germany 84.7 MW), Montalto di Castro Photovoltaic Power Station (Italy, 84.2 MW), Eggebek Solar Park (Germany 83.6 MW), Senftenberg Solarpark (Germany 82 MW), Finsterwalde Solar Park (Germany, 80.7 MW), Okhotnykovo Solar Park (Russia, 80 MW), Lopburi Solar Farm (Thailand 73.16 MW), Rovigo Photovoltaic Power Plant (Italy, 72 MW), and the Lieberose Photovoltaic Park (Germany, 71.8 MW).
There are also many large plants under construction. The Desert Sunlight Solar Farm under construction in Riverside County, California and Topaz Solar Farm being built in San Luis Obispo County, California are both 550 MW solar parks that will use thin-film solar photovoltaic modules made by First Solar. The Blythe Solar Power Project is a 500 MW photovoltaic station under construction in Riverside County, California. The California Valley Solar Ranch (CVSR) is a 250 megawatt (MW) solar photovoltaic power plant, which is being built by SunPower in the Carrizo Plain, northeast of California Valley. The 230 MW Antelope Valley Solar Ranch is a First Solar photovoltaic project which is under construction in the Antelope Valley area of the Western Mojave Desert, and due to be completed in 2013. The Mesquite Solar project is a photovoltaic solar power plant being built in Arlington, Maricopa County, Arizona, owned by Sempra Generation. Phase 1 will have a nameplate capacity of 150 megawatts.
Many of these plants are integrated with agriculture and some use innovative tracking systems that follow the sun's daily path across the sky to generate more electricity than conventional fixed-mounted systems. There are no fuel costs or emissions during operation of the power stations.
Wind power
thumb|right|Wind power: worldwide installed capacityGWEC, Global Wind Report Annual Market Update
thumb|right|Landowners in the US typically receive $3,000 to $5,000 per year in rental income from each wind turbine, while farmers continue to grow crops or graze cattle up to the foot of the turbines.American Wind Energy Association (2009). Annual Wind Industry Report, Year Ending 2008 pp. 9–10.
Some of the second-generation renewables, such as wind power, have high potential and have already realised relatively low production costs."Stabilizing Climate" (PDF) in Lester R. Brown, Plan B 2.0 Rescuing a Planet Under Stress and a Civilization in Trouble (NY: W.W. Norton & Co., 2006), p. 189.Clean Edge (2007). The Clean Tech Revolution... the costs of clean energy are declining (PDF) p.8. Global wind power installations increased by 35,800 MW in 2010, bringing total installed capacity up to 194,400 MW, a 22.5% increase on the 158,700 MW installed at the end of 2009. The increase for 2010 represents investments totalling €47.3 billion (US$65 billion), and for the first time more than half of all new wind power was added outside of the traditional markets of Europe and North America, mainly driven by the continuing boom in China, which accounted for nearly half of all installations at 16,500 MW. China now has 42,300 MW of wind power installed. Wind power accounts for approximately 19% of electricity generated in Denmark, 9% in Spain and Portugal, and 6% in Germany and the Republic of Ireland.New Report a Complete Analysis of the Global Offshore Wind Energy Industry and its Major Players In the Australian state of South Australia, wind power, championed by Premier Mike Rann (2002–2011), now comprises 26% of the state's electricity generation, edging out coal-fired power. At the end of 2011 South Australia, with 7.2% of Australia's population, had 54% of the nation's installed wind power capacity.Center for National Policy, Washington DC: What States Can Do, 2 April 2012 Wind power's share of worldwide electricity usage at the end of 2014 was 3.1%.http://www.ren21.net/wp-content/uploads/2015/07/REN12-GSR2015_Onlinebook_low1.pdf pg31
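Installed capacity and generated electricity are linked by the capacity factor, which is why hundreds of gigawatts of wind capacity correspond to only a few percent of world generation. As a rough reconciliation using the end-2014 figure of about 370 GW from the indicators table above, and assuming a fleet-average capacity factor of roughly 23% and world generation of roughly 24,000 TWh (both assumptions, not figures from the sources cited here):

\[ E \approx P \times CF \times 8760\,\mathrm{h} \approx 370\,\mathrm{GW} \times 0.23 \times 8760\,\mathrm{h} \approx 745\,\mathrm{TWh} \]

which is about 3% of world electricity, consistent with the 3.1% share quoted above.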
These are some of the largest wind farms in the world:
+ Large onshore wind farms (wind farm and current capacity in MW; notes and references follow each entry)
Gansu Wind Farm – 6,000 MWWatts, Jonathan & Huang, Cecily. Winds Of Change Blow Through China As Spending On Renewable Energy Soars, The Guardian, 19 March 2012, revised on 20 March 2012. Retrieved 4 January 2012.Xinhua: Jiuquan Wind Power Base Completes First Stage, Xinhua News Agency, 4 November 2010. Retrieved from ChinaDaily.com.cn website 3 January 2013.
Alta (Oak Creek-Mojave) – 1,320 MWTerra-Gen Press Release, 17 April 2012
Jaisalmer Wind Park – 1,064 MW. Started in August 2001, the Jaisalmer based facility crossed 1,000 MW capacity to achieve this milestone
Shepherds Flat Wind Farm – 845 MW
Roscoe Wind Farm – 782 MWE.ON Delivers 335-MW of Wind in Texas
Horse Hollow Wind Energy Center – 736 MWAWEA: U.S. Wind Energy Projects – Texas
Capricorn Ridge Wind Farm – 662 MWDrilling Down: What Projects Made 2008 Such a Banner Year for Wind Power?
Fântânele-Cogealac Wind Farm – 600 MWCEZ Group: The Largest Wind Farm in Europe Goes Into Trial Operation
Fowler Ridge Wind Farm – 600 MWAWEA: U.S. Wind Energy Projects – Indiana
Whitelee Wind Farm – 539 MWWhitelee Windfarm
As of 2014, the wind industry in the USA is able to produce more power at lower cost by using taller wind turbines with longer blades, capturing the faster winds at higher elevations. This has opened up new opportunities and in Indiana, Michigan, and Ohio, the price of power from wind turbines built 300 feet to 400 feet above the ground can now compete with conventional fossil fuels like coal. Prices have fallen to about 4 cents per kilowatt-hour in some cases and utilities have been increasing the amount of wind energy in their portfolio, saying it is their cheapest option.
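The gains from taller towers and longer blades follow from the standard expression for the power in the wind (a textbook relation, not a figure from the industry reporting above):

\[ P = \tfrac{1}{2} \, C_p \, \rho \, A \, v^3 \]

where \rho is the air density, A = \pi r^2 is the swept rotor area, v is the wind speed and C_p is the power coefficient, bounded by the Betz limit of about 0.59. Output grows with the square of blade length and the cube of wind speed, so a turbine with 10% longer blades in winds 10% faster captures roughly 1.1^2 × 1.1^3 ≈ 1.6 times as much power, which is why reaching the stronger winds at 300 to 400 feet lowers the cost per kilowatt-hour.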
Solar thermal power stations
right|thumb|Solar Towers from left: PS10, PS20.
Solar thermal power stations include the 354 megawatt (MW) Solar Energy Generating Systems power plant in the USA, Solnova Solar Power Station (Spain, 150 MW), Andasol solar power station (Spain, 100 MW), Nevada Solar One (USA, 64 MW), PS20 solar power tower (Spain, 20 MW), and the PS10 solar power tower (Spain, 11 MW). The 370 MW Ivanpah Solar Power Facility, located in California's Mojave Desert, is the world's largest solar-thermal power plant project currently under construction.Todd Woody. In California’s Mojave Desert, Solar-Thermal Projects Take Off Yale Environment 360, 27 October 2010. Many other plants are under construction or planned, mainly in Spain and the USA.REN21 (2008). Renewables 2007 Global Status Report (PDF) p. 12. In developing countries, three World Bank projects for integrated solar thermal/combined-cycle gas-turbine power plants in Egypt, Mexico, and Morocco have been approved.
Modern forms of bioenergy
thumb|left|Neat ethanol on the left (A), gasoline on the right (G) at a filling station in Brazil.
Global ethanol production for transport fuel tripled between 2000 and 2007 from 17 billion to more than 52 billion litres, while biodiesel expanded more than tenfold from less than 1 billion to almost 11 billion litres. Biofuels provide 1.8% of the world's transport fuel and recent estimates indicate a continued high growth. The main producing countries for transport biofuels are the USA, Brazil, and the EU.United Nations Environment Programme (2009). Assessing Biofuels , p.15.
Brazil has one of the largest renewable energy programs in the world, involving production of ethanol fuel from sugar cane, and ethanol now provides 18 percent of the country's automotive fuel. As a result of this and the exploitation of domestic deep water oil sources, Brazil, which for years had to import a large share of the petroleum needed for domestic consumption, recently reached complete self-sufficiency in liquid fuels.America and Brazil Intersect on Ethanol Renewable Energy Access, 15 May 2006.New Rig Brings Brazil Oil Self-Sufficiency Washington Post, 21 April 2006.
right|thumb|Information on pump, California
Nearly all the gasoline sold in the United States today is mixed with 10 percent ethanol, a mix known as E10,Erica Gies. As Ethanol Booms, Critics Warn of Environmental Effect The New York Times, 24 June 2010. and motor vehicle manufacturers already produce vehicles designed to run on much higher ethanol blends. Ford, DaimlerChrysler, and GM are among the automobile companies that sell flexible-fuel cars, trucks, and minivans that can use gasoline and ethanol blends ranging from pure gasoline up to 85% ethanol (E85). The challenge is to expand the market for biofuels beyond the farm states where they have been most popular to date. The Energy Policy Act of 2005, which set an escalating annual requirement for biofuel use through 2012, will also help to expand the market.Worldwatch Institute and Center for American Progress (2006). American energy: The renewable path to energy security (PDF)
The growing ethanol and biodiesel industries are providing jobs in plant construction, operations, and maintenance, mostly in rural communities. According to the Renewable Fuels Association, "the ethanol industry created almost 154,000 U.S. jobs in 2005 alone, boosting household income by $5.7 billion. It also contributed about $3.5 billion in tax revenues at the local, state, and federal levels".
Third-generation technologies
Third-generation renewable energy technologies are still under development and include advanced biomass gasification, biorefinery technologies, hot-dry-rock geothermal power, and ocean energy. Third-generation technologies are not yet widely demonstrated or have limited commercialization. Many are on the horizon and may have potential comparable to other renewable energy technologies, but still depend on attracting sufficient attention and research and development funding.
New bioenergy technologies
+Selected commercial cellulosic ethanol plants in the U.S. (company – location – feedstock)Decker, Jeff. Going Against the Grain: Ethanol from Lignocellulosics, Renewable Energy World, 22 January 2009.
Abengoa Bioenergy – Hugoton, KS – Wheat straw
BlueFire Ethanol – Irvine, CA – Multiple sources
Gulf Coast Energy – Mossy Head, FL – Wood waste
Mascoma – Lansing, MI – Wood
POET LLC – Emmetsburg, IA – Corn cobs
SunOpta – Little Falls, MN – Wood chips
Xethanol – Auburndale, FL – Citrus peels
According to the International Energy Agency, cellulosic ethanol biorefineries could allow biofuels to play a much bigger role in the future than organizations such as the IEA previously thought.International Energy Agency (2006). World Energy Outlook 2006 (PDF). Cellulosic ethanol can be made from plant matter composed primarily of inedible cellulose fibers that form the stems and branches of most plants. Crop residues (such as corn stalks, wheat straw and rice straw), wood waste, and municipal solid waste are potential sources of cellulosic biomass. Dedicated energy crops, such as switchgrass, are also promising cellulose sources that can be sustainably produced in many regions.Biotechnology Industry Organization (2007). Industrial Biotechnology Is Revolutionizing the Production of Ethanol Transportation Fuel pp. 3–4.
Ocean energy
Ocean energy is all forms of renewable energy derived from the sea including wave energy, tidal energy, river current, ocean current energy, offshore wind, salinity gradient energy and ocean thermal gradient energy.Ocean energy EPRI Ocean Energy Web Page
The Rance Tidal Power Station (240 MW) is the world's first tidal power station. The facility is located on the estuary of the Rance River, in Brittany, France. Opened on 26 November 1966, it is currently operated by Électricité de France, and is the largest tidal power station in the world, in terms of installed capacity.
First proposed more than thirty years ago, systems to harvest utility-scale electrical power from ocean waves have recently been gaining momentum as a viable technology. The potential for this technology is considered promising, especially on west-facing coasts with latitudes between 40 and 60 degrees:Jeff Scruggs and Paul Jacob. Harvesting Ocean Wave Energy, Science, Vol. 323, 27 February 2009, p. 1176.
In the United Kingdom, for example, the Carbon Trust recently estimated the extent of the economically viable offshore resource at 55 TWh per year, about 14% of current national demand. Across Europe, the technologically achievable resource has been estimated to be at least 280 TWh per year. In 2003, the U.S. Electric Power Research Institute (EPRI) estimated the viable resource in the United States at 255 TWh per year (6% of demand).
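For context on these resource estimates, the power carried by deep-water waves per metre of wave front is commonly approximated by a standard oceanographic relation (the worked numbers are illustrative assumptions):

\[ P \approx \frac{\rho \, g^2 \, H_{m0}^2 \, T_e}{64\pi} \]

where H_{m0} is the significant wave height and T_e is the wave energy period. A moderately energetic site with H_{m0} = 3 m and T_e = 8 s carries roughly 35 kW per metre of wave front before conversion losses, which is the kind of flux underlying the national estimates quoted above.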
Ocean Power Technologies currently has nine projects, completed or in development, off the coasts of the United Kingdom, United States, Spain and Australia to harness the rise and fall of waves. The current maximum power output is 1.5 MW (Reedsport, Oregon), with development underway for 100 MW (Coos Bay, Oregon).Projects Ocean Power Technologies Projects
Enhanced geothermal systems
Geothermal power development was under way in more than 40 countries, partially attributable to the development of new technologies, such as Enhanced Geothermal Systems.REN21 (2009). Renewables Global Status Report: 2009 Update pp. 12–13. The development of binary cycle power plants and improvements in drilling and extraction technology may enable enhanced geothermal systems over a much greater geographical range than "traditional" geothermal systems. Demonstration EGS projects are operational in the USA, Australia, Germany, France, and the United Kingdom.
Advanced solar concepts
Beyond the already established solar photovoltaics and solar thermal power technologies are such advanced solar concepts as the solar updraft tower or space-based solar power. These concepts have not yet been commercialized, and may never be.
The Solar updraft tower (SUT) is a renewable-energy power plant for generating electricity from low temperature solar heat. Sunshine heats the air beneath a very wide greenhouse-like roofed collector structure surrounding the central base of a very tall chimney tower. The resulting convection causes a hot air updraft in the tower by the chimney effect. This airflow drives wind turbines placed in the chimney updraft or around the chimney base to produce electricity. Plans for scaled-up versions of demonstration models will allow significant power generation, and may allow development of other applications, such as water extraction or distillation, and agriculture or horticulture.
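A commonly cited first-order estimate of updraft-tower output, drawn from the general literature on the concept rather than from the sources cited here, is:

\[ P \approx \eta_{coll} \, \eta_{turb} \, \frac{g H}{c_p T_0} \, G \, A_{coll} \]

where G is the solar irradiance, A_{coll} the collector area, H the chimney height, T_0 the ambient temperature and c_p the specific heat of air. The term gH/(c_p T_0) is the ideal chimney efficiency, only about 3% even for a 1,000 m tower, which is why proposed plants combine very tall chimneys with collectors several kilometres across.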
A more advanced version of a similarly themed technology is the atmospheric vortex engine (AVE), which aims to replace large physical chimneys with a vortex of air created by a shorter, less-expensive structure.
Space-based solar power (SBSP) is the concept of collecting solar power in space (using an "SPS", that is, a "solar-power satellite" or a "satellite power system") for use on Earth. It has been in research since the early 1970s. SBSP would differ from current solar collection methods in that the means used to collect energy would reside on an orbiting satellite instead of on Earth's surface. Some projected benefits of such a system are a higher collection rate and a longer collection period due to the lack of a diffusing atmosphere and night time in space.
Renewable energy industry
thumb|A Vestas wind turbine
thumb|Monocrystalline solar cell
Total investment in renewable energy reached $211 billion in 2010, up from $160 billion in 2009. The top countries for investment in 2010 were China, Germany, the United States, Italy, and Brazil. Continued growth for the renewable energy sector is expected and promotional policies helped the industry weather the 2009 economic crisis better than many other sectors.Joel Makower, Ron Pernick and Clint Wilder (2009). [http://www.cleanedge.com/reports/pdf/Trends2009.pdf Clean Energy Trends 2009], Clean Edge, pp. 1–4.
Wind power companies
Vestas (from Denmark) is the world's top wind turbine manufacturer in terms of percentage of market volume, and Sinovel (from China) is in second place. Together Vestas and Sinovel delivered 10,228 MW of new wind power capacity in 2010, and their market share was 25.9 percent. GE Energy (USA) was in third place, closely followed by Goldwind, another Chinese supplier. German Enercon ranks fifth in the world, and is followed in sixth place by Indian-based Suzlon.
Photovoltaic market trends
The solar PV market has been growing for the past few years. According to the solar PV research company PVinsights, worldwide shipment of solar modules in 2011 was around 25 GW, and year-over-year shipment growth was around 40%. The top five solar module suppliers in 2011 were, in order, Suntech, First Solar, Yingli, Trina, and Sungen; together they held a 51.3% market share of solar modules, according to PVinsights' market intelligence report.
Top 10 PV module suppliers in 2013 (rank, company, change from 2012, country):Renewables 2012 Global Status Report
1. Yingli Green Energy (–) China
2. Trina Solar (+1) China
3. Sharp Solar (+3) Japan
4. Canadian Solar (–) Canada
5. Jinko Solar (+3) China
6. ReneSola (+7) China
7. First Solar (−2) China
8. Hanwha Solarone (+2) South Korea
9. Kyocera (+5) Japan
10. JA Solar (−3) China
The PV industry has seen drops in module prices since 2008. In late 2011, factory-gate prices for crystalline-silicon photovoltaic modules dropped below the $1.00/W mark. The $1.00/W installed cost is often regarded in the PV industry as marking the achievement of grid parity for PV. These reductions have taken many stakeholders, including industry analysts, by surprise, and perceptions of current solar power economics often lag behind reality. Some stakeholders still have the perspective that solar PV remains too costly on an unsubsidized basis to compete with conventional generation options. Yet technological advancements, manufacturing process improvements, and industry re-structuring mean that further price reductions are likely in coming years.
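A simplified illustration shows why $1.00/W is treated as a grid-parity marker; all numbers below are assumptions made for the arithmetic, not figures from the market reports cited above. Ignoring financing and maintenance, the lifetime cost per kilowatt-hour of an installed system is roughly its capital cost divided by its lifetime output:

\[ \text{cost per kWh} \approx \frac{C}{CF \times 8760\,\mathrm{h/yr} \times N} = \frac{\$1.00/\mathrm{W}}{0.15 \times 8760 \times 25\,\mathrm{h}} \approx \$0.03/\mathrm{kWh} \]

assuming a 15% capacity factor and a 25-year life. Real systems add balance-of-system, financing and O&M costs on top of this, but the calculation indicates why installed costs near $1/W bring PV within range of retail electricity prices in sunny countries.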
+ Top 10 PV countries in 2014 (MW)
Total capacity:
1. Germany 38,200
2. China 28,199
3. Japan 23,300
4. Italy 18,460
5. United States 18,280
6. France 5,660
7. Spain 5,358
8. UK 5,104
9. Australia 4,136
10. Belgium 3,074
Added capacity:
1. China 10,560
2. Japan 9,700
3. United States 6,201
4. UK 2,273
5. Germany 1,900
6. France 927
7. Australia 910
8. South Korea 909
9. South Africa 800
10. India 616
Data: IEA-PVPS, Snapshot of Global PV 1992–2014 report, March 2015. See also the section on deployment by country for a complete and continuously updated list.
Non-technical barriers to acceptance
Many energy markets, institutions, and policies have been developed to support the production and use of fossil fuels. Newer and cleaner technologies may offer social and environmental benefits, but utility operators often reject renewable resources because they are trained to think only in terms of big, conventional power plants. Consumers often ignore renewable power systems because they are not given accurate price signals about electricity consumption. Intentional market distortions (such as subsidies), and unintentional market distortions (such as split incentives) may work against renewables.Benjamin K. Sovacool. "Rejecting Renewables: The Socio-technical Impediments to Renewable Electricity in the United States," Energy Policy, 37(11) (November 2009), p. 4500. Benjamin K. Sovacool has argued that "some of the most surreptitious, yet powerful, impediments facing renewable energy and energy efficiency in the United States are more about culture and institutions than engineering and science".Benjamin K. Sovacool. "The Cultural Barriers to Renewable Energy in the United States," Technology in Society, 31(4) (November 2009), p. 372.
The obstacles to the widespread commercialization of renewable energy technologies are primarily political, not technical,Mark Z. Jacobson and Mark A. Delucchi. A Path to Sustainable Energy by 2030, Scientific American, November 2009, p. 45. and there have been many studies which have identified a range of "non-technical barriers" to renewable energy use.National Renewable Energy Laboratory (2006). Nontechnical Barriers to Solar Energy Use: Review of Recent Literature, Technical Report, NREL/TP-520-40116, September, 30 pages.United Nations Department of Economic and Social Affairs, (2005). Increasing Global Renewable Energy Market Share: Recent Trends and Perspectives Final Report. These barriers are impediments which put renewable energy at a marketing, institutional, or policy disadvantage relative to other forms of energy. Key barriers include:
Difficulty overcoming established energy systems, which includes difficulty introducing innovative energy systems, particularly for distributed generation such as photovoltaics, because of technological lock-in, electricity markets designed for centralized power plants, and market control by established operators. As the Stern Review on the Economics of Climate Change points out:
National grids are usually tailored towards the operation of centralised power plants and thus favour their performance. Technologies that do not easily fit into these networks may struggle to enter the market, even if the technology itself is commercially viable. This applies to distributed generation as most grids are not suited to receive electricity from many small sources. Large-scale renewables may also encounter problems if they are sited in areas far from existing grids.HM Treasury (2006). Stern Review on the Economics of Climate Change p. 355.
Lack of government policy support, which includes the lack of policies and regulations supporting deployment of renewable energy technologies and the presence of policies and regulations hindering renewable energy development and supporting conventional energy development. Examples include subsidies for fossil-fuels, insufficient consumer-based renewable energy incentives, government underwriting for nuclear plant accidents, and complex zoning and permitting processes for renewable energy.
Lack of information dissemination and consumer awareness.
Higher capital cost of renewable energy technologies compared with conventional energy technologies.
Inadequate financing options for renewable energy projects, including insufficient access to affordable financing for project developers, entrepreneurs and consumers.
Imperfect capital markets, which include failure to internalize all costs of conventional energy (e.g., effects of air pollution, risk of supply disruption)Matthew L. Wald. Fossil Fuels’ Hidden Cost Is in Billions, Study Says, The New York Times, 20 October 2009. and failure to internalize all benefits of renewable energy (e.g., cleaner air, energy security).
Inadequate workforce skills and training, which includes lack of adequate scientific, technical, and manufacturing skills required for renewable energy production; lack of reliable installation, maintenance, and inspection services; and failure of the educational system to provide adequate training in new technologies.
Lack of adequate codes, standards, utility interconnection, and net-metering guidelines.
Poor public perception of renewable energy system aesthetics.
Lack of stakeholder/community participation and co-operation in energy choices and renewable energy projects.
With such a wide range of non-technical barriers, there is no "silver bullet" solution to drive the transition to renewable energy. Ideally, several different types of policy instruments are needed to complement each other and overcome the different types of barriers.Diesendorf, Mark (2007). Greenhouse Solutions with Sustainable Energy, UNSW Press, p. 293.
A policy framework must be created that will level the playing field and redress the imbalance of traditional approaches associated with fossil fuels. The policy landscape must keep pace with broad trends within the energy sector, as well as reflecting specific social, economic and environmental priorities.IEA Renewable Energy Working Party (2002). Renewable Energy... into the mainstream, p. 48.
Public policy landscape
Public policy has a role to play in renewable energy commercialization because the free market system has some fundamental limitations. As the Stern Review points out:
In a liberalised energy market, investors, operators and consumers should face the full cost of their decisions. But this is not the case in many economies or energy sectors. Many policies distort the market in favour of existing fossil fuel technologies.
The International Solar Energy Society has stated that "historical incentives for the conventional energy resources continue even today to bias markets by burying many of the real societal costs of their use".Donald W. Aitken. Transitioning to a Renewable Energy Future, International Solar Energy Society, January 2010, p. 4.
Fossil-fuel energy systems have different production, transmission, and end-use costs and characteristics than do renewable energy systems, and new promotional policies are needed to ensure that renewable systems develop as quickly and broadly as is socially desirable.
Lester Brown states that the market "does not incorporate the indirect costs of providing goods or services into prices, it does not value nature's services adequately, and it does not respect the sustainable-yield thresholds of natural systems". It also favors the near term over the long term, thereby showing limited concern for future generations.Brown, L.R. (2006). Plan B 2.0 Rescuing a Planet Under Stress and a Civilization in Trouble W.W. Norton & Co, pp. 228–232. Tax and subsidy shifting can help overcome these problems,Brown, L.R. (2006). Plan B 2.0 Rescuing a Planet Under Stress and a Civilization in Trouble W.W. Norton & Co, pp. 234–235. though it is also problematic to combine the different international normative regimes regulating this issue.
Shifting taxes
Tax shifting has been widely discussed and endorsed by economists. It involves lowering income taxes while raising levies on environmentally destructive activities, in order to create a more responsive market. For example, a tax on coal that included the increased health care costs associated with breathing polluted air, the costs of acid rain damage, and the costs of climate disruption would encourage investment in renewable technologies. Several Western European countries are already shifting taxes in a process known there as environmental tax reform.
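The mechanics of a revenue-neutral shift can be illustrated with simple arithmetic. All figures below are invented for the example and are not drawn from any of the programmes described in this section.

```python
# Illustrative revenue-neutral tax shift: income-tax revenue given up is recovered
# through a levy on coal-fired generation. All numbers are assumptions.

income_tax_cut_usd = 2.0e9        # $ per year of income tax forgone (assumed)
coal_generation_kwh = 50e9        # assumed annual coal-fired generation covered by the levy

levy_per_kwh = income_tax_cut_usd / coal_generation_kwh
print(f"Revenue-neutral coal levy: {levy_per_kwh * 100:.1f} cents/kWh")

# If coal power previously sold at an assumed 6 cents/kWh, the levy raises it to:
print(f"Coal price including levy: {6.0 + levy_per_kwh * 100:.1f} cents/kWh")
```

Total tax revenue is unchanged, but the relative price of coal-fired electricity rises, which is the mechanism the paragraph above describes.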
In 2001, Sweden launched a new 10-year environmental tax shift designed to convert 30 billion kronor ($3.9 billion) of income taxes to taxes on environmentally destructive activities. Other European countries with significant tax reform efforts are France, Italy, Norway, Spain, and the United Kingdom. Asia's two leading economies, Japan and China, are considering carbon taxes.
Shifting subsidies
Just as there is a need for tax shifting, there is also a need for subsidy shifting. Subsidies are not inherently bad, as many technologies and industries have emerged through government subsidy schemes. The Stern Review explains that of 20 key innovations from the past 30 years, only one of the 14 was funded entirely by the private sector and nine were totally publicly funded.HM Treasury (2006). Stern Review on the Economics of Climate Change p. 362. In terms of specific examples, the Internet was the result of publicly funded links among computers in government laboratories and research institutes. And the combination of the federal tax deduction and a robust state tax deduction in California helped to create the modern wind power industry.
Lester Brown has argued that "a world facing the prospect of economically disruptive climate change can no longer justify subsidies to expand the burning of coal and oil. Shifting these subsidies to the development of climate-benign energy sources such as wind, solar, biomass, and geothermal power is the key to stabilizing the earth's climate." The International Solar Energy Society advocates "leveling the playing field" by redressing the continuing inequities in public subsidies of energy technologies and R&D, in which the fossil fuel and nuclear power receive the largest share of financial support.Donald W. Aitken. Transitioning to a Renewable Energy Future, International Solar Energy Society, January 2010, p. 6.
Some countries are eliminating or reducing climate-disrupting subsidies, and Belgium, France, and Japan have phased out all subsidies for coal. Germany is reducing its coal subsidy. The subsidy dropped from $5.4 billion in 1989 to $2.8 billion in 2002, and in the process Germany lowered its coal use by 46 percent. China cut its coal subsidy from $750 million in 1993 to $240 million in 1995 and more recently has imposed a high-sulfur coal tax. However, the United States has been increasing its support for the fossil fuel and nuclear industries.
In November 2011, an IEA report entitled Deploying Renewables 2011 said "subsidies in green energy technologies that were not yet competitive are justified in order to give an incentive to investing into technologies with clear environmental and energy security benefits". The IEA's report disagreed with claims that renewable energy technologies are only viable through costly subsidies and not able to produce energy reliably to meet demand.
A fair and efficient imposition of subsidies for renewable energy aimed at sustainable development, however, requires coordination and regulation at a global level, as subsidies granted in one country can easily disrupt the industries and policies of others, underlining the relevance of this issue for the World Trade Organization.
Renewable energy targets
Setting national renewable energy targets can be an important part of a renewable energy policy and these targets are usually defined as a percentage of the primary energy and/or electricity generation mix. For example, the European Union has prescribed an indicative renewable energy target of 12 per cent of the total EU energy mix and 22 per cent of electricity consumption by 2010. National targets for individual EU Member States have also been set to meet the overall target. Other developed countries with defined national or regional targets include Australia, Canada, Israel, Japan, Korea, New Zealand, Norway, Singapore, Switzerland, and some US States.United Nations Environment Program (2006). Changing climates: The Role of Renewable Energy in a Carbon-constrained World pp. 14–15.
National targets are also an important component of renewable energy strategies in some developing countries. Developing countries with renewable energy targets include China, India, Indonesia, Malaysia, the Philippines, Thailand, Brazil, Egypt, Mali, and South Africa. The targets set by many developing countries are quite modest when compared with those in some industrialized countries.
Renewable energy targets in most countries are indicative and nonbinding but they have assisted government actions and regulatory frameworks. The United Nations Environment Program has suggested that making renewable energy targets legally binding could be an important policy tool to achieve higher renewable energy market penetration.
Levelling the playing field
The IEA has identified three actions which will allow renewable energy and other clean energy technologies to "more effectively compete for private sector capital".
"First, energy prices must appropriately reflect the "true cost" of energy (e.g. through carbon pricing) so that the positive and negative impacts of energy production and consumption are fully taken into account". Example: New UK nuclear plants cost £92.50/MWh, whereas offshore wind farms in the UK are supported with €74.2/MWhErin Gill. "France & UK offshore costs higher than average" Windpower Offshore, 28 March 2013. Accessed: 22 October 2013. at a price of £150 in 2011 falling to £130 per MWh in 2022.Christopher Willow & Bruce Valpy. "Offshore Wind Forecasts of future costs and benefits – June 2011" Renewable UK, June 2011. Accessed: 22 October 2013. In Denmark, the price can be €84/MWh."No consensus on offshore costs" Windpower Monthly, 1 September 2009. Accessed: 22 October 2013.
"Second, inefficient fossil fuel subsidies must be removed, while ensuring that all citizens have access to affordable energy".
"Third, governments must develop policy frameworks that encourage private sector investment in lower-carbon energy options".
Green stimulus programs
In response to the global financial crisis in the late 2000s, the world's major governments made "green stimulus" programs one of their main policy instruments for supporting economic recovery. A portion of this green stimulus funding was allocated to renewable energy and energy efficiency, to be spent mainly in 2010 and 2011.REN21 (2010). Renewables 2010 Global Status Report p. 27.
Energy sector regulation
Public policy determines the extent to which renewable energy (RE) is to be incorporated into a developed or developing country's generation mix. Energy sector regulators implement that policy—thus affecting the pace and pattern of RE investments and connections to the grid. Energy regulators often have authority to carry out a number of functions that have implications for the financial feasibility of renewable energy projects. Such functions include issuing licenses, setting performance standards, monitoring the performance of regulated firms, determining the price level and structure of tariffs, establishing uniform systems of accounts, arbitrating stakeholder disputes (like interconnection cost allocations), performing management audits, developing agency human resources (expertise), reporting sector and commission activities to government authorities, and coordinating decisions with other government agencies. Thus, regulators make a wide range of decisions that affect the financial outcomes associated with RE investments. In addition, the sector regulator is in a position to give advice to the government regarding the full implications of focusing on climate change or energy security. The energy sector regulator is the natural advocate for efficiency and cost-containment throughout the process of designing and implementing RE policies. Since policies are not self-implementing, energy sector regulators become a key facilitator (or blocker) of renewable energy investments.Frequently Asked Questions on Renewable Energy and Energy Efficiency, Body of Knowledge on Infrastructure Regulation,
Energy transition in Germany
thumb|Photovoltaic array and wind turbines at the Schneebergerhof wind farm in the German state of Rheinland-Pfalz
thumbnail|right|Market share of Germany's power generation, 2014. Germany's Electricity Mix 2014
The Energiewende (German for energy transition) is the transition by Germany to a low carbon, environmentally sound, reliable, and affordable energy supply. The new system will rely heavily on renewable energy (particularly wind, photovoltaics, and biomass), energy efficiency, and energy demand management. Most if not all existing coal-fired generation will need to be retired. The phase-out of Germany's fleet of nuclear reactors, to be complete by 2022, is a key part of the program.
Legislative support for the Energiewende was passed in late 2010 and includes greenhouse gas (GHG) reductions of 80–95% by 2050 (relative to 1990) and a renewable energy target of 60% by 2050. These targets are ambitious. One Berlin policy institute noted that "while the German approach is not unique worldwide, the speed and scope of the Energiewende are exceptional". The Energiewende also seeks greater transparency in relation to national energy policy formation.
Germany has made significant progress on its GHG emissions reduction target, achieving a 27% decrease between 1990 and 2014. However, Germany will need to maintain an average GHG emissions abatement rate of 3.5% per annum to reach its Energiewende goal, equal to the maximum historical value thus far.
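The 3.5% figure can be checked with a back-of-envelope calculation, assuming the 80% end of the 2050 target range and a constant percentage reduction in each year from 2014 onwards; both assumptions are simplifications made for the sketch below.

```python
# Back-of-envelope check of the required average abatement rate quoted above.
# Assumes the 80% reduction target (the lower bound of the 80-95% range) and a
# constant year-on-year percentage decline between 2014 and 2050.

achieved_by_2014 = 0.27        # 27% below the 1990 level
target_by_2050 = 0.80          # 80% below the 1990 level
years = 2050 - 2014

remaining_2014 = 1 - achieved_by_2014     # emissions in 2014 as a share of 1990
remaining_2050 = 1 - target_by_2050       # allowed emissions in 2050 as a share of 1990

annual_rate = 1 - (remaining_2050 / remaining_2014) ** (1 / years)
print(f"Required average annual reduction: {annual_rate:.1%}")   # prints about 3.5%
```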
Germany spends €1.5 billion per annum on energy research (2013 figure) in an effort to solve the technical and social issues raised by the transition. This includes a number of computer modelling studies that have confirmed the feasibility of the Energiewende at a cost similar to business-as-usual, provided that carbon is adequately priced.
These initiatives go well beyond European Union legislation and the national policies of other European states. The policy objectives have been embraced by the German federal government and have resulted in a huge expansion of renewables, particularly wind power. Germany's share of renewables increased from around 5% in 1999 to 22.9% in 2012, surpassing the OECD average of 18%.
Producers have been guaranteed a fixed feed-in tariff for 20 years, giving them a predictable income. Energy co-operatives have been created, and efforts were made to decentralize control and profits. The large energy companies have a disproportionately small share of the renewables market. However, in some cases poor investment design has caused bankruptcies and low returns, and unrealistic promises have proved to be far from reality.
Several nuclear power plants have already been closed, and the nine remaining plants will close earlier than originally planned, by 2022.
One factor that has inhibited efficient employment of new renewable energy has been the lack of an accompanying investment in power infrastructure to bring the power to market. It is believed 8,300 km of power lines must be built or upgraded. The different German states have varying attitudes to the construction of new power lines. Industry has had its rates frozen, and so the increased costs of the Energiewende have been passed on to consumers, whose electricity bills have risen.
Voluntary market mechanisms for renewable electricity
Voluntary markets, also referred to as green power markets, are driven by consumer preference. Voluntary markets allow a consumer to choose to do more than policy decisions require and reduce the environmental impact of their electricity use. Voluntary green power products must offer a significant benefit and value to buyers to be successful. Benefits may include zero or reduced greenhouse gas emissions, other pollution reductions, or other environmental improvements at power stations.
The driving forces behind voluntary green electricity within the EU are the liberalized electricity markets and the RES Directive. According to the directive, the EU Member States must ensure that the origin of electricity produced from renewables can be guaranteed, and therefore a "guarantee of origin" must be issued (article 15). Environmental organisations are using the voluntary market to create new renewables and to improve the sustainability of existing power production. In the US the main tool to track and stimulate voluntary actions is the Green-e program managed by the Center for Resource Solutions. In Europe the main voluntary tool used by NGOs to promote sustainable electricity production is the EKOenergy label.
Recent developments
thumb|600px|centre|Projected renewable energy investment growth globally (2007–2017). Makower, J., Pernick, R. and Wilder, C. (2008). Clean Energy Trends 2008
A number of events in 2006 pushed renewable energy up the political agenda, including the US mid-term elections in November, which confirmed clean energy as a mainstream issue. Also in 2006, the Stern Review made a strong economic case for investing in low carbon technologies now, and argued that economic growth need not be incompatible with cutting energy consumption.United Nations Environment Programme and New Energy Finance Ltd. (2007), p. 11. According to a trend analysis from the United Nations Environment Programme, climate change concerns coupled with recent high oil pricesHigh oil price hits Wall St ABC News, 16 October 2007. Retrieved on 15 January 2008. and increasing government support are driving increasing rates of investment in the renewable energy and energy efficiency industries.United Nations Environment Programme and New Energy Finance Ltd. (2007), p. 3.
Investment capital flowing into renewable energy reached a record US$77 billion in 2007, with the upward trend continuing in 2008. The OECD still dominates, but there is now increasing activity from companies in China, India and Brazil. Chinese companies were the second largest recipient of venture capital in 2006 after the United States. In the same year, India was the largest net buyer of companies abroad, mainly in the more established European markets.
New government spending, regulation, and policies helped the industry weather the 2009 economic crisis better than many other sectors. Most notably, U.S. President Barack Obama's American Recovery and Reinvestment Act of 2009 included more than $70 billion in direct spending and tax credits for clean energy and associated transportation programs. This policy-stimulus combination represents the largest federal commitment in U.S. history for renewables, advanced transportation, and energy conservation initiatives. Based on these new rules, many more utilities strengthened their clean-energy programs. Clean Edge suggests that the commercialization of clean energy will help countries around the world deal with the current economic malaise. The once-promising solar energy company Solyndra became the subject of a political controversy involving U.S. President Barack Obama's administration's authorization of a $535 million loan guarantee to the corporation in 2009 as part of a program to promote alternative energy growth.Solar Energy Company Touted By Obama Goes Bankrupt, ABC News, 31 August 2011Obama's Crony Capitalism, Reason, 9 September 2011Bankrupt solar company with fed backing has cozy ties to Obama admin, The Daily Caller, 1 September 2011 The company ceased all business activity, filed for Chapter 11 bankruptcy, and laid off nearly all of its employees in early September 2011.Solyndra files for bankruptcy, looks for buyer . Bloomberg Businessweek. Retrieved: 20 September 2011.
In his 24 January 2012, State of the Union address, President Barack Obama restated his commitment to renewable energy. Obama said that he "will not walk away from the promise of clean energy." Obama called for a commitment by the Defense Department to purchase 1,000 MW of renewable energy. He also mentioned the long-standing Interior Department commitment to permit 10,000 MW of renewable energy projects on public land in 2012.
As of 2012, renewable energy plays a major role in the energy mix of many countries globally. Renewables are becoming increasingly economic in both developing and developed countries. Prices for renewable energy technologies, primarily wind power and solar power, continued to drop, making renewables competitive with conventional energy sources. Without a level playing field, however, high market penetration of renewables is still dependent on robust promotional policies. Fossil fuel subsidies, which are far higher than those for renewable energy, remain in place and need to be phased out quickly.REN21. (2013). Renewables 2013 Global Status Report, (Paris: REN21 Secretariat), ISBN 978-3-9815934-0-2.
United Nations' Secretary-General Ban Ki-moon has said that "renewable energy has the ability to lift the poorest nations to new levels of prosperity". In October 2011, he "announced the creation of a high-level group to drum up support for energy access, energy efficiency and greater use of renewable energy. The group is to be co-chaired by Kandeh Yumkella, the chair of UN Energy and director general of the UN Industrial Development Organisation, and Charles Holliday, chairman of Bank of America".
Worldwide use of solar power and wind power continued to grow significantly in 2012. Solar electricity consumption increased by 58 percent, to 93 terawatt-hours (TWh). Use of wind power in 2012 increased by 18.1 percent, to 521.3 TWh. Global solar and wind energy installed capacities continued to expand even though new investments in these technologies declined during 2012. Worldwide investment in solar power in 2012 was $140.4 billion, an 11 percent decline from 2011, and wind power investment was down 10.1 percent, to $80.3 billion. But due to lower production costs for both technologies, total installed capacities grew sharply. This investment decline, but growth in installed capacity, may again occur in 2013.Sally Bakewell. "Clean Energy Investment Headed for Second Annual Decline" Bloomberg Businessweek, 14 October 2013. Accessed: 17 October 2013."Global Trends in Renewable Energy Investment 2013" Bloomberg New Energy Finance, 12 June 2013. Accessed: 17 October 2013. Analysts expect the market to triple by 2030."Renewables investment set to triple by 2030" BusinessGreen, 23 April 2013. Accessed: 17 October 2013. In 2015, investment in renewables exceeded that in fossil fuels.
100% renewable energy
The drive to use 100% renewable energy, for electricity, transport, or even total primary energy supply globally, has been motivated by global warming and other ecological as well as economic concerns. The Intergovernmental Panel on Climate Change has said that there are few fundamental technological limits to integrating a portfolio of renewable energy technologies to meet most of total global energy demand. In reviewing 164 recent scenarios of future renewable energy growth, the report noted that the majority expected renewable sources to supply more than 17% of total energy by 2030, and 27% by 2050; the highest forecast projected 43% supplied by renewables by 2030 and 77% by 2050. Renewable energy use has grown much faster than even advocates anticipated. At the national level, at least 30 nations around the world already have renewable energy contributing more than 20% of energy supply.
Mark Z. Jacobson, professor of civil and environmental engineering at Stanford University and director of its Atmosphere and Energy Program says producing all new energy with wind power, solar power, and hydropower by 2030 is feasible and existing energy supply arrangements could be replaced by 2050. Barriers to implementing the renewable energy plan are seen to be "primarily social and political, not technological or economic". Jacobson says that energy costs with a wind, solar, water system should be similar to today's energy costs.
Similarly, in the United States, the independent National Research Council has noted that "sufficient domestic renewable resources exist to allow renewable electricity to play a significant role in future electricity generation and thus help confront issues related to climate change, energy security, and the escalation of energy costs … Renewable energy is an attractive option because renewable resources available in the United States, taken collectively, can supply significantly greater amounts of electricity than the total current or projected domestic demand."
The most significant barriers to the widespread implementation of large-scale renewable energy and low carbon energy strategies are primarily political and not technological. According to the 2013 Post Carbon Pathways report, which reviewed many international studies, the key roadblocks are: climate change denial, the fossil fuels lobby, political inaction, unsustainable energy consumption, outdated energy infrastructure, and financial constraints.
Energy efficiency
Moving towards energy sustainability will require changes not only in the way energy is supplied, but in the way it is used, and reducing the amount of energy required to deliver various goods or services is essential. Opportunities for improvement on the demand side of the energy equation are as rich and diverse as those on the supply side, and often offer significant economic benefits.InterAcademy Council (2007). Lighting the way: Toward a sustainable energy future
A sustainable energy economy requires commitments to both renewables and efficiency. Renewable energy and energy efficiency are said to be the "twin pillars" of sustainable energy policy. The American Council for an Energy-Efficient Economy has explained that both resources must be developed in order to stabilize and reduce carbon dioxide emissions:American Council for an Energy-Efficient Economy (2007). The Twin Pillars of Sustainable Energy: Synergies between Energy Efficiency and Renewable Energy Technology and Policy, Report E074.
Efficiency is essential to slowing the energy demand growth so that rising clean energy supplies can make deep cuts in fossil fuel use. If energy use grows too fast, renewable energy development will chase a receding target. Likewise, unless clean energy supplies come online rapidly, slowing demand growth will only begin to reduce total emissions; reducing the carbon content of energy sources is also needed.
The IEA has stated that renewable energy and energy efficiency policies are complementary tools for the development of a sustainable energy future, and should be developed together instead of being developed in isolation.International Energy Agency (2007). Global Best Practice in Renewable Energy Policy Making
See also
Lists
Lists about renewable energy
List of energy storage projects
List of large wind farms
List of notable renewable energy organizations
List of renewable energy topics by country
Topics
Catching the Sun (film)
Clean Energy Trends
Cost of electricity by source
Ecotax
EKOenergy
Energy security and renewable technology
Environmental tariff
Feed-in Tariff
International Renewable Energy Agency
PV financial incentives
Rocky Mountain Institute
The Clean Tech Revolution
The Third Industrial Revolution
World Council for Renewable Energy
People
Andrew Blakers
Michael Boxwell
Richard L. Crowther
James Dehlsen
Mark Diesendorf
Rolf Disch
Peter Droege
David Faiman
Hans-Josef Fell
Harrison Fraker
Chris Goodall
Al Gore
Michael Grätzel
Martin Green
Jan Hamrin
Denis Hayes
Tetsunari Iida
Mark Z. Jacobson
Stefan Krauter
Jeremy Leggett
Richard Levine
Amory Lovins
Gaspar Makale
Joel Makower
Eric Martinot
David Mills
Huang Ming
Leonard L. Northrup Jr.
Arthur Nozik
Monica Oliphant
Stanford R. Ovshinsky
Luis Palmer
Alan Pears
Hélène Pelosse
Ron Pernick
Phil Radford
Jeremy Rifkin
Hermann Scheer
Zhengrong Shi
Benjamin K. Sovacool
Thomas H. Stoner, Jr.
Peter Taylor
Félix Trombe
John Twidell
Martin Vosseler
Stuart Wenham
Clint Wilder
John I. Yellott
References
Bibliography
Aitken, Donald W. (2010). Transitioning to a Renewable Energy Future, International Solar Energy Society, January, 54 pages.
Armstrong, Robert C., Catherine Wolfram, Robert Gross, Nathan S. Lewis, and M.V. Ramana et al. The Frontiers of Energy, Nature Energy, Vol 1, 11 January 2016.
EurObserv'ER (2012). The state of renewable energies in Europe, 250 pages.
HM Treasury (2006). Stern Review on the Economics of Climate Change, 575 pages.
International Council for Science (c2006). Discussion Paper by the Scientific and Technological Community for the 14th session of the United Nations Commission on Sustainable Development, 17 pages.
International Energy Agency (2006). World Energy Outlook 2006: Summary and Conclusions, OECD, 11 pages.
International Energy Agency (2007). Renewables in global energy supply: An IEA facts sheet, OECD, 34 pages.
International Energy Agency (2008). Deploying Renewables: Principles for Effective Policies, OECD, 8 pages.
International Energy Agency (2011). Deploying Renewables 2011: Best and Future Policy Practice, OECD.
International Energy Agency (2011). Solar Energy Perspectives, OECD.
Lovins, Amory B. (2011). Reinventing Fire: Bold Business Solutions for the New Energy Era, Chelsea Green Publishing, 334 pages.
Makower, Joel, and Ron Pernick and Clint Wilder (2009). Clean Energy Trends 2009, Clean Edge.
National Renewable Energy Laboratory (2006). Non-technical Barriers to Solar Energy Use: Review of Recent Literature, Technical Report, NREL/TP-520-40116, September, 30 pages.
Pernick, Ron and Wilder, Clint (2012). Clean Tech Nation: How the U.S. Can Lead in the New Global Economy, HarperCollins.
REN21 (2009). Renewables Global Status Report: 2009 Update, Paris: REN21 Secretariat.
REN21 (2010). Renewables 2010 Global Status Report, Paris: REN21 Secretariat, 78 pages.
REN21 (2011). Renewables 2011: Global Status Report, Paris: REN21 Secretariat.
REN21 (2012). Renewables 2012: Global Status Report, Paris: REN21 Secretariat.
REN21 (2013). Renewables 2013: Global Status Report, (Paris: REN21 Secretariat), ISBN 978-3-9815934-0-2.
REN21 (2016). Renewables 2016 Global Status Report: key findings, Renewable Energy Policy Network for the 21st century.
External links
Investing: Green technology has big growth potential, LA Times, 2011
Global Renewable Energy: Policies and Measures
Missing the Market Meltdown
Bureau of Land Management 2012 Renewable Energy Priority Projects
Category:Energy policy
Category:Renewable resources
Category:Environmental social science
Videoconferencing
thumb|A Tandberg T3 high resolution telepresence room in use (2008).
thumb|Indonesian and U.S. students participating in an educational videoconference (2010).
Videoconferencing (VC) is the conduct of a videoconference (also known as a video conference or videoteleconference) by a set of telecommunication technologies which allow two or more locations to communicate by simultaneous two-way video and audio transmissions. It has also been called 'visual collaboration' and is a type of groupware.
Videoconferencing differs from videophone calls in that it is designed to serve a conference or multiple locations rather than individuals.Mulbach et al, 1995. pg. 291. It is an intermediate form of videotelephony, first used commercially in Germany during the late 1930s and later in the United States during the early 1970s as part of AT&T's development of Picturephone technology.
With the introduction of relatively low cost, high capacity broadband telecommunication services in the late 1990s, coupled with powerful computing processors and video compression techniques, videoconferencing has made significant inroads in business, education, medicine and media.
History
thumb|Multiple user videoconferencing first being demonstrated with Stanford Research Institute's NLS computer technology (1968).
Videoconferencing uses audio and video telecommunications to bring people at different sites together. This can be as simple as a conversation between people in private offices (point-to-point) or involve several (multipoint) sites in large rooms at multiple locations. Besides the audio and visual transmission of meeting activities, allied videoconferencing technologies can be used to share documents and display information on whiteboards.
Simple analog videophone communication could be established as early as the invention of the television. Such an antecedent usually consisted of two closed-circuit television systems connected via coax cable or radio. An example of that was the German Reich Postzentralamt (post office) video telephone network serving Berlin and several German cities via coaxial cables between 1936 and 1940."German Postoffice To Use Television-Telephone For Its Communication System", (Associated Press) The Evening Independent, St. Petersburg, Fl, September 1, 1934Peters, C. Brooks, "Talks On 'See-Phone': Television Applied to German Telephones Enables Speakers to See Each Other...", The New York Times, September 18, 1938
During the first manned space flights, NASA used two radio-frequency (UHF or VHF) video links, one in each direction. TV channels routinely use this type of videotelephony when reporting from distant locations. The news media were to become regular users of mobile links to satellites using specially equipped trucks, and much later via special satellite videophones in a briefcase.
This technique was very expensive, though, and could not be used for applications such as telemedicine, distance education, and business meetings. Attempts at using normal telephony networks to transmit slow-scan video, such as the first systems developed by AT&T Corporation, first researched in the 1950s, failed mostly due to the poor picture quality and the lack of efficient video compression techniques. The greater 1 MHz bandwidth and 6 Mbit/s bit rate of the Picturephone in the 1970s also did not achieve commercial success, mostly due to its high cost, but also due to a lack of network effect —with only a few hundred Picturephones in the world, users had extremely few contacts they could actually call to, and interoperability with other videophone systems would not exist for decades.
It was only in the 1980s that digital telephony transmission networks became possible, such as with ISDN networks, assuring a minimum bit rate (usually 128 kilobits/s) for compressed video and audio transmission. During this time, there was also research into other forms of digital video and audio communication. Many of these technologies, such as the Media space, are not as widely used today as videoconferencing but were still an important area of research.Robert Stults, Media Space, Xerox PARC, Palo Alto, CA, 1986.Harrison, Steve. Media Space: 20+ Years of Mediated Life, Springer, 2009, ISBN 1-84882-482-3, ISBN 978-1-84882-482-9. The first dedicated systems started to appear in the market as ISDN networks were expanding throughout the world. One of the first commercial videoconferencing systems sold to companies came from PictureTel Corp., which had an Initial Public Offering in November, 1984.
In 1984 Concept Communication in the United States replaced the then-100 pound, US$100,000 computers necessary for teleconferencing, with a $12,000 circuit board that doubled the video frame rate from 15 up to 30 frames per second, and which reduced the equipment to the size of a circuit board fitting into standard personal computers. The company also secured a patent for a codec for full-motion videoconferencing, first demonstrated at AT&T Bell Labs in 1986.
thumb|right|Global Schoolhouse students communicating via CU-SeeMe, with a video framerate between 3-9 frames per second (1993).
Videoconferencing systems throughout the 1990s rapidly evolved from very expensive proprietary equipment, software and network requirements to a standards-based technology readily available to the general public at a reasonable cost.
Finally, in the 1990s, Internet Protocol-based videoconferencing became possible, and more efficient video compression technologies were developed, permitting desktop, or personal computer (PC)-based videoconferencing. In 1992 CU-SeeMe was developed at Cornell by Tim Dorcey et al. In 1995 the first public videoconference between North America and Africa took place, linking a technofair in San Francisco with a techno-rave and cyberdeli in Cape Town. At the 1998 Winter Olympics opening ceremony in Nagano, Japan, Seiji Ozawa conducted the Ode to Joy from Beethoven's Ninth Symphony simultaneously across five continents in near-real time.
While videoconferencing technology was initially used primarily within internal corporate communication networks, one of the first community service usages of the technology started in 1992 through a unique partnership with PictureTel and IBM Corporations which at the time were promoting a jointly developed desktop based videoconferencing product known as the PCS/1. Over the next 15 years, Project DIANE (Diversified Information and Assistance Network) grew to utilize a variety of videoconferencing platforms to create a multi-state cooperative public service and distance education network consisting of several hundred schools, neighborhood centers, libraries, science museums, zoos and parks, public assistance centers, and other community oriented organizations.
In the 2000s, videotelephony was popularized via free Internet services such as Skype and iChat, web plugins and on-line telecommunication programs that promoted low cost, albeit lower-quality, videoconferencing to virtually every location with an Internet connection.
thumb|right|Russian President Dmitry Medvedev attending the Singapore APEC summit, holding a videoconference with Rashid Nurgaliyev via a Tactical MXP, after an arms depot explosion in Russia (2009).
In May 2005, the first high definition video conferencing systems, produced by LifeSize Communications, were displayed at the Interop trade show in Las Vegas, Nevada, able to provide video at 30 frames per second with a 1280 by 720 display resolution.Polycom High-Definition (HD) Video Conferencing Polycom introduced its first high definition video conferencing system to the market in 2006. As of the 2010s, high definition resolution for videoconferencing became a popular feature, with most major suppliers in the videoconferencing market offering it.
Technological developments by videoconferencing developers in the 2010s have extended the capabilities of video conferencing systems beyond the boardroom for use with hand-held mobile devices that combine the use of video, audio and on-screen drawing capabilities broadcasting in real-time over secure networks, independent of location. Mobile collaboration systems now allow multiple people in previously unreachable locations, such as workers on an off-shore oil rig, the ability to view and discuss issues with colleagues thousands of miles away. Traditional videoconferencing system manufacturers have begun providing mobile applications as well, such as those that allow for live and still image streaming.VCLink for Mobile Devices - AVer Video Conferencing
Technology
thumb|Dual display: An older Polycom VSX 7000 system and camera used for videoconferencing, with two displays for simultaneous broadcast from separate locations (2008).
thumb|Various components and the camera of a LifeSize Communications Room 220 high definition multipoint system (2010).
thumb|A video conference meeting facilitated by Google Hangouts.
The core technology used in a videoconferencing system is digital compression of audio and video streams in real time. The hardware or software that performs compression is called a codec (coder/decoder). Compression rates of up to 1:500 can be achieved. The resulting digital stream of 1s and 0s is subdivided into labeled packets, which are then transmitted through a digital network of some kind (usually ISDN or IP). The use of audio modems in the transmission line allows for the use of POTS, or the Plain Old Telephone System, in some low-speed applications, such as videotelephony, because they convert the digital pulses to/from analog waves in the audio spectrum range.
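The scale of compression involved can be seen from simple arithmetic. The frame size, frame rate and colour depth below are assumed example values for an uncompressed stream; they are not taken from any particular product.

```python
# Rough arithmetic behind the compression ratios mentioned above: uncompressed video
# is far too large for ordinary network links, which is why a codec is essential.

width, height = 1280, 720        # pixels (assumed example resolution)
fps = 30                         # frames per second
bits_per_pixel = 24              # uncompressed RGB colour depth

raw_bps = width * height * fps * bits_per_pixel
print(f"Uncompressed: {raw_bps / 1e6:.0f} Mbit/s")           # about 664 Mbit/s

for ratio in (100, 500):
    print(f"At 1:{ratio} compression: {raw_bps / ratio / 1e6:.1f} Mbit/s")
```

At a 1:500 ratio the stream shrinks to roughly 1.3 Mbit/s, a rate that broadband connections can carry comfortably.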
The other components required for a videoconferencing system include:
Video input: (PTZ / 360° / Fisheye) video camera or webcam
Video output: computer monitor, television or projector
Audio input: microphones, CD/DVD player, cassette player, or any other source of preamplified audio output.
Audio output: usually loudspeakers associated with the display device or telephone
Data transfer: analog or digital telephone network, LAN or Internet
Computer: a data processing unit that ties together the other components, does the compressing and decompressing, and initiates and maintains the data linkage via the network.
There are basically three kinds of videoconferencing systems:
Dedicated systems have all required components packaged into a single piece of equipment, usually a console with a high quality remote controlled video camera. These cameras can be controlled at a distance to pan left and right, tilt up and down, and zoom. They became known as PTZ cameras. The console contains all electrical interfaces, the control computer, and the software or hardware-based codec. Omnidirectional microphones are connected to the console, as well as a TV monitor with loudspeakers and/or a video projector. There are several types of dedicated videoconferencing devices:
Large group videoconferencing are non-portable, large, more expensive devices used for large rooms and auditoriums.
Small group videoconferencing are non-portable or portable, smaller, less expensive devices used for small meeting rooms.
Individual videoconferencing are usually portable devices, meant for single users, have fixed cameras, microphones and loudspeakers integrated into the console.
Desktop systems are add-ons (hardware boards or software codecs) to normal PCs and laptops, transforming them into videoconferencing devices. A range of different cameras and microphones can be used with the add-on, which contains the necessary codec and transmission interfaces. Most desktop systems work with the H.323 standard. Videoconferences carried out via dispersed PCs are also known as e-meetings. These can be nonstandard (Microsoft Lync, Skype for Business, Google Hangouts, or Yahoo Messenger) or standards-based (Cisco Jabber).
WebRTC platforms are videoconferencing solutions that do not rely on an installed software application but run through a standard web browser. Solutions such as Adobe Connect and Cisco WebEx can be accessed by going to a URL sent by the meeting organizer, and various degrees of security can be attached to the virtual "room". Often the user must download a small piece of software, called an "add-in", to enable the browser to access the local camera and microphone and establish a connection to the meeting. WebRTC technology does not require any software or add-in installation; instead, a WebRTC-compliant internet browser itself acts as a client to facilitate one-to-one and one-to-many videoconferencing calls. Several enhanced flavours of WebRTC technology are provided by third-party vendors.
Conferencing layers
The components within a Conferencing System can be divided up into several different layers: User Interface, Conference Control, Control or Signal Plane, and Media Plane.
Videoconferencing User Interfaces (VUI) can be either graphical or voice-responsive. Many in the industry have encountered both types of interfaces; graphical interfaces are normally encountered on a computer. User interfaces for conferencing have a number of different uses: they can be used for scheduling, setup, and making a video call. Through the user interface the administrator is able to control the other three layers of the system.
Conference Control performs resource allocation, management and routing. This layer along with the User Interface creates meetings (scheduled or unscheduled) or adds and removes participants from a conference.
Control (Signaling) Plane contains the stacks that signal different endpoints to create a call and/or a conference. Signaling protocols include, but are not limited to, H.323 and the Session Initiation Protocol (SIP). These signals control incoming and outgoing connections as well as session parameters.
The Media Plane controls the audio and video mixing and streaming. This layer manages the Real-time Transport Protocol (RTP), the User Datagram Protocol (UDP) and the Real-time Transport Control Protocol (RTCP). RTP, carried over UDP, normally conveys information such as the payload type (which identifies the codec), frame rate, video size and many others. RTCP, on the other hand, acts as a quality-control protocol for detecting errors during streaming.
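As a concrete illustration of the media-plane fields mentioned above, the sketch below parses the 12-byte fixed RTP header defined in RFC 3550. It ignores CSRC lists and header extensions, and the example packet at the end is fabricated purely for demonstration.

```python
# Minimal parser for the RTP fixed header (RFC 3550). Illustrative only.
import struct

def parse_rtp_header(packet: bytes) -> dict:
    if len(packet) < 12:
        raise ValueError("too short for an RTP fixed header")
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,              # always 2 for current RTP
        "padding": bool(b0 & 0x20),
        "extension": bool(b0 & 0x10),
        "csrc_count": b0 & 0x0F,
        "marker": bool(b1 & 0x80),
        "payload_type": b1 & 0x7F,       # identifies the codec of the payload
        "sequence_number": seq,          # lets the receiver detect loss and reordering
        "timestamp": timestamp,          # sampling instant of the media
        "ssrc": ssrc,                    # identifies the synchronization source
    }

# Fabricated example: version 2 header carrying dynamic payload type 96.
example = struct.pack("!BBHII", 0x80, 96, 1234, 3000000, 0xDEADBEEF)
print(parse_rtp_header(example))
```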
Multipoint videoconferencing
Simultaneous videoconferencing among three or more remote points is possible by means of a Multipoint Control Unit (MCU). This is a bridge that interconnects calls from several sources (in a similar way to the audio conference call). All parties call the MCU, or the MCU can also call the parties which are going to participate, in sequence. There are MCU bridges for IP and ISDN-based videoconferencing. There are MCUs which are pure software, and others which are a combination of hardware and software. An MCU is characterised according to the number of simultaneous calls it can handle, its ability to conduct transposing of data rates and protocols, and features such as Continuous Presence, in which multiple parties can be seen on-screen at once. MCUs can be stand-alone hardware devices, or they can be embedded into dedicated videoconferencing units.
The MCU consists of two logical components:
A single multipoint controller (MC), and
Multipoint Processors (MP), sometimes referred to as the mixer.
The MC controls the conferencing while it is active on the signaling plane, which is simply where the system manages conferencing creation, endpoint signaling and in-conferencing controls. This component negotiates parameters with every endpoint in the network and controls conferencing resources.
While the MC controls resources and signaling negotiations, the MP operates on the media plane and receives media from each endpoint. The MP generates output streams from each endpoint and redirects the information to other endpoints in the conference.
Some systems are capable of multipoint conferencing with no MCU, stand-alone, embedded or otherwise. These use a standards-based H.323 technique known as "decentralized multipoint", where each station in a multipoint call exchanges video and audio directly with the other stations with no central "manager" or other bottleneck. The advantages of this technique are that the video and audio will generally be of higher quality because they don't have to be relayed through a central point. Also, users can make ad-hoc multipoint calls without any concern for the availability or control of an MCU. This added convenience and quality comes at the expense of some increased network bandwidth, because every station must transmit to every other station directly.
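The bandwidth trade-off described above can be quantified with a rough per-site calculation; the per-stream bitrate used below is an assumed example value.

```python
# Upstream bandwidth per site: decentralized "full mesh" multipoint versus an MCU call.
# In a full mesh each station sends its stream to every other station; with an MCU each
# station sends a single stream to the bridge.

def upstream_streams(participants: int, decentralized: bool) -> int:
    return participants - 1 if decentralized else 1

stream_mbps = 1.5   # assumed bitrate of one outgoing video stream
for n in (3, 5, 10):
    mesh = upstream_streams(n, True) * stream_mbps
    mcu = upstream_streams(n, False) * stream_mbps
    print(f"{n} sites: full mesh {mesh:.1f} Mbit/s per site, MCU {mcu:.1f} Mbit/s per site")
```

The full-mesh requirement grows linearly with the number of participants, which is the extra network cost that buys the higher quality and MCU independence described above.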
Videoconferencing modes
Videoconferencing systems use several common operating modes:
Voice-Activated Switch (VAS);
Continuous Presence.
In VAS mode, the MCU switches which endpoint can be seen by the other endpoints based on voice levels. If there are four people in a conference, the only site seen is the one that is talking; the location with the loudest voice will be seen by the other participants.
Continuous Presence mode displays multiple participants at the same time. The MP in this mode takes the streams from the different endpoints and puts them all together into a single video image. In this mode, the MCU normally sends the same type of images to all participants. Typically these types of images are called "layouts" and can vary depending on the number of participants in a conference.
Echo cancellation
A fundamental feature of professional videoconferencing systems is Acoustic Echo Cancellation (AEC). Echo can be defined as interference between the reflected source wave and the new wave created by the source. AEC is an algorithm which is able to detect when sounds or utterances re-enter the audio input of the videoconferencing codec, having come from the audio output of the same system after some time delay. If unchecked, this can lead to several problems including:
the remote party hearing their own voice coming back at them (usually significantly delayed)
strong reverberation, which makes the voice channel useless, and
howling created by feedback.
Echo cancellation is a processor-intensive task that usually works over a narrow range of sound delays.
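The article does not specify a particular cancellation algorithm; in practice acoustic echo cancellers are commonly built around adaptive filters. The sketch below uses a normalized least-mean-squares (NLMS) filter purely to illustrate the idea of subtracting an estimate of the echoed far-end signal from the microphone input; real AEC implementations add double-talk detection, delay estimation and non-linear processing.

```python
# Minimal NLMS echo-canceller sketch (illustrative only).
import numpy as np

def nlms_aec(far_end, mic, taps=256, mu=0.5, eps=1e-6):
    """Subtract an adaptive estimate of the echoed far-end signal from the mic signal."""
    w = np.zeros(taps)                  # adaptive filter modelling the echo path
    buf = np.zeros(taps)                # most recent far-end samples, newest first
    out = np.zeros(len(mic))
    for n in range(len(mic)):
        buf = np.roll(buf, 1)
        buf[0] = far_end[n]
        echo_estimate = w @ buf
        e = mic[n] - echo_estimate      # residual = near-end speech + uncancelled echo
        w += (mu / (eps + buf @ buf)) * e * buf
        out[n] = e
    return out

# Toy test: the "room" delays the far-end signal by 40 samples and attenuates it by 0.6;
# there is no near-end speech, so an ideal canceller would output silence.
rng = np.random.default_rng(0)
far = rng.standard_normal(5000)
mic = 0.6 * np.concatenate([np.zeros(40), far[:-40]])
residual = nlms_aec(far, mic)
print("echo power before:", round(float(np.mean(mic ** 2)), 3))
print("echo power after: ", round(float(np.mean(residual[-1000:] ** 2)), 5))
```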
Cloud-based video conferencing
Cloud-based video conferencing can be used without the hardware generally required by other video conferencing systems, and can be designed for use by SMEs, or larger international companies like Facebook. Cloud-based systems can handle either 2D or 3D video broadcasting. Cloud-based systems can also implement mobile calls, VOIP, and other forms of video calling. They can also come with a video recording function to archive past meetings.
Technical and other issues
Computer security experts have shown that poorly configured or inadequately supervised videoconferencing systems can permit an easy 'virtual' entry by computer hackers and criminals into company premises and corporate boardrooms, via their own videoconferencing systems. Some observers argue that three outstanding issues have prevented videoconferencing from becoming a standard form of communication, despite the ubiquity of videoconferencing-capable systems. These issues are:
Eye contact: Eye contact plays a large role in conversational turn-taking, perceived attention and intent, and other aspects of group communication.Vertegaal, "Explaining Effects of Eye Gaze on Mediated Group Conversations: Amount or Synchronization?" ACM Conference on Computer Supported Cooperative Work, 2002. While traditional telephone conversations give no eye contact cues, many videoconferencing systems are arguably worse in that they provide an incorrect impression that the remote interlocutor is avoiding eye contact. Some telepresence systems have cameras located in the screens that reduce the amount of parallax observed by the users. This issue is also being addressed through research that generates a synthetic image with eye contact using stereo reconstruction.Computer vision approaches to achieving eye contact appeared in the 1990s, such as Teleconferencing Eye Contact Using a Virtual Camera, ACM CHI 1993. More recently gaze correction systems using only a single camera have been shown, such as. Microsoft's GazeMaster system.Telcordia Technologies, formerly Bell Communications Research, owns a patent for eye-to-eye videoconferencing using rear projection screens with the video camera behind it, evolved from a 1960s U.S. military system that provided videoconferencing services between the White House and various other government and military facilities. This technique eliminates the need for special cameras or image processing.Google Patent
Appearance consciousness: A second psychological problem with videoconferencing is being on camera, with the video stream possibly even being recorded. The burden of presenting an acceptable on-screen appearance is not present in audio-only communication. Early studies by Alphonse Chapanis found that the addition of video actually impaired communication, possibly because of the consciousness of being on camera.
Signal latency: Transporting digital signals through the many processing steps involved takes time. In a telecommunicated conversation, an increased latency (time lag) larger than about 150–300 ms becomes noticeable and is soon perceived as unnatural and distracting. Therefore, next to a stable large bandwidth, a small total round-trip time is another major technical requirement for the communication channel in interactive videoconferencing; a rough illustrative latency budget is sketched below.
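Every figure in the sketch below is an assumed, order-of-magnitude value chosen for illustration, not a measurement from the sources above.

```python
# Illustrative one-way latency budget for an interactive video call (assumed values).
budget_ms = {
    "camera capture + encode": 40,
    "packetization + send buffering": 5,
    "network propagation + queuing": 50,
    "receive jitter buffer": 30,
    "decode + render": 25,
}

one_way = sum(budget_ms.values())
print(f"one-way: {one_way} ms, round trip: {2 * one_way} ms")
# This example sits right at the edge of the 150-300 ms range noted above.
```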
The issue of eye-contact may be solved with advancing technology, and presumably the issue of appearance consciousness will fade as people become accustomed to videoconferencing.
Standards
thumb|The Tandberg E20 is an example of a SIP-only device. Such devices need to route calls through a Video Communication Server to be able to reach H.323 systems, a process known as "interworking" (2009).
The International Telecommunication Union (ITU) (formerly: Consultative Committee on International Telegraphy and Telephony (CCITT)) has three umbrellas of standards for videoconferencing:
ITU H.320 is known as the standard for public switched telephone networks (PSTN) or videoconferencing over integrated services digital networks. While still prevalent in Europe, ISDN was never widely adopted in the United States and Canada.
ITU H.264 Scalable Video Coding (SVC) is a compression standard that enables videoconferencing systems to achieve highly error resilient Internet Protocol (IP) video transmissions over the public Internet without quality-of-service enhanced lines.SVC vs. H.264/AVC Error Resilience This standard has enabled wide scale deployment of high definition desktop videoconferencing and made possible new architectures,SVC White Papers which reduce latency between the transmitting sources and receivers, resulting in more fluid communication without pauses. In addition, an attractive factor for IP videoconferencing is that it is easier to set up for use along with web conferencing and data collaboration. These combined technologies enable users to have a richer multimedia environment for live meetings, collaboration and presentations. (A simplified illustration of scalable layer selection appears after this list.)
ITU V.80: videoconferencing is generally made compatible with the H.324 standard for point-to-point videotelephony over regular plain old telephone service (POTS) phone lines.
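The benefit of scalable coding mentioned in the H.264 SVC item above can be illustrated conceptually: a sender produces a base layer plus enhancement layers, and a receiver (or intermediate node) keeps only the layers its bandwidth allows, without re-encoding. The layer names and bitrates below are invented for the example and do not describe the actual H.264 SVC bitstream format.

```python
# Conceptual sketch of scalable-layer selection (not real H.264 SVC bitstream handling).

# Each entry: (description, cumulative bitrate in kbit/s when this and all lower layers are kept)
layers = [
    ("base layer: 320x180 @ 15 fps",        300),
    ("enhancement 1: 640x360 @ 30 fps",     900),
    ("enhancement 2: 1280x720 @ 30 fps",   2500),
]

def select_layers(available_kbps: int) -> str:
    """Keep the highest layer whose cumulative bitrate fits the available bandwidth."""
    chosen = layers[0][0]                   # the base layer is always required
    for description, cumulative_kbps in layers:
        if cumulative_kbps <= available_kbps:
            chosen = description
    return chosen

for bandwidth in (400, 1200, 4000):
    print(f"{bandwidth} kbit/s -> {select_layers(bandwidth)}")
```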
The Unified Communications Interoperability Forum (UCIF), a non-profit alliance between communications vendors, launched in May 2010. The organization's vision is to maximize the interoperability of UC based on existing standards. Founding members of UCIF include HP, Microsoft, Polycom, Logitech/LifeSize Communications and Juniper Networks.Unified Communications Interoperability ForumCollaboration Vendors Join for Interoperability
Social and institutional impact
Impact on the general public
High speed Internet connectivity has become more widely available at a reasonable cost and the cost of video capture and display technology has decreased. Consequently, personal videoconferencing systems based on a webcam, personal computer system, software compression and broadband Internet connectivity have become affordable to the general public. Also, the hardware used for this technology has continued to improve in quality, and prices have dropped dramatically. The availability of freeware (often as part of chat programs) has made software based videoconferencing accessible to many.
For over a century, futurists have envisioned a future in which telephone conversations would take place as actual face-to-face encounters, with video as well as audio. Sometimes it is simply not possible or practical to have face-to-face meetings with two or more people. Sometimes a telephone conversation or conference call is adequate. Other times, an e-mail exchange is adequate. However, videoconferencing adds another possible alternative, and can be considered when:
a live conversation is needed;
non-verbal (visual) information is an important component of the conversation;
the parties to the conversation cannot physically come to the same location; or
the expense or time of travel is a consideration.
Deaf, hard-of-hearing and mute individuals have a particular interest in the development of affordable high-quality videoconferencing as a means of communicating with each other in sign language. Unlike Video Relay Service, which is intended to support communication between a caller using sign language and another party using spoken language, videoconferencing can be used directly between two deaf signers.
Mass adoption and use of videoconferencing is still relatively low, with the following often claimed as causes:
Complexity of systems. Most users are not technical and want a simple interface. In hardware systems, an unplugged cord or a flat battery in a remote control is seen as a failure, contributing to a perception of unreliability that drives users back to traditional meetings. Successful systems are backed by support teams who can proactively support and provide fast assistance when required.
Perceived lack of interoperability: not all systems can readily interconnect, for example ISDN and IP systems require a gateway. Popular software solutions cannot easily connect to hardware systems. Some systems use different standards, features and qualities which can require additional configuration when connecting to dissimilar systems.
Bandwidth and quality of service: In some countries it is difficult or expensive to get a connection that is fast enough for good-quality videoconferencing. Technologies such as ADSL have limited upload speeds and cannot upload and download simultaneously at full speed (a rough uplink calculation is sketched below). As Internet speeds increase, higher-quality and high-definition videoconferencing will become more readily available.
Expense of commercial systems: well-designed telepresence systems require specially designed rooms, which can cost hundreds of thousands of dollars to fit out with codecs, integration equipment (such as multipoint control units), high-fidelity sound systems and furniture. Monthly charges may also be required for bridging services and high-capacity broadband service.
Self-consciousness about being on camera: especially for new users or older generations who may prefer less fidelity in their communications.
Lack of direct eye contact, an issue being circumvented in some higher end systems.
These are some of the reasons many systems are reserved for internal corporate use, where technical problems are less likely to result in lost sales. One alternative for companies lacking dedicated facilities is the rental of videoconferencing-equipped meeting rooms in cities around the world. Clients can book rooms and turn up for the meeting, with all technical aspects prearranged and support readily available if needed.
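The bandwidth constraint mentioned in the list above can also be made concrete with a back-of-the-envelope calculation: the outgoing video and audio streams, plus packet overhead, must fit within the uplink. The sketch below uses assumed bitrates and an assumed ADSL upload cap purely for illustration; none of the figures come from the sources cited in this article.

```python
# Back-of-the-envelope uplink check; all bitrates and the ADSL cap are assumptions.

def required_uplink_kbps(video_kbps, audio_kbps=64, overhead_fraction=0.15):
    """Total uplink needed, allowing a fractional margin for packet overhead."""
    return (video_kbps + audio_kbps) * (1 + overhead_fraction)

ADSL_UPLOAD_KBPS = 768  # a typical consumer ADSL uplink cap (assumption)

for label, video_kbps in [("low-resolution call", 400),
                          ("standard-definition call", 800),
                          ("high-definition call", 1500)]:
    need = required_uplink_kbps(video_kbps)
    verdict = "fits within" if need <= ADSL_UPLOAD_KBPS else "exceeds"
    print(f"{label}: ~{need:.0f} kbps needed, which {verdict} a {ADSL_UPLOAD_KBPS} kbps uplink")
```

Under these assumed figures only the lowest-bitrate call fits the uplink, which illustrates why asymmetric links constrain the quality of the video a participant can send even when downloads are fast.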
Impact on government and law
In the United States, videoconferencing has allowed testimony to be taken from individuals who are unable to attend the physical legal setting, prefer not to, or would be subjected to severe psychological stress in doing so. However, the use of video testimony by foreign or otherwise unavailable witnesses is controversial, as it may violate the Confrontation Clause of the Sixth Amendment of the U.S. Constitution.Tokson, Matthew J. Virtual Confrontation: Is Videoconference Testimony by an Unavailable Witness Constitutional?, University of Chicago Law Review, 2007, Vol. 74, No. 4.
In a military investigation in the state of North Carolina, Afghan witnesses have testified via videoconferencing.
In Hall County, Georgia, videoconferencing systems are used for initial court appearances. The systems link jails with court rooms, reducing the expenses and security risks of transporting prisoners to the courtroom.Case Study: Hall County, Lifesize.com website.
The U.S. Social Security Administration (SSA), which oversees the world's largest administrative judicial system under its Office of Disability Adjudication and Review (ODAR),U.S. Social Security Administration. New National Hearing Centre has made extensive use of videoconferencing to conduct hearings at remote locations.ODAE Pubs: 70-067 In Fiscal Year (FY) 2009, the U.S. Social Security Administration (SSA) conducted 86,320 videoconferenced hearings, a 55% increase over FY 2008.SSA Overview Performance In August 2010, the SSA opened its fifth and largest videoconferencing-only National Hearing Center (NHC), in St. Louis, Missouri. This continues the SSA's effort to use video hearings as a means to clear its substantial hearing backlog. Since 2007, the SSA has also established NHCs in Albuquerque, New Mexico, Baltimore, Maryland, Falls Church, Virginia, and Chicago, Illinois.
Impact on education
Videoconferencing provides students with the opportunity to learn by participating in two-way communication forums. Furthermore, teachers and lecturers worldwide can be brought to remote or otherwise isolated educational facilities. Students from diverse communities and backgrounds can come together to learn about one another, although language barriers will continue to persist. Such students are able to explore, communicate, analyze and share information and ideas with one another. Through videoconferencing, students can visit other parts of the world to speak with their peers, and visit museums and educational facilities. Such virtual field trips can provide enriched learning opportunities to students, especially those in geographically isolated locations, and to the economically disadvantaged. Small schools can use these technologies to pool resources and provide courses, such as in foreign languages, which could not otherwise be offered.
A few examples of benefits that videoconferencing can provide in campus environments include:
faculty members keeping in touch with classes while attending conferences;
guest lecturers brought into classes from other institutions;LifeSize Case Study
researchers collaborating with colleagues at other institutions on a regular basis without loss of time due to travel;
schools with multiple campuses collaborating and sharing professors;LifeSize Case Study
schools from two separate nations engaging in cross-cultural exchanges;AVer Case Study
faculty members participating in thesis defenses at other institutions;
administrators on tight schedules collaborating on budget preparation from different parts of campus;
faculty committee auditioning scholarship candidates;
researchers answering questions about grant proposals from agencies or review committees;
student interviews with employers in other cities; and
teleseminars.
Impact on medicine and health
Videoconferencing is a highly useful technology for real-time telemedicine and telenursing applications, such as diagnosis, consulting and the transmission of medical images. With videoconferencing, patients may contact nurses and physicians in emergency or routine situations; physicians and other paramedical professionals can discuss cases across large distances. Rural areas can use this technology for diagnostic purposes, thus saving lives and making more efficient use of health care money. For example, a rural medical center in Ohio, United States, used videoconferencing to successfully cut the number of transfers of sick infants to a distant hospital; such transfers had previously cost nearly $10,000 each.Adena Health System Uses LifeSize High Definition Video to Bring Remote Specialists to Infant Patients (media release), LifeSize.com website, December 8, 2008.
Special peripherals such as microscopes fitted with digital cameras, videoendoscopes, medical ultrasound imaging devices, otoscopes, etc., can be used in conjunction with videoconferencing equipment to transmit data about a patient. Recent developments in mobile collaboration on hand-held mobile devices have also extended video-conferencing capabilities to locations previously unreachable, such as a remote community, long-term care facility, or a patient's home.Van't Haaff, Corey. Virtually On-sight, Just for Canadian Doctors, March–April 2009, p. 22.
Impact on sign language communications
thumb|right|A deaf person using a video relay service at his workplace to communicate with a hearing person in London. (Courtesy: SignVideo)
A video relay service (VRS), also known as a 'video interpreting service' (VIS), is a service that allows deaf, hard-of-hearing and speech-impaired (D-HOH-SI) individuals to communicate by videoconferencing (or similar technologies) with hearing people in real-time, via a sign language interpreter.
A similar video interpreting service called video remote interpreting (VRI) is conducted through a different organization, often called a "Video Interpreting Service Provider" (VISP).UK Council on Deafness: Video Interpreting , Deafcouncil.org.uk website, Colchester, England, U.K. Retrieved 2009-09-12. VRS is a newer form of telecommunication service for the D-HOH-SI community; relay service in the United States had started earlier, in 1974, using a non-video technology called the telecommunications relay service (TRS).
One of the first demonstrations of the ability for telecommunications to help sign language users communicate with each other occurred when AT&T's videophone (trademarked as the "Picturephone") was introduced to the public at the 1964 New York World's Fair –two deaf users were able to communicate freely with each other between the fair and another city.Bell Laboratories RECORD (1969) A collection of several articles on the AT&T Picturephone (then about to be released) Bell Laboratories, Pg.134–153 & 160–187, Volume 47, No. 5, May/June 1969; Various universities and other organizations, including British Telecom's Martlesham facility, have also conducted extensive research on signing via videotelephony.New Scientist. Telephones Come To Terms With Sign Language, New Scientist, 19 August 1989, Vol.123, Iss.No.1678, pp.31.Sperling, George. Bandwidth Requirements for Video Transmission of American Sign Language and Finger Spelling, Science, AAAS, November 14, 1980, Vol. 210, pp.797-799, .Whybray, M.W. Moving Picture Transmission at Low Bitrates for Sign Language Communication, Martlesham, England: British Telecom Laboratories, 1995. The use of sign language via videotelephony was hampered for many years due to the difficulty of its use over slow analogue copper phone lines, coupled with the high cost of better quality ISDN (data) phone lines. Those factors largely disappeared with the introduction of more efficient video codecs and the advent of lower cost high-speed ISDN data and IP (Internet) services in the 1990s.
VRS services have become well developed nationally in Sweden since 1997Placencia Porrero, with Gunnar Hellstrom. Improving the Quality of Life for the European Citizen: Technology for Inclusive Design and Equality (Volume 4): The Public Swedish Video Relay Service, edited by: Placencia Porrero, E. Ballabio, IOS Press, 1998, pp.267–270, ISBN 90-5199-406-0, ISBN 978-90-5199-406-3. and also in the United States since the first decade of the 2000s. With the exception of Sweden, VRS has been provided in Europe for only a few years since the mid-2000s, and as of 2010 has not been made available in many European Union countries,European Union of the Deaf, EUD.eu website. with most European countries still lacking the legislation or the financing for large-scale VRS services and for providing the necessary telecommunication equipment to deaf users. Germany and the Nordic countries are among the other leaders in Europe, while the United States is another world leader in the provisioning of VRS services.
Impact on business
Videoconferencing can enable individuals in distant locations to participate in meetings on short notice, with time and money savings. Technology such as VoIP can be used in conjunction with desktop videoconferencing to enable low-cost face-to-face business meetings without leaving the desk, especially for businesses with widespread offices. The technology is also used for telecommuting, in which employees work from home. One research report based on a sampling of 1,800 corporate employees showed that, as of June 2010, 54% of the respondents with access to video conferencing used it "all of the time" or "frequently".
Intel Corporation has used videoconferencing to reduce both the costs and the environmental impacts of its business operations.
Videoconferencing is also currently being introduced on online networking websites, in order to help businesses form profitable relationships quickly and efficiently without leaving their place of work. This has been leveraged by banks to connect busy banking professionals with customers in various locations using video banking technology.
Videoconferencing on hand-held mobile devices (mobile collaboration technology) is being used in industries such as manufacturing, energy, healthcare, insurance, government and public safety. Live, visual interaction removes traditional restrictions of distance and time, often in locations previously unreachable, such as a manufacturing plant floor a continent away., Mobile video system visually connects global plant floor engineers, Control Engineering, May 28, 2009
In the increasingly globalized film industry, videoconferencing has become useful as a method by which creative talent in many different locations can collaborate closely on the complex details of film production. For example, for the 2013 award-winning animated film Frozen, Burbank-based Walt Disney Animation Studios hired the New York City-based husband-and-wife songwriting team of Robert Lopez and Kristen Anderson-Lopez to write the songs, which required two-hour-long transcontinental videoconferences nearly every weekday for about 14 months.
With the development of lower-cost endpoints, cloud-based infrastructure and technology trends such as WebRTC, videoconferencing is moving from being solely a business-to-business offering to a business-to-business and business-to-consumer offering.
Although videoconferencing has frequently proven its value, research has shown that some non-managerial employees prefer not to use it due to several factors, including anxiety.Wolfe, Mark. "Broadband videoconferencing as knowledge management tool," Journal of Knowledge Management 11, no. 2 (2007) Some such anxieties can be avoided if managers use the technology as part of the normal course of business. Remote workers can also adopt certain behaviors and best practices to stay connected with their co-workers and company. Freeman, Michael, "How to stay connected while working remotely"
Researchers also find that attendees of business and medical videoconferences must work harder to interpret information delivered during a conference than they would if they attended face-to-face. Ferran, Carlos and Watts, Stephanie. "Videoconferencing in the field: A heuristic processing model," Management Science 54, no. 9 (2008) They recommend that those coordinating videoconferences make adjustments to their conferencing procedures and equipment.
Impact on media relations
The concept of press videoconferencing was developed in October 2007 by the PanAfrican Press Association (APPA), a non-governmental organization based in Paris, France, to allow African journalists to participate in international press conferences on developmental and good governance issues.
Press videoconferencing permits international press conferences via videoconferencing over the Internet. Journalists can participate in an international press conference from any location, without leaving their offices or countries. They need only be seated at a computer connected to the Internet in order to ask their questions of the speaker.
In 2004, the International Monetary Fund introduced the Online Media Briefing Center, a password-protected site available only to professional journalists. The site enables the IMF to present press briefings globally and facilitates direct questions to briefers from the press. The site has been copied by other international organizations since its inception. More than 4,000 journalists worldwide are currently registered with the IMF.
Descriptive names and terminology
Videophone calls (also: videocalls, video chat as well as Skype and Skyping in verb form),PC Magazine. Definition: Video Calling, PC Magazine website. Retrieved 19 August 2010,Howell, Peter. The Lasting Appeal of 2001: A Space Odyssey, Toronto Star website, November 1, 2014; also published in print as "Forever 2001: Why Stanley Kubrick's Sci-Fi Masterpience Is More Popular Now Than In 1968", November 1, 2014, p. E1, E10. Retrieved November 2, 2014 from TheStar.com. Quote: "Public esteem and critical estimation of 2001 has grown steadily ever since, even as the title date has come and gone with very few of its far-out advancements having been realized — although.... innovations like iPads and Skyping have finally caught up with Kubrick's view of future living. [2001 actor] Lockwood marvels at how he's now able to Skype his wife and daughter, just as he does his movie parents in the space-to-Earth communication scene in 2001." differ from videoconferencing in that they expect to serve individuals, not groups. However that distinction has become increasingly blurred with technology improvements such as increased bandwidth and sophisticated software clients that can allow for multiple parties on a call. In general everyday usage the term videoconferencing is now frequently used instead of videocall for point-to-point calls between two units. Both videophone calls and videoconferencing are also now commonly referred to as a video link.
Webcams are popular, relatively low cost devices which can provide live video and audio streams via personal computers, and can be used with many software clients for both video calls and videoconferencing.
A videoconference system is generally higher cost than a videophone and provides greater capabilities. A videoconference (also known as a videoteleconference) allows two or more locations to communicate via live, simultaneous two-way video and audio transmissions. This is often accomplished by the use of a multipoint control unit (a centralized distribution and call management system) or by a similar non-centralized multipoint capability embedded in each videoconferencing unit. Again, technology improvements have circumvented traditional definitions by allowing multiple-party videoconferencing via web-based applications.
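The trade-off between a centralized multipoint control unit and a fully meshed multipoint call can be illustrated by counting the media streams each participant must send and receive. The sketch below is a purely hypothetical illustration of the two topologies, not a description of any particular product.

```python
# Hypothetical stream-count comparison: full mesh vs. a centralized MCU.

def mesh_streams_per_participant(n_participants):
    """In a full mesh, every participant sends to and receives from all others."""
    return {"sent": n_participants - 1, "received": n_participants - 1}

def mcu_streams_per_participant():
    """With an MCU, each endpoint sends one stream and receives one composite stream."""
    return {"sent": 1, "received": 1}

if __name__ == "__main__":
    for n in (3, 5, 10):
        mesh = mesh_streams_per_participant(n)
        mcu = mcu_streams_per_participant()
        print(f"{n} participants: mesh = {mesh['sent']} sent / {mesh['received']} received per endpoint; "
              f"MCU = {mcu['sent']} sent / {mcu['received']} received")
```

The per-endpoint load in a mesh grows with the number of participants, which is one reason larger conferences typically rely on an MCU or a similar server-side component while embedded multipoint tends to be limited to small calls.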
A telepresence system is a high-end videoconferencing system and service usually employed by enterprise-level corporate offices. Telepresence conference rooms use state-of-the-art room designs, video cameras, displays, sound systems and processors, coupled with high-to-very-high-capacity bandwidth transmissions.
Typical use of the various technologies described above include calling or conferencing on a one-on-one, one-to-many or many-to-many basis for personal, business, educational, deaf Video Relay Service and tele-medical, diagnostic and rehabilitative use or services. New services utilizing videocalling and videoconferencing, such as teachers and psychologists conducting online sessions, personal videocalls to inmates incarcerated in penitentiaries, and videoconferencing to resolve airline engineering issues at maintenance facilities, are being created or evolving on an ongoing basis.
A telepresence robot (also telerobotics) is a robotically controlled and motorized videoconferencing display that helps provide a better sense of remote physical presence for communication and collaboration in an office, home, school, etc., when one cannot be there in person. The robotic avatar and videoconferencing display-camera can move about and look around at the command of the remote person.Lehrbaum, Rick. "Attack of the Telepresence Robots", "InfoWeek", 2013.01.11 (accessed Dec. 8, 2013)
See also
H.331
List of video telecommunication services and product brands
Media phone
Mobile collaboration
Mobile VoIP
Press videoconferencing
Teleconference
Telecollaboration
US-Soviet Space Bridge
Visual Communication
VROC (Virtual Researcher on Call)
Web conferencing
References
Bibliography
Mulbach, Lothar; Bocker, Martin; Prussog, Angela. "Telepresence in Videocommunications: A Study on Stereoscopy and Individual Eye Contact", Human Factors, June 1995, Vol.37, No.2, pg.290, , Gale Document Number: GALE|A18253819. Accessed December 23, 2011 via General Science eCollection (subscription).
Further reading
Adeshina, Emmanuel. In-Person Visits Fade as Jails Set Up Video Units for Inmates and Families, The New York Times website, August 7, 2012, pg. A15 of the New York Edition.
Bajaj, Vikas. Transparent Government, Via Webcams in India, The New York Times, July 18, 2011, pg.B3. Published online: July 17, 2011.
Davis, Andrew W.; Weinstein, Ira M. The Business Case for Videoconferencing, Wainhouse Research, March 2005.
Greenberg, Alan D. Taking the Wraps off Videoconferencing in the US Classroom, Wainhouse Research, April 2009.
Hoffman, Jan. When Your Therapist Is Only a Click Away, The New York Times, September 25, 2011, pg. ST1. Also published September 23, 2011 online at www.nytimes.com. Updated on October 2, 2011.
Kopytoff, Verne G. Hewlett-Packard Sells Its Video Conferencing Business, The New York Times, June 1, 2011.
Lawlor, Julia. Videoconferencing: From Stage Fright to Stage Presence, The New York Times, August 27, 1998.
Lohr, Steve. As Travel Costs Rise, More Meetings Go Virtual, The New York Times, July 22, 2008.
Miller, Claire Cain. Logitech Buying a Maker Of Videoconference Tools, The New York Times, November 11, 2009.
Miller, Claire Cain. Logitech Breaks Into Videoconferencing, The New York Times, November 10, 2009 on line, and November 11, 2009 in print, p. B3. Discusses the acquisition of LifeSize Communications.
Millman, Howard. The Videoconference as a Bicoastal Pas de Deux, The New York Times, July 12, 2001.
O'Brien, Kevin. Stranded Travelers Turn to Videoconferencing, The New York Times, April 19, 2010. Article discusses the increased use of videoconferencing due to the eruption of an Icelandic volcano which severely curtailed air travel for several months.
ProAV Magazine. Being There ProAV Magazine. 7 November 2008.
Ramirez, Anthony. More Than Just a Phone Call; Video Conferencing And Photocopies, Too, The New York Times, September 15, 1993. Discusses the deployment of videoconferencing rooms in several hundred Kinkos locations.
Saint Louis, Catherine. With Enough Bandwidth, Many Join The Band, The New York Times, January 10, 2012 (online), January 11, 2012 (in print, New York Edition, pg. A1). Retrieved online January 11, 2012. Synopsis: a look at the pros and cons of videotelephony used for private, individual, music lessons.
Shannon, Victoria. Videoconferencing's virtual leap forward, The New York Times, August 29, 2007.
Sharkey, Joe. A Meeting in New York? Can’t We Videoconference?, The New York Times, May 11, 2009 online, and in print on May 12, 2009, p. B6 of the New York edition.
Vance, Ashlee. Cisco Buys Norwegian Firm for $3 Billion, The New York Times, October 1, 2009 online, and October 2, 2009 in print, p. B7. Discusses the acquisition of Tandberg.
Wang, Ses Open source tool detects videoconferencing equipment vulnerabilities, Help Net Security, 17 February 2012.
Wayner, Peter. Jerky Pictures and Sound Are History. Videoconferencing Is All Grown Up., The New York Times, June 16, 2005.
Category:Teleconferencing
Category:Groupware
Category:Assistive technology
Category:Videotelephony
Category:Video
Political party

A political party is a group of people who come together to contest elections and hold power in the government. The party agrees on some proposed policies and programmes, with a view to promoting the collective good or furthering their supporters' interests.
While there is some international commonality in the way political parties are recognized and in how they operate, there are often many differences, and some are significant. Many political parties have an ideological core, but some do not, and many represent very different ideologies than they did when first founded. In democracies, political parties are elected by the electorate to run a government. Many countries, such as Germany and India, have numerous powerful political parties, and some nations have one-party systems, such as China and Cuba. The United States is in practice a two-party system, with many smaller parties also participating. Its two most powerful parties are the Democratic Party and the Republican Party.
Historical dimensions
Political factions
The first political factions, cohering around a basic, if fluid, set of principles, emerged from the Exclusion Crisis and Glorious Revolution in late-17th-century England.J. R. Jones, The First Whigs. The Politics of the Exclusion Crisis. 1678–1683 (Oxford University Press, 1961), p. 4. The Whigs supported Protestant constitutional monarchy against absolute rule, while the Tories, originating in the Royalist (or "Cavalier") faction of the English Civil War, were conservative royalist supporters of a strong monarchy as a counterbalance to the republican tendencies of the Whigs. The Whigs were the dominant political faction for most of the first half of the 18th century; they supported the Hanoverian succession of 1715 against the Jacobite supporters of the deposed Roman Catholic Stuart dynasty and were able to purge Tory politicians from important government positions after the failed Jacobite rising of 1715. The leader of the Whigs was Robert Walpole, who maintained control of the government in the period 1721–1742; his protégé was Henry Pelham (1743–1754).
As the century wore on, the factions slowly began to adopt more coherent political tendencies as the interests of their power bases began to diverge. The Whig party's initial base of support among the great aristocratic families widened to include the emerging industrial interests and wealthy merchants. As well as championing constitutional monarchy with strict limits on the monarch's power, the Whigs adamantly opposed a Catholic king as a threat to liberty, and believed in extending toleration to nonconformist Protestants, or dissenters. A major influence on the Whigs were the liberal political ideas of John Locke,Richard Ashcraft and M. M. Goldsmith, "Locke, Revolution Principles, and the Formation of Whig Ideology," Historical Journal, Dec 1983, Vol. 26 Issue 4, pp. 773–800 and the concepts of universal rights employed by Locke and Algernon Sidney.Melinda S. Zook, "The Restoration Remembered: The First Whigs and the Making of their History," Seventeenth Century, Autumn 2002, Vol. 17 Issue 2, pp. 213–34
Although the Tories were dismissed from office for half a century, for most of this period (at first under the leadership of Sir William Wyndham), the Tories retained party cohesion, with occasional hopes of regaining office, particularly at the accession of George II (1727) and the downfall of the ministry of Sir Robert Walpole in 1742. They acted as a united, though unavailing, opposition to Whig corruption and scandals. At times they cooperated with the "Opposition Whigs", Whigs who were in opposition to the Whig government; however, the ideological gap between the Tories and the Opposition Whigs prevented them from coalescing as a single party. They finally regained power with the accession of George III in 1760 under Lord Bute.
Emergence
When they lost power, the old Whig leadership dissolved into a decade of factional chaos with distinct "Grenvillite", "Bedfordite", "Rockinghamite", and "Chathamite" factions successively in power, and all referring to themselves as "Whigs". Out of this chaos, the first distinctive parties emerged. The first such party was the Rockingham Whigs under the leadership of Charles Watson-Wentworth and the intellectual guidance of the political philosopher Edmund Burke. Burke laid out a philosophy that described the basic framework of the political party as "a body of men united for promoting by their joint endeavours the national interest, upon some particular principle in which they are all agreed". As opposed to the instability of the earlier factions, which were often tied to a particular leader and could disintegrate if removed from power, the party was centred around a set of core principles and remained out of power as a united opposition to government.
thumb|upright=1.2|In A Block for the Wigs (1783), James Gillray caricatured Fox's return to power in a coalition with North. George III is the blockhead in the centre.
A coalition including the Rockingham Whigs, led by the Earl of Shelburne, took power in 1782, only to collapse after Rockingham's death. The new government, led by the radical politician Charles James Fox in coalition with Lord North, was soon brought down and replaced by William Pitt the Younger in 1783. It was now that a genuine two-party system began to emerge, with Pitt leading the new Tories against a reconstituted "Whig" party led by Fox.Parliamentary History, xxiv, 213, 222, cited in Foord, His Majesty's Opposition, 1714–1830, p. 441
By the time of this split the Whig party was increasingly influenced by the ideas of Adam Smith, founder of classical liberalism. As Wilson and Reill (2004) note, "Adam Smith's theory melded nicely with the liberal political stance of the Whig Party and its middle-class constituents."Ellen Wilson and Peter Reill, Encyclopedia of the Enlightenment (2004) p. 298
The modern Conservative Party was created out of the 'Pittite' Tories of the early 19th century. In the late 1820s disputes over political reform broke up this grouping. A government led by the Duke of Wellington collapsed amidst dire election results. Following this disaster Robert Peel set about assembling a new coalition of forces. Peel issued the Tamworth Manifesto in 1834, which set out the basic principles of Conservatism: the necessity of reform in specific cases in order to survive, but opposition to unnecessary change that could lead to "a perpetual vortex of agitation". Meanwhile, the Whigs, along with free trade Tory followers of Robert Peel and independent Radicals, formed the Liberal Party under Lord Palmerston in 1859, and transformed into a party of the growing urban middle class under the long leadership of William Ewart Gladstone.
In America
Although the Founding Fathers of the United States did not originally intend for American politics to be partisan, early political controversies in the 1790s over the extent of federal government powers saw the emergence of two proto-political parties: the Federalist Party and the Democratic-Republican Party, championed by Framers Alexander Hamilton and James Madison, respectively.Richard Hofstadter, The Idea of a Party System: The Rise of Legitimate Opposition in the United States, 1780–1840 (1970)William Nisbet Chambers, ed. The First Party System (1972) However, a consensus reached on these issues ended party politics in 1816 for a decade, a period commonly known as the Era of Good Feelings.Stephen Minicucci, Internal Improvements and the Union, 1790–1860, Studies in American Political Development (2004), 18: pp. 160–85, (2004), Cambridge University Press,
Party politics revived in 1829 with the split of the Democratic-Republican Party into the Jacksonian Democrats led by Andrew Jackson, and the Whig Party, led by Henry Clay. The former evolved into the modern Democratic Party and the latter was replaced with the Republican Party as one of the two main parties in the 1850s.
Spread
thumb|upright|Charles Stewart Parnell, leader of the Irish Parliamentary Party
The second half of the nineteenth century saw the adoption of the party model of politics across Europe. In Germany, France, Austria and elsewhere, the 1848 Revolutions sparked a wave of liberal sentiment and the formation of representative bodies and political parties. The end of the century saw the formation of large socialist parties in Europe, some conforming to the teaching of Karl Marx, others adapting social democracy through the use of reformist and gradualist methods.
At the same time, the political party reached its modern form, with a membership disciplined through the use of a party whip and the implementation of efficient structures of control. The Home Rule League Party, campaigning for Home Rule for Ireland in the British Parliament, was fundamentally changed by the great Irish political leader Charles Stewart Parnell in the 1880s. In 1882, he changed his party's name to the Irish Parliamentary Party and created a well-organized grass-roots structure, introducing membership to replace "ad hoc" informal groupings. He created a new selection procedure to ensure the professional selection of party candidates committed to taking their seats, and in 1884 he imposed a firm 'party pledge' which obliged MPs to vote as a bloc in parliament on all occasions. The creation of a strict party whip and a formal party structure was unique at the time. His party's efficient structure and control contrasted with the loose rules and flexible informality found in the main British parties, which soon came to model themselves on the Parnellite example.
Structure
A political party is typically led by a party leader (the most powerful member and spokesperson representing the party), a party secretary (who maintains the daily work and records of party meetings), party treasurer (who is responsible for membership dues) and party chair (who forms strategies for recruiting and retaining party members, and also chairs party meetings). Most of the above positions are also members of the party executive, the leading organization which sets policy for the entire party at the national level. The structure is far more decentralized in the United States because of the separation of powers, federalism and the multiplicity of economic interests and religious sects. Even state parties are decentralized as county and other local committees are largely independent of state central committees. The national party leader in the U.S. will be the president, if the party holds that office, or a prominent member of Congress in opposition (although a big-state governor may aspire to that role). Officially, each party has a chairman for its national committee who is a prominent spokesman, organizer and fund-raiser, but without the status of prominent elected office holders.
In parliamentary democracies, on a regular, periodic basis, party conferences are held to elect party officers, although snap leadership elections can be called if enough members opt for such. Party conferences are also held in order to affirm party values for members in the coming year. American parties also meet regularly and, again, are more subordinate to elected political leaders.
Depending on the demographic spread of the party membership, party members form local or regional party committees in order to help candidates run for local or regional offices in government. These local party branches reflect the officer positions at the national level.
It is also customary for political party members to form wings for current or prospective party members, most of which fall into the following two categories:
identity-based: including youth wings, women's wings, ethnic minority wings, LGBT wings, etc.
position-based: including wings for candidates, mayors, governors, professionals, students, etc. The formation of these wings may have become routine but their existence is more of an indication of differences of opinion, intra-party rivalry, the influence of interest groups, or attempts to wield influence for one's state or region.
These are useful for party outreach, training and employment. Many young aspiring politicians seek these roles and jobs as stepping stones to their political careers in legislative and/or executive offices.
The internal structure of political parties must be democratic in some countries. In Germany, Article 21, paragraph 1, sentence 3 of the Basic Law (Grundgesetz) mandates inner-party democracy.Cf. Brettschneider, Nutzen der ökonomischen Theorie der Politik für eine Konkretisierung des Gebotes innerparteilicher Demokratie
Parliamentary parties
When the party is represented by members in the lower house of parliament, the party leader simultaneously serves as the leader of the parliamentary group of that full party representation; depending on a minimum number of seats held, Westminster-based parties typically allow for leaders to form frontbench teams of senior fellow members of the parliamentary group to serve as critics of aspects of government policy. When a party becomes the largest party not part of the Government, the party's parliamentary group forms the Official Opposition, with Official Opposition frontbench team members often forming the Official Opposition Shadow cabinet. When a party achieves enough seats in an election to form a majority, the party's frontbench becomes the Cabinet of government ministers. They are all elected members.
Regulation
The freedom to form, declare membership in, or campaign for candidates from a political party is considered a measurement of a state's adherence to liberal democracy as a political value. Regulation of parties may run from a crackdown on or repression of all opposition parties, a norm for authoritarian governments, to the repression of certain parties which hold or promote ideals which run counter to the general ideology of the state's incumbents (or possess membership by-laws which are legally unenforceable).
Furthermore, in the case of far-right, far-left and regionalism parties in the national parliaments of much of the European Union, mainstream political parties may form an informal cordon sanitaire which applies a policy of non-cooperation towards those "Outsider Parties" present in the legislature which are viewed as 'anti-system' or otherwise unacceptable for government. Cordons sanitaire, however, have been increasingly abandoned over the past two decades in multi-party democracies as the pressure to construct broad coalitions in order to win elections – along with the increased willingness of outsider parties themselves to participate in government – has led to many such parties entering electoral and government coalitions.McDonnell, Duncan and Newell, James (2011) 'Outsider Parties'.
Starting in the second half of the 20th century, modern democracies have introduced rules for the flow of funds through party coffers, e.g. the Canada Election Act 1976, the PPRA in the U.K. or the FECA in the U.S. Such political finance regimes stipulate a variety of regulations for the transparency of fundraising and expenditure, limit or ban specific kinds of activity and provide public subsidies for party activity, including campaigning.
Partisan style
Partisan style varies according to each jurisdiction, depending on how many parties there are, and how much influence each individual party has.
Nonpartisan systems
In a nonpartisan system, no official political parties exist, sometimes reflecting legal restrictions on political parties. In nonpartisan elections, each candidate is eligible for office on his or her own merits. In nonpartisan legislatures, there are typically no formal party alignments within the legislature. The administration of George Washington and the first few sessions of the United States Congress were nonpartisan. Washington also warned against political parties during his Farewell Address.Redding 2004 In the United States, the unicameral legislature of Nebraska is nonpartisan but is elected and votes on informal party lines. In Canada, the territorial legislatures of the Northwest Territories and Nunavut are nonpartisan. In New Zealand, Tokelau has a nonpartisan parliament. Many city and county governments are nonpartisan. Nonpartisan elections and modes of governance are common outside of state institutions.Abizadeh 2005. Unless there are legal prohibitions against political parties, factions within nonpartisan systems often evolve into political parties.
One-party systems
In one-party systems, one political party is legally allowed to hold effective power. Although minor parties may sometimes be allowed, they are legally required to accept the leadership of the dominant party. This party may not always be identical to the government, although sometimes positions within the party may in fact be more important than positions within the government. North Korea and China are examples; others can be found in Fascist states, such as Nazi Germany between 1934 and 1945. The one-party system is thus often equated with dictatorships and tyranny.
In dominant-party systems, opposition parties are allowed, and there may even be a deeply established democratic tradition, but other parties are widely considered to have no real chance of gaining power. Sometimes, political, social and economic circumstances, and public opinion are the reason for other parties' failure. Sometimes, typically in countries with less of an established democratic tradition, it is possible that the dominant party will remain in power by using patronage and sometimes by voting fraud. In the latter case, the distinction between a dominant-party and a one-party system becomes rather blurred. Examples of dominant-party systems include the People's Action Party in Singapore, the African National Congress in South Africa, the Cambodian People's Party in Cambodia, the Liberal Democratic Party in Japan, and the National Liberation Front in Algeria. Dominant-party systems also existed in Mexico with the Institutional Revolutionary Party until the 1990s, in the southern United States with the Democratic Party from the late 19th century until the 1970s, and in Indonesia with Golkar from the early 1970s until 1998.
Two-party systems
Two-party systems are found in states such as Jamaica, Malta, Ghana and the United States, in which two political parties dominate to such an extent that electoral success under the banner of any other party is almost impossible. One right-wing coalition party and one left-wing coalition party is the most common ideological breakdown in such a system, but in two-party states political parties are traditionally catch-all parties which are ideologically broad and inclusive.
The United States has become essentially a two-party system, since a conservative party (such as the Republican Party) and a liberal party (such as the Democratic Party) have usually dominated American politics. The first parties were called Federalist and Republican, followed by a brief period of Republican dominance before a split occurred between National Republicans and Democratic Republicans. The former became the Whig Party and the latter became the Democratic Party. The Whigs survived only for two decades before they split over the spread of slavery, those opposed becoming members of the new Republican Party, as did anti-slavery members of the Democratic Party. Third parties (such as the Libertarian Party) often receive little support and are very rarely the victors in elections. Despite this, there have been several examples of third parties siphoning votes from major parties that were expected to win (such as Theodore Roosevelt in the election of 1912 and George Wallace in the election of 1968). As third party movements have learned, the Electoral College's requirement of a nationally distributed majority makes it difficult for third parties to succeed. Thus, such parties rarely win many electoral votes, although their popular support within a state may tip it toward one party or the other. Wallace had weak support outside the South. More generally, parties with a broad base of support across regions or among economic and other interest groups have a greater chance of winning the necessary plurality in the U.S.'s largely single-member district, winner-take-all elections. The tremendous land area and large population of the country are formidable challenges to political parties with a narrow appeal.
The UK political system, while technically a multi-party system, has functioned generally as a two-party (sometimes called a "two-and-a-half party") system; since the 1920s the two largest political parties have been the Conservative Party and the Labour Party. Before the Labour Party rose in British politics the Liberal Party was the other major political party along with the Conservatives. Though coalition and minority governments have been an occasional feature of parliamentary politics, the first-past-the-post electoral system used for general elections tends to maintain the dominance of these two parties, though each has in the past century relied upon a third party to deliver a working majority in Parliament. (A plurality voting system usually leads to a two-party system, a relationship described by Maurice Duverger and known as Duverger's Law.Duverger 1954) There are also numerous other parties that hold or have held a number of seats in Parliament.
Multi-party systems
thumb|right|A poster for the European Parliament election 2004 in Italy, showing party lists
Multi-party systems are systems in which more than two parties are represented and elected to public office.
Australia, Canada, People's Republic of Bangladesh, Pakistan, India, Ireland, United Kingdom and Norway are examples of countries with two strong parties and additional smaller parties that have also obtained representation. The smaller or "third" parties may hold the balance of power in a parliamentary system, and thus may be invited to form a part of a coalition government together with one of the larger parties; or may instead act independently from the dominant parties.
More commonly, in cases where there are three or more parties, no one party is likely to gain power alone, and parties have to work with each other to form coalition governments. This is almost always the case in Germany at the national and state levels, and in most constituencies at the communal level. Furthermore, since the founding of the Republic of Iceland there has never been a government not led by a coalition, usually involving the Independence Party and/or the Progressive Party. A similar situation exists in the Republic of Ireland, where no one party has held power on its own since 1989. Since then, numerous coalition governments have been formed. These coalitions have been led exclusively by either Fianna Fáil or Fine Gael.
Political change is often easier with a coalition government than in one-party or two-party dominant systems. If factions in a two-party system are in fundamental disagreement on policy goals, or even principles, they can be slow to make policy changes, which appears to be the case now in the U.S. with power split between Democrats and Republicans. Still, coalition governments struggle, sometimes for years, to change policy and often fail altogether, post-World War II France and Italy being prime examples. When one party in a two-party system controls all elective branches, however, policy changes can be both swift and significant. Democrats Woodrow Wilson, Franklin Roosevelt and Lyndon Johnson were beneficiaries of such fortuitous circumstances, as were Republicans as far removed in time as Abraham Lincoln and Ronald Reagan. Barack Obama briefly had such an advantage between 2009 and 2011.
Funding
Political parties are funded by contributions from
party members and other individuals,
organizations, which share their political ideas (e.g. trade union affiliation fees) or which could benefit from their activities (e.g. corporate donations) or
governmental or public funding.See Heard, Alexander, 'Political financing'. In: Sills, David I. (ed.) International Emcyclopedia of the Social Sciences, vol. 12. New York, NY: Free Press – Macmillan, 1968, pp. 235–41; Paltiel, Khayyam Z., 'Campaign finance – contrasting practices and reforms'. In: Butler, David et al. (eds.), Democracy at the polls – a comparative study of competitive national elections. Washington, DC: AEI, 1981, pp. 138–72; Paltiel, Khayyam Z., 'Political finance'. In: Bogdanor, Vernon (ed.), The Blackwell Encyclopedia of Political Institutions. Oxford, UK: Blackwell, 1987, pp. 454–56; 'Party finance', in: Kurian, George T. et al. (eds.) The encyclopedia of political science. vol 4, Washington, DC: CQ Press, 2011, pp. 1187–89.
Political parties, still called factions by some, especially those in the governmental apparatus, are lobbied vigorously by organizations, businesses and special interest groups such as trade unions. Money and gifts-in-kind to a party, or its leading members, may be offered as incentives. Such donations are the traditional source of funding for all right-of-centre cadre parties. Starting in the late 19th century these parties were opposed by the newly founded left-of-centre workers' parties. They started a new party type, the mass membership party, and a new source of political fundraising, membership dues.
From the second half of the 20th century on, parties which continued to rely on donations or membership subscriptions ran into mounting problems. Along with the increased scrutiny of donations, there has been a long-term decline in party memberships in most western democracies, which itself places more strain on funding. For example, in the United Kingdom and Australia membership of the two main parties in 2006 was less than one-eighth of what it was in 1950, despite significant increases in population over that period.
In some parties, such as the post-communist parties of France and Italy or the Sinn Féin party and the Socialist Party, elected representatives (i.e. incumbents) take only the average industrial wage from their salary as a representative, while the rest goes into party coffers. Although these examples may be rare nowadays, "rent-seeking" continues to be a feature of many political parties around the world.Foresti and Wild 2010. Support to political parties: a missing piece of the governance puzzle. London: Overseas Development Institute
In the United Kingdom, it has been alleged that peerages have been awarded to contributors to party funds, the benefactors becoming members of the House of Lords and thus being in a position to participate in legislating. Famously, Lloyd George was found to have been selling peerages. To prevent such corruption in the future, Parliament passed the Honours (Prevention of Abuses) Act 1925 into law. Thus the outright sale of peerages and similar honours became a criminal act. However, some benefactors are alleged to have attempted to circumvent this by cloaking their contributions as loans, giving rise to the 'Cash for Peerages' scandal.
Such activities as well as assumed "influence peddling" have given rise to demands that the scale of donations should be capped. As the costs of electioneering escalate, so the demands made on party funds increase. In the UK some politicians are advocating that parties should be funded by the state; a proposition that promises to give rise to interesting debate in a country that was the first to regulate campaign expenses (in 1883).
In many other democracies such subsidies for party activity (in general or just for campaign purposes) were introduced decades ago. Public financing for parties and/or candidates (during election times and beyond) has several permutations and is increasingly common. Germany, Sweden, Israel, Canada, Australia, Austria and Spain are cases in point. More recently, among others, France, Japan, Mexico, the Netherlands and Poland have followed suit.For details you may want to consult specific articles on Campaign finance in the United States, Federal political financing in Canada, Party finance in Germany, Political donations in Australia, Political finance, Political funding in Japan, Political funding in the United Kingdom.
There are two broad categories of public funding, direct, which entails a monetary transfer to a party, and indirect, which includes broadcasting time on state media, use of the mail service or supplies. According to the Comparative Data from the ACE Electoral Knowledge Network, out of a sample of over 180 nations, 25% of nations provide no direct or indirect public funding, 58% provide direct public funding and 60% of nations provide indirect public funding.ACEproject.org ACE Electoral Knowledge Network: Comparative Data: Political Parties and Candidates Some countries provide both direct and indirect public funding to political parties. Funding may be equal for all parties or depend on the results of previous elections or the number of candidates participating in an election.ACEproject.org ACE Electoral Knowledge Network: Comparative Data: Political Parties and Candidates Frequently parties rely on a mix of private and public funding and are required to disclose their finances to the Election management body.ACEproject.org ACE Encyclopaedia: Public funding of political parties
In fledgling democracies funding can also be provided by foreign aid. International donors provide financing to political parties in developing countries as a means to promote democracy and good governance. Support can be purely financial or otherwise. Frequently it is provided as capacity development activities including the development of party manifestos, party constitutions and campaigning skills. Developing links between ideologically linked parties is another common feature of international support for a party. Sometimes this can be perceived as directly supporting the political aims of a political party, such as the support of the US government to the Georgian party behind the Rose Revolution. Other donors work on a more neutral basis, where multiple donors provide grants in countries accessible by all parties for various aims defined by the recipients. There have been calls by leading development think-tanks, such as the Overseas Development Institute, to increase support to political parties as part of developing the capacity to deal with the demands of interest-driven donors to improve governance.
Colors and emblems
Generally speaking, around the world, political parties associate themselves with colors, primarily for identification, especially for voter recognition during elections. Conservative parties generally use blue or black.
Pink sometimes signifies moderate socialist parties. Yellow is often used for libertarianism or classical liberalism. Red often signifies social democratic, socialist or communist parties.
Green is the color for green parties, Islamist parties, Nordic agrarian parties and Irish republican parties. Orange is sometimes a color of nationalism, such as in the Netherlands, in Israel with the Orange Camp or with Ulster Loyalists in Northern Ireland; it is also a color of reform, such as in Ukraine. In the past, purple was considered the color of royalty (like white), but today it is sometimes used for feminist parties. White is also associated with nationalism. "Purple Party" is also used as an academic hypothetical of an undefined party, as a centrist party in the United States (because purple is created by mixing the main parties' colors of red and blue) and as a highly idealistic "peace and love" party, in a similar vein to a Green Party, perhaps.http://www.purpleparty.com Black is generally associated with fascist parties, going back to Benito Mussolini's Blackshirts, but also with anarchism. Similarly, brown is sometimes associated with Nazism, going back to the Nazi Party's brown-uniformed storm troopers.
Color associations are useful for mnemonics when voter illiteracy is significant. Another case where they are used is when it is not desirable to make rigorous links to parties, particularly when coalitions and alliances are formed between political parties and other organizations, for example: Red Tory, "Purple" (Red-Blue) alliances, Red-green alliances, Blue-green alliances, Traffic light coalitions, Pan-green coalitions, and Pan-blue coalitions.
Political color schemes in the United States diverge from international norms. Since 2000, red has become associated with the right-wing Republican Party and blue with the left-wing Democratic Party. However, unlike the political color schemes of other countries, the parties did not choose those colors; they were used in news coverage of the 2000 election results and the ensuing legal battle, and caught on in popular usage. Prior to the 2000 election the media typically alternated which color represented which party each presidential election cycle. The color scheme happened to get inordinate attention that year, so the cycle was stopped lest it cause confusion in the following election.
The emblem of socialist parties is often a red rose held in a fist. Communist parties often use a hammer to represent the worker, a sickle to represent the farmer, or both a hammer and a sickle to refer to both at the same time.
The emblem of Nazism, the swastika or "hakenkreuz", has been adopted as a near-universal symbol for almost any organized white supremacist group, even though it dates from more ancient times.
Symbols can be very important when the overall electorate is illiterate. In the 2005 Kenyan constitutional referendum, supporters of the constitution used the banana as their symbol, while the "no" side used an orange.
International organization
During the 19th and 20th centuries, many national political parties organized themselves into international organizations along similar policy lines. Notable examples are The Universal Party, the International Workingmen's Association (also called the First International), the Socialist International (also called the Second International), the Communist International (also called the Third International), and the Fourth International, as organizations of working class parties, or the Liberal International (yellow), Hizb ut-Tahrir, the Christian Democratic International and the International Democrat Union (blue). Organized in Italy in 1945, the International Communist Party, headquartered in Florence since 1974, has sections in six countries. Worldwide green parties have recently established the Global Greens. The Universal Party, the Socialist International, the Liberal International, and the International Democrat Union are all based in London.
Some administrations (e.g. Hong Kong) outlaw formal linkages between local and foreign political organizations, effectively outlawing international political parties.
Types
French political scientist Maurice Duverger drew a distinction between cadre parties and mass parties. Cadre parties were political elites that were concerned with contesting elections and restricted the influence of outsiders, who were only required to assist in election campaigns. Mass parties tried to recruit new members who were a source of party income and were often expected to spread party ideology as well as assist in elections. Socialist parties are examples of mass parties, while the British Conservative Party and the German Christian Democratic Union are examples of hybrid parties. In the United States, where both major parties were cadre parties, the introduction of primaries and other reforms has transformed them so that power is held by activists who compete over influence and nomination of candidates.Ware, Political parties, pp. 65–67
Klaus von Beyme categorized European parties into nine families, which described most parties. He was able to arrange seven of them from left to right: communist, socialist, green, liberal, Christian democratic, conservative and libertarian. The position of two other types, agrarian and regional/ethnic parties varied.Ware, Political parties, p. 22
See also
Elite party
Index of politics articles
List of political parties
List of ruling political parties by country
Particracy (a political regime dominated by one or more parties)
Party class
Party line (politics)
UCLA School of Political Parties
References
External links
U.S. Party Platforms from 1840 to 2004 at The American Presidency Project: UC Santa Barbara
Political resources on the net
Category:Elections
Category:Political parties | 23,996 | 2017-01 |
Gregorian calendar | The Gregorian calendar is internationally the most widely used civil calendar.Introduction to Calendars. United States Naval Observatory. Retrieved 15 January 2009.Calendars by L. E. Doggett. Section 2.The international standard for the representation of dates and times, ISO 8601, uses the Gregorian calendar. Section 3.2.1. It is named after Pope Gregory XIII, who introduced it in October 1582.
The calendar was a refinement to the Julian calendarSee Wikisource English translation of the (Latin) 1582 papal bull Inter gravissimas. involving a 0.002% correction in the length of the year. The motivation for the reform was to stop the drift of the calendar with respect to the equinoxes and solstices—particularly the vernal equinox, which set the date for Easter celebrations. Transition to the Gregorian calendar would restore the holiday to the time of the year in which it was celebrated when introduced by the early Church. The reform was adopted initially by the Catholic countries of Europe. Protestants and Eastern Orthodox countries continued to use the traditional Julian calendar and adopted the Gregorian reform after a time, for the sake of convenience in international trade. The last European country to adopt the reform was Greece, in 1923.
The Gregorian reform contained two parts: a reform of the Julian calendar as used prior to Pope Gregory XIII's time and a reform of the lunar cycle used by the Church, with the Julian calendar, to calculate the date of Easter. The reform was a modification of a proposal made by Aloysius Lilius.Moyer (1983). His proposal included reducing the number of leap years in four centuries from 100 to 97, by making 3 out of 4 centurial years common instead of leap years. Lilius also produced an original and practical scheme for adjusting the epacts of the moon when calculating the annual date of Easter, solving a long-standing obstacle to calendar reform.
The Gregorian reform modified the Julian calendar's scheme of leap years as follows:
Every year that is exactly divisible by four is a leap year, except for years that are exactly divisible by 100, but these centurial years are leap years if they are exactly divisible by 400. For example, the years 1700, 1800, and 1900 are not leap years, but the years 1600 and 2000 are.Introduction to Calendars. (15 May 2013). United States Naval Observatory.
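The rule lends itself to a direct test. The following minimal sketch in Python is illustrative only (the function name is not taken from any standard); Python's standard library exposes the same rule as calendar.isleap.

    import calendar

    def is_gregorian_leap_year(year):
        """Leap-year rule of the Gregorian calendar, as stated above."""
        if year % 4 != 0:
            return False              # not divisible by 4: common year
        if year % 100 != 0:
            return True               # divisible by 4 but not by 100: leap year
        return year % 400 == 0        # centurial years are leap only if divisible by 400

    # The examples given above: 1700, 1800 and 1900 are common years; 1600 and 2000 are leap years.
    assert [is_gregorian_leap_year(y) for y in (1700, 1800, 1900)] == [False, False, False]
    assert [is_gregorian_leap_year(y) for y in (1600, 2000)] == [True, True]
    for y in (1700, 1800, 1900, 1600, 2000):
        assert is_gregorian_leap_year(y) == calendar.isleap(y)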
In addition to the change in the mean length of the calendar year from 365.25 days (365 days 6 hours) to 365.2425 days (365 days 5 hours 49 minutes 12 seconds), a reduction of 10 minutes 48 seconds per year, the Gregorian calendar reform also dealt with the accumulated difference between these lengths. The canonical Easter tables were devised at the end of the third century, when the vernal equinox fell either on 20 March or 21 March depending on the year's position in the leap year cycle. As the rule was that the full moon preceding Easter was not to precede the equinox, the date was fixed at 21 March for computational purposes and the earliest date for Easter was fixed at 22 March. The Gregorian calendar reproduced these conditions by removing ten days.Ziggelaar (1983), p. 223.
To unambiguously specify the date, dual dating or Old Style (O.S.) and New Style (N.S.) are sometimes used with dates. Dual dating uses two consecutive years because of differences in the starting date of the year, or includes both the Julian and Gregorian dates. Old Style and New Style (N.S.) indicate either whether the start of the Julian year has been adjusted to start on 1 January (N.S.) even though documents written at the time use a different start of year (O.S.), or whether a date conforms to the Julian calendar (O.S.) rather than the Gregorian (N.S.).
The Gregorian calendar continued to use the previous calendar era (year-numbering system), which counts years from the traditional date of the nativity (Anno Domini), originally calculated in the 6th century by Dionysius Exiguus.Nineteen-Year Cycle of Dionysius. Introduction and first . This year-numbering system, also known as Dionysian era or Common Era, is the predominant international standard today.The first known occurrence of Common Era in English dates to 1708. Years before the beginning of the era are abbreviated in English as either BC for "Before Christ", or as BCE for "Before the Common Era"
Two era names occur within the bull Inter gravissimas itself: "in the year of the Incarnation of the Lord" for the year it was signed, and "in the year from the Nativity of our Lord Jesus Christ" for the year it was printed. Les canons of Les textes fondateurs du calendrier grégorien
Description
A year is divided into twelve months:
 No.  Name       Length in days
 1    January    31
 2    February   28 or 29
 3    March      31
 4    April      30
 5    May        31
 6    June       30
 7    July       31
 8    August     31
 9    September  30
 10   October    31
 11   November   30
 12   December   31
The Gregorian calendar is a solar calendar. A regular Gregorian year consists of 365 days, but as in the Julian calendar, in a leap year, a leap day is added to February. In the Julian calendar a leap year occurs every 4 years, but the Gregorian calendar omits 3 leap days every 400 years. In the Julian calendar, this leap day was inserted by doubling 24 February, and the Gregorian reform did not change the date of the leap day. In the modern period, it has become customary to number the days from the beginning of the month, and 29 February is often considered as the leap day. Some churches, notably the Roman Catholic Church, delay February festivals after the 23rd by one day in leap years.Richards, p. 101
Gregorian years are identified by consecutive year numbers.Clause 3.2.1 ISO 8601 The cycles repeat completely every 146,097 days, which equals 400 years.The cycle described applies to the solar, or civil, calendar. If the ecclesiastical lunar rules are also considered, the lunisolar Easter computus cycle repeats only after 5,700,000 years of 2,081,882,250 days in 70,499,183 lunar months, based on an assumed mean lunar month of 29 days 12 hours 44 minutes 2 seconds. (Seidelmann (1992), p. 582) [To properly function as an Easter computus, this lunisolar cycle must have the same mean year as the Gregorian solar cycle, and indeed that is exactly the case.]The extreme length of the Gregorian Easter computus is due to its being the product of the 19-year Metonic cycle, the thirty different possible values of the epact, and the least common multiple (10,000) of the 400-year and 2,500-year solar and lunar correction cycles. (Walker 1945, 218) Of these 400 years, 303 are regular years of 365 days and 97 are leap years of 366 days. A mean calendar year is 365 97/400 days = 365.2425 days, or 365 days, 5 hours, 49 minutes and 12 seconds.The same result is obtained by summing the fractional parts implied by the rule: 365 + 1/4 − 1/100 + 1/400 = 365.2425.
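The totals quoted above can be checked with a few lines of exact arithmetic; the snippet below is a verification sketch only, not part of any calendar specification.

    from fractions import Fraction

    # 400-year Gregorian cycle: 303 common years of 365 days and 97 leap years of 366 days
    days_in_cycle = 303 * 365 + 97 * 366
    assert days_in_cycle == 146_097

    # Mean calendar year: 146,097 / 400 = 365.2425 days
    assert Fraction(days_in_cycle, 400) == Fraction(3652425, 10000)

    # The fraction of a day beyond 365 days equals 5 hours 49 minutes 12 seconds
    extra_seconds = days_in_cycle * 86_400 // 400 - 365 * 86_400
    assert extra_seconds == 5 * 3600 + 49 * 60 + 12

    # Same result from summing the fractional parts implied by the leap-year rule
    assert Fraction(365) + Fraction(1, 4) - Fraction(1, 100) + Fraction(1, 400) == Fraction(days_in_cycle, 400)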
A calendar date is fully specified by the year (numbered by some scheme beyond the scope of the calendar itself), the month (identified by name or number), and the day of the month (numbered sequentially starting at 1). Although the calendar year currently runs from 1 January to 31 December, at previous times year numbers were based on a different starting point within the calendar (see the "beginning of the year" section below).
Gregorian reform
thumb|First page of the papal bull Inter gravissimas
thumb| Detail of the pope's tomb by Camillo Rusconi (completed 1723); Antonio Lilio is genuflecting before the pope, presenting his printed calendar.
The Gregorian calendar was a reform of the Julian calendar. It was instituted in 1582 by Pope Gregory XIII, after whom the calendar was named, by papal bull Inter gravissimas dated 24 February 1582. The motivation for the adjustment was to bring the date for the celebration of Easter to the time of year in which it was celebrated when it was introduced by the early Church. Although a recommendation of the First Council of Nicaea in 325 specified that all Christians should celebrate Easter on the same day, it took almost five centuries before virtually all Christians achieved that objective by adopting the rules of the Church of Alexandria (see Easter for the issues which arose).The last major Christian region to accept the Alexandrian rules was the Carolingian Empire (most of Western Europe) during 780–800. The last monastery in England to accept the Alexandrian rules did so in 931, and a few churches in southwest Asia beyond the eastern border of the Byzantine Empire continued to use rules that differed slightly, causing four dates for Easter to differ every 532 years.
Background
Because the spring equinox was tied to the date of Easter, the Roman Catholic Church considered the seasonal drift in the date of Easter undesirable. The Church of Alexandria celebrated Easter on the Sunday after the 14th day of the moon (computed using the Metonic cycle) that falls on or after the vernal equinox, which they placed on 21 March. However, the Church of Rome still regarded 25 March (Lady Day) as the equinox (until 342), and used a different cycle to compute the day of the moon.Pedersen (1983), pp. 42–43. In the Alexandrian system, since the 14th day of the Easter moon could fall at earliest on 21 March its first day could fall no earlier than 8 March and no later than 5 April. This meant that Easter varied between 22 March and 25 April. In Rome, Easter was not allowed to fall later than 21 April, that being the day of the Parilia or birthday of Rome and a pagan festival. The first day of the Easter moon could fall no earlier than 5 March and no later than 2 April.
Easter was the Sunday after the 15th day of this moon, whose 14th day was allowed to precede the equinox. Where the two systems produced different dates there was generally a compromise so that both churches were able to celebrate on the same day. By the 10th century all churches (except some on the eastern border of the Byzantine Empire) had adopted the Alexandrian Easter, which still placed the vernal equinox on 21 March, although Bede had already noted its drift in 725—it had drifted even further by the 16th century.For example, in the Julian calendar, at Rome in 1550, the March equinox occurred at 11 Mar 6:51 AM local mean time. "Seasons calculator", Time and Date AS, 2014.
Worse, the reckoned Moon that was used to compute Easter was fixed to the Julian year by a 19-year cycle. That approximation built up an error of one day every 310 years, so by the 16th century the lunar calendar was out of phase with the real Moon by four days.
Preparation
In order to solve the problem, the University of Salamanca, Spain, sent a technical paper in 1515 but it was rejected. The Council of Trent approved a plan in 1563 for correcting the calendrical errors, requiring that the date of the vernal equinox be restored to that which it held at the time of the First Council of Nicaea in 325 and that an alteration to the calendar be designed to prevent future drift. This would allow for a more consistent and accurate scheduling of the feast of Easter.
In 1577, a Compendium was sent to expert mathematicians outside the reform commission for comments. Some of these experts, including Giambattista Benedetti and Giuseppe Moleto, believed Easter should be computed from the true motions of the sun and moon, rather than using a tabular method, but these recommendations were not adopted.Ziggelaar (1983), pp. 211, 214. The reform adopted was a modification of a proposal made by the Calabrian doctor Aloysius Lilius (or Lilio).
Lilius's proposal included reducing the number of leap years in four centuries from 100 to 97, by making three out of four centurial years common instead of leap years. He also produced an original and practical scheme for adjusting the epacts of the moon when calculating the annual date of Easter, solving a long-standing obstacle to calendar reform.
Ancient tables provided the sun's mean longitude.See, for example,Tabule illustrissimi principis regis alfonsii, Prague 1401 −4 (Latin). A full set of Alphonsine Tables (including tables for mean motions, conjunctions of sun and moon, equation of time, spherical astronomy, longitudes and latitudes of cities, star tables, eclipse tables).For an example of the information provided see Jacques Cassini, Tables astronomiques du soleil, de la lune, des planetes, des etoiles fixes, et des satellites de Jupiter et de Saturne, Paris 1740, available at (go forward ten pages to Table III on p. 10). Christopher Clavius, the architect of the Gregorian calendar, noted that the tables agreed neither on the time when the sun passed through the vernal equinox nor on the length of the mean tropical year. Tycho Brahe also noticed discrepancies. The Gregorian leap year rule (97 leap years in 400 years) was put forward by Petrus Pitatus of Verona in 1560. He noted that it is consistent with the tropical year of the Alfonsine tables and with the mean tropical year of Copernicus (De revolutionibus) and Reinhold (Prutenic tables). The three mean tropical years in Babylonian sexagesimals as the excess over 365 days (the way they would have been extracted from the tables of mean longitude) were 14,33,9,57 (Alphonsine), 14,33,11,12 (Copernicus) and 14,33,9,24 (Reinhold). All values are the same to two places (14:33) and this is also the mean length of the Gregorian year. Thus Pitatus' solution would have commended itself to the astronomers.Swerdlow (1986).
Lilius's proposals had two components. Firstly, he proposed a correction to the length of the year. The mean tropical year is 365.24219 days long.Meeus and Savoie (1992). As the average length of a Julian year is 365.25 days, the Julian year is almost 11 minutes longer than the mean tropical year. The discrepancy results in a drift of about three days every 400 years. Lilius's proposal resulted in an average year of 365.2425 days (see Accuracy). At the time of Gregory's reform there had already been a drift of 10 days since the Council of Nicaea, resulting in the vernal equinox falling on 10 or 11 March instead of the ecclesiastically fixed date of 21 March, and if unreformed it would drift further. Lilius proposed that the 10-day drift should be corrected by deleting the Julian leap day on each of its ten occurrences over a period of forty years, thereby providing for a gradual return of the equinox to 21 March.
Lilius's work was expanded upon by Christopher Clavius in a closely argued, 800-page volume. He would later defend his and Lilius's work against detractors. Clavius's opinion was that the correction should take place in one move, and it was this advice which prevailed with Gregory.
The second component consisted of an approximation which would provide an accurate yet simple, rule-based calendar. Lilius's formula was a 10-day correction to revert the drift since the Council of Nicaea, and the imposition of a leap day in only 97 years in 400 rather than in 1 year in 4. The proposed rule was that years divisible by 100 would be leap years only if they were divisible by 400 as well.
The 19-year cycle used for the lunar calendar was also to be corrected by one day every 300 or 400 years (8 times in 2500 years) along with corrections for the years that are no longer leap years (i.e., 1700, 1800, 1900, 2100, etc.). In fact, a new method for computing the date of Easter was introduced.
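The reformed computus itself is defined by the epact tables in the calendar's canons; purely for illustration, the sketch below uses the widely reproduced "anonymous Gregorian algorithm" (often attributed to Meeus, Jones and Butcher), a modern arithmetic restatement of the Gregorian Easter rule rather than Lilius's original tabular method. The function name is illustrative only.

    def gregorian_easter(year):
        """Month and day of Easter Sunday in the Gregorian calendar,
        computed with the anonymous Gregorian (Meeus/Jones/Butcher) algorithm."""
        a = year % 19                        # position in the 19-year Metonic cycle
        b, c = divmod(year, 100)             # century and year within the century
        d, e = divmod(b, 4)
        f = (b + 8) // 25                    # centurial solar and lunar corrections
        g = (b - f + 1) // 3
        h = (19 * a + b - d - g + 15) % 30   # epact-related term
        i, k = divmod(c, 4)
        l = (32 + 2 * e + 2 * i - h - k) % 7 # offset to the following Sunday
        m = (a + 11 * h + 22 * l) // 451     # keeps Easter within 22 March - 25 April
        month, day = divmod(h + l - 7 * m + 114, 31)
        return month, day + 1

    assert gregorian_easter(1583) == (4, 10)   # 10 April 1583, the first Easter of the reformed calendar
    assert gregorian_easter(2000) == (4, 23)   # 23 April 2000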
When the new calendar was put in use, the error accumulated in the 13 centuries since the Council of Nicaea was corrected by a deletion of 10 days. The Julian calendar day Thursday, 4 October 1582 was followed by the first day of the Gregorian calendar, Friday, 15 October 1582 (the cycle of weekdays was not affected).
Adoption
Although Gregory's reform was enacted in the most solemn of forms available to the Church, the bull had no authority beyond the Catholic Church and the Papal States. The changes that he was proposing were changes to the civil calendar, over which he had no authority. They required adoption by the civil authorities in each country to have legal effect.
The bull Inter gravissimas became the law of the Catholic Church in 1582, but it was not recognised by Protestant Churches, Orthodox Churches, and a few others. Consequently, the days on which Easter and related holidays were celebrated by different Christian Churches again diverged.
A month after having decreed the reform, the pope with a brief of 3 April 1582 granted to Antonio Lilio, the brother of Luigi Lilio, the exclusive right to publish the calendar for a period of ten years. The Lunario Novo secondo la nuova riforma printed by Vincenzo Accolti, one of the first calendars printed in Rome after the reform, notes at the bottom that it was signed with papal authorization and by Lilio (Con licentia delli Superiori... et permissu Ant(onii) Lilij). The papal brief was later revoked, on 20 September 1582, because Antonio Lilio proved unable to keep up with the demand for copies.Mezzi, E., and Vizza, F., Luigi Lilio Medico Astronomo e Matematico di Cirò, Laruffa Editore, Reggio Calabria, 2010, p. 14; p. 52, citing as primary references: Biblioteca Nazionale Centrale die Firenze, Magl. 5.10.5/a, ASV A.A., Arm. I‑XVII, 5506, f. 362r.
On 29 September 1582, Philip II of Spain decreed the change from the Julian to the Gregorian calendar. This affected much of Roman Catholic Europe, as Philip was at the time ruler over Spain and Portugal as well as much of Italy. In these territories, as well as in the Polish–Lithuanian Commonwealth (ruled by Anna Jagiellon) and in the Papal States, the new calendar was implemented on the date specified by the bull, with Julian Thursday, 4 October 1582, being followed by Gregorian Friday, 15 October 1582. The Spanish and Portuguese colonies followed somewhat later de facto because of delay in communication."Pragmatica" on the Ten Days of the Year World Digital Library, the first known South American imprint, produced in 1584 by Antonio Ricardo, of a four-page edict issued by King Philip II of Spain in 1582, decreeing the change from the Julian to the Gregorian calendar.
Many Protestant countries initially objected to adopting a Catholic innovation; some Protestants feared the new calendar was part of a plot to return them to the Catholic fold.
Britain and the British Empire (including the eastern part of what is now the United States) adopted the Gregorian calendar in 1752, followed by Sweden in 1753.
Prior to 1917, Turkey used the lunar Islamic calendar with the Hegira era for general purposes and the Julian calendar for fiscal purposes. The start of the fiscal year was eventually fixed at 1 March and the year number was roughly equivalent to the Hegira year (see Rumi calendar). As the solar year is longer than the lunar year this originally entailed the use of "escape years" every so often when the number of the fiscal year would jump. From 1 March 1917 the fiscal year became Gregorian, rather than Julian. On 1 January 1926 the use of the Gregorian calendar was extended to include use for general purposes and the number of the year became the same as in other countries.
Adoption of the Gregorian calendar:
 1582: Spain, Portugal, France, Italy, Catholic Low Countries, Luxemburg, and colonies
 1584: Kingdom of Bohemia
 1610: Prussia
 1648: Alsace
 1682: Strasbourg
 1700: 'Germany', Swiss Cantons, Protestant Low Countries, Norway, Denmark
 1752: Great Britain and colonies
 1753: Sweden and Finland
 1873: Japan
 1875: Egypt
 1896: Korea
 1912: China, Albania
 1915: Latvia, Lithuania
 1916: Bulgaria
 1918: USSR, Estonia
 1919: Romania, Yugoslavia
 1923: Greece
 1926: Turkey
Difference between Gregorian and Julian calendar dates
Since the introduction of the Gregorian calendar, the difference between Gregorian and Julian calendar dates has increased by three days every four centuries (all date ranges are inclusive). A more extensive list is available at Conversion between Julian and Gregorian calendars.
 Gregorian range                          Julian range                             Difference
 15 October 1582 to 28 February 1700      5 October 1582 to 18 February 1700       10 days
 1 March 1700 to 28 February 1800         19 February 1700 to 17 February 1800     11 days
 1 March 1800 to 28 February 1900         18 February 1800 to 16 February 1900     12 days
 1 March 1900 to 28 February 2100         17 February 1900 to 15 February 2100     13 days
 1 March 2100 to 28 February 2200         16 February 2100 to 14 February 2200     14 days
This section always places the intercalary day on 29 February even though it was always obtained by doubling 24 February (the bissextum (twice sixth) or bissextile day) until the late Middle Ages. The Gregorian calendar is proleptic before 1582 (assumed to exist before 1582).
The following equation gives the number of days (actually, dates) that the Gregorian calendar is ahead of the Julian calendar, called the secular difference between the two calendars. A negative difference means the Julian calendar is ahead of the Gregorian calendar.Blackburn & Holford-Strevens (1999), p. 788.
D = ⌊Y/100⌋ − ⌊Y/400⌋ − 2
where D is the secular difference and Y is the year using astronomical year numbering, that is, for a year X BC use Y = −(X − 1). ⌊x⌋ means that if the result of the division is not an integer it is rounded down to the nearest integer. Thus during the 1900s, ⌊1900/400⌋ = 4, while during the −500s, ⌊−500/400⌋ = −2.
The general rule, in years which are leap years in the Julian calendar but not the Gregorian, is as follows:
Up to 28 February in the calendar you are converting from, add one day less or subtract one day more than the calculated value. Remember to give February the appropriate number of days for the calendar you are converting into. When subtracting days to move from Julian to Gregorian, be careful, when calculating the Gregorian equivalent of 29 February (Julian), to remember that 29 February is discounted. Thus if the calculated value is −4, the Gregorian equivalent of this date is 24 February.James Evans, The history and practice of ancient astronomy (Oxford: Oxford University Press, 1998) 169. ISBN 0-19-509539-1.Explanatory Supplement to The Astronomical Ephemeris and The American Ephemeris and Nautical Almanac (London: Her Majesty's Stationery Office, 1961) 417.
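As a worked illustration of the secular-difference equation above (a sketch only; the function name is not from the source), Python's floor division // matches the rounding-down described:

    from datetime import date, timedelta

    def secular_difference(year):
        """Days by which the Gregorian calendar is ahead of the Julian calendar.
        Valid for dates from 1 March of 'year'; for Gregorian dates in January or
        February, pass year - 1. 'year' uses astronomical year numbering."""
        return year // 100 - year // 400 - 2   # // is floor division, as required

    assert secular_difference(1582) == 10      # 15 October 1582 (Gregorian) = 5 October 1582 (Julian)
    assert secular_difference(1900) == 13
    assert secular_difference(-500) == -5      # negative: the Julian calendar is ahead

    # Example: the British changeover of 1752. Shifting the (proleptic Gregorian) date
    # by the difference yields the month and day of the corresponding Julian date.
    assert date(1752, 9, 14) - timedelta(days=secular_difference(1752)) == date(1752, 9, 3)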
Beginning of the year
Country; start of numbered year on 1 January; adoption of the Gregorian calendar:
 Denmark: gradual change from the 13th to 16th centuries (Herluf Nielsen: Kronologi, 2nd ed., Dansk Historisk Fællesforening, Copenhagen 1967, pp. 48–50); 1700
 Venice: 1522; 1582
 Holy Roman Empire (Catholic states): 1544; 1583
 Spain, Poland, Portugal: 1556; 1582
 Holy Roman Empire (Protestant states): 1559; 1700
 Sweden: 1559; 1753
 France: 1564 (Le calendrier grégorien en France); 1582
 Southern Netherlands: 1576 (per decree of 16 June 1575; Hermann Grotefend, "Osteranfang" (Easter beginning), Zeitrechnung des Deutschen Mittelalters und der Neuzeit (Chronology of the German Middle Ages and modern times), 1891–1898); 1582
 Lorraine: 1579; 1682
 Dutch Republic: 1583; 1582
 Scotland: 1600 (Blackburn & Holford-Strevens (1999), p. 784; John James Bond, Handy-book of rules and tables for verifying dates with the Christian era, Scottish decree on pp. xvii–xviii); 1752
 Russia: 1700 (Roscoe Lamont, The reform of the Julian calendar, Popular Astronomy 28 (1920) 18–32; decree of Peter the Great is on pp. 23–24); 1918
 Tuscany: 1721; 1750
 Great Britain and the British Empire except Scotland: 1752; 1752
The year used in dates during the Roman Republic and the Roman Empire was the consular year, which began on the day when consuls first entered office—probably 1 May before 222 BC, 15 March from 222 BC and 1 January from 153 BC. The Julian calendar, which began in 45 BC, continued to use 1 January as the first day of the new year. Even though the year used for dates changed, the civil year always displayed its months in the order January to December from the Roman Republican period until the present.
During the Middle Ages, under the influence of the Catholic Church, many Western European countries moved the start of the year to one of several important Christian festivals—25 December (supposed Nativity of Jesus), 25 March (Annunciation), or Easter (France),Mike Spathaky Old Style and New Style Dates and the change to the Gregorian Calendar: A summary for genealogists while the Byzantine Empire began its year on 1 September and Russia did so on 1 March until 1492 when the new year was moved to 1 September.S. I. Seleschnikow: Wieviel Monde hat ein Jahr? (Aulis-Verlag, Leipzig/Jena/Berlin 1981, p. 149), which is a German translation of С. И. Селешников: История календаря и хронология (Издательство «Наука», Moscow 1977). The relevant chapter is available online here: История календаря в России и в СССР (Calendar history in Russia and the USSR). Anno Mundi 7000 lasted from 1 September 1491 to 31 August 1492.
In common usage, 1 January was regarded as New Year's Day and celebrated as such,Tuesday 31 December 1661, The Diary of Samuel Pepys "I sat down to end my journell for this year, ..." but from the 12th century until 1751 the legal year in England began on 25 March (Lady Day).Nørby, Toke. The Perpetual Calendar: What about England Version 29 February 2000 So, for example, the Parliamentary record lists the execution of Charles I on 30 January as occurring in 1648 (as the year did not end until 24 March), although modern histories adjust the start of the year to 1 January and record the execution as occurring in 1649.
Most Western European countries changed the start of the year to 1 January before they adopted the Gregorian calendar. For example, Scotland changed the start of the Scottish New Year to 1 January in 1600 (this means that 1599 was a short year). England, Ireland and the British colonies changed the start of the year to 1 January in 1752 (so 1751 was a short year with only 282 days) though in England the start of the tax year remained at 25 March (O.S.), 5 April (N.S.) till 1800, when it moved to 6 April. Later in 1752 in September the Gregorian calendar was introduced throughout Britain and the British colonies (see the section Adoption). These two reforms were implemented by the Calendar (New Style) Act 1750.Nørby, Toke. The Perpetual Calendar
In some countries, an official decree or law specified that the start of the year should be 1 January. For such countries a specific year when a 1 January-year became the norm can be identified. In other countries the customs varied, and the start of the year moved back and forth as fashion and influence from other countries dictated various customs.
Neither the papal bull nor its attached canons explicitly fix such a date, though it is implied by two tables of saint's days, one labelled 1582 which ends on 31 December, and another for any full year that begins on 1 January. It also specifies its epact relative to 1 January, in contrast with the Julian calendar, which specified it relative to 22 March. The old date was derived from the Greek system: the earlier Supputatio Romana specified it relative to 1 January.
Dual dating
During the period between 1582, when the first countries adopted the Gregorian calendar, and 1923, when the last European country adopted it, it was often necessary to indicate the date of some event in both the Julian calendar and in the Gregorian calendar, for example, "10/21 February 1750/51", where the dual year accounts for some countries already beginning their numbered year on 1 January while others were still using some other date. Even before 1582, the year sometimes had to be double dated because of the different beginnings of the year in various countries. Woolley, writing in his biography of John Dee (1527–1608/9), notes that immediately after 1582 English letter writers "customarily" used "two dates" on their letters, one OS and one NS.Benjamin Woolley, The Queen's Conjurer: The science and magic of Dr. John Dee, adviser to Queen Elizabeth I (New York: Henry Holt, 2001) p. 173
Old Style and New Style dates
"Old Style" (OS) and "New Style" (NS) are sometimes added to dates to identify which calendar reference system is used for the date given. In Britain and its Colonies, where the Calendar Act of 1750 altered the start of the year and also aligned the British calendar with the Gregorian calendar, there is some confusion as to what these terms mean. They can indicate that the start of the Julian year has been adjusted to start on 1 January (NS) even though contemporary documents use a different start of year (OS); or to indicate that a date conforms to the Julian calendar (OS), formerly in use in many countries, rather than the Gregorian calendar (NS).Death warrant of Charles I web page of the UK National Archives. A demonstration of New Style meaning Julian calendar with a start of year adjustment.Spathaky, Mike Old Style New Style dates and the change to the Gregorian calendar. "increasingly parish registers, in addition to a new year heading after 24th March showing, for example '1733', had another heading at the end of the following December indicating '1733/4'. This showed where the New Style 1734 started even though the Old Style 1733 continued until 24th March. ... We as historians have no excuse for creating ambiguity and must keep to the notation described above in one of its forms. It is no good writing simply 20th January 1745, for a reader is left wondering whether we have used the Old or the New Style reckoning. The date should either be written 20th January 1745 OS (if indeed it was Old Style) or as 20th January 1745/6. The hyphen (1745-6) is best avoided as it can be interpreted as indicating a period of time."The October (November) Revolution Britannica encyclopaedia, A demonstration of New Style meaning the Gregorian calendar.Stockton, J.R. Date Miscellany I: The Old and New Styles "The terms 'Old Style' and 'New Style' are now commonly used for both the 'Start of Year' and 'Leap Year' [(Gregorian calendar)] changes (England & Wales: both in 1752; Scotland: 1600, 1752). I believe that, properly and historically, the 'Styles' really refer only to the 'Start of Year' change (from March 25th to January 1st); and that the 'Leap Year' change should be described as the change from Julian to Gregorian."
Proleptic Gregorian calendar
Extending the Gregorian calendar backwards to dates preceding its official introduction produces a proleptic calendar, which should be used with some caution. For ordinary purposes, the dates of events occurring prior to 15 October 1582 are generally shown as they appeared in the Julian calendar, with the year starting on 1 January, and no conversion to their Gregorian equivalents. For example, the Battle of Agincourt is universally considered to have been fought on 25 October 1415 which is Saint Crispin's Day.
Usually, the mapping of new dates onto old dates with a start of year adjustment works well with little confusion for events that happened before the introduction of the Gregorian calendar. But for the period between the first introduction of the Gregorian calendar on 15 October 1582 and its introduction in Britain on 14 September 1752, there can be considerable confusion between events in continental western Europe and in British domains in English language histories.
Events in continental western Europe are usually reported in English language histories as happening under the Gregorian calendar. For example, the Battle of Blenheim is always given as 13 August 1704. Confusion occurs when an event affects both. For example, William III of England arrived at Brixham in England on 5 November 1688 (Julian calendar), after setting sail from the Netherlands on 11 November 1688 (Gregorian calendar).
Shakespeare and Cervantes seemingly died on exactly the same date (23 April 1616), but Cervantes predeceased Shakespeare by ten days in real time (as Spain used the Gregorian calendar, but Britain used the Julian calendar). This coincidence encouraged UNESCO to make 23 April the World Book and Copyright Day.
Astronomers avoid this ambiguity by the use of the Julian day number.
For dates before the year 1, unlike the proleptic Gregorian calendar used in the international standard ISO 8601, the traditional proleptic Gregorian calendar (like the Julian calendar) does not have a year 0 and instead uses the ordinal numbers 1, 2, … both for years AD and BC. Thus the traditional time line is 2 BC, 1 BC, AD 1, and AD 2. ISO 8601 uses astronomical year numbering which includes a year 0 and negative numbers before it. Thus the ISO 8601 time line is −0001, 0000, 0001, and 0002.
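A minimal sketch of the mapping between the two numbering schemes described above (the function name is illustrative only):

    def astronomical_year(bc_year):
        """Astronomical year number for a traditional 'n BC' year: 1 BC -> 0, 2 BC -> -1."""
        return 1 - bc_year

    assert [astronomical_year(n) for n in (2, 1)] == [-1, 0]
    # ISO 8601 writes signed four-digit years, so 2 BC appears as -0001:
    assert format(astronomical_year(2), "05d") == "-0001"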
Months of the year
English speakers sometimes remember the number of days in each month by memorising a traditional mnemonic verse:
Thirty days hath September,
April, June, and November.
All the rest have thirty-one,
Excepting February alone,
Which hath but twenty-eight days clear,
And twenty-nine in each leap year.
For variations and alternate endings, see Thirty days hath September.
thumb|The knuckle mnemonic.
A language-independent alternative used in many countries is to hold up one's two fists with the index knuckle of the left hand against the index knuckle of the right hand. Then, starting with January from the little knuckle of the left hand, count knuckle, space, knuckle, space through the months. A knuckle represents a month of 31 days, and a space represents a short month (a 28- or 29-day February or any 30-day month). The junction between the hands is not counted, so the two index knuckles represent July and August.
This method also works by starting the sequence on the right hand's little knuckle, then continuing towards the left. It can also be done using just one hand: after counting the fourth knuckle as July, start again counting the first knuckle as August. A similar mnemonic can be found on a piano keyboard: starting on the key F for January, moving up the keyboard in semitones, the black notes give the short months, the white notes the long ones.
The origins of English naming used by the Gregorian calendar:
January: Janus (Roman god of gates, doorways, beginnings and endings)
February: Februus (Etruscan god of death), or Februarius (mensis), Latin for "month of purification (rituals)"; it is said to be a Sabine word, and this was the last month of the ancient pre-450 BC Roman calendar. It is related to fever.Adriana Rosado-Bonewitz, "Whats in a word?" (pdf, 1.3MB), Intercambios: Quarterly Newsletter of the Spanish Language Division of the American Translators, 9(1) (March 2005): 14–15, (in Spanish) Anatoly Liberman, "On A Self-Congratulatory Note, Or, All The Year Round: The Names of The Months" (filed in Oxford Etymologist, 7 March 2007)Neuru(1996)
March: Mars (Roman god of war)
April: The Romans thought that the name Aprilis derived from aperio, aperire, apertus, a verb meaning "to open". Varro and Cincius both reject the connection of the name to Aphrodite, and the common Roman derivation from aperio may be the correct one.Scullard, Festivals and Ceremonies of the Roman Republic, p. 96; Forsythe, Time in Roman Religion, p. 10.
May: Maia Maiestas (Roman goddess of springtime, warmth, and increase)
June: Juno (Roman goddess, wife of Jupiter)
July: Julius Caesar (Roman dictator) (month was formerly named Quintilis, the fifth month of the calendar of Romulus)
August: Augustus (first Roman emperor) (month was formerly named Sextilis, the sixth month of Romulus)
September: septem (Latin for seven, the seventh month of Romulus)
October: octo (Latin for eight, the eighth month of Romulus)
November: novem (Latin for nine, the ninth month of Romulus)
December: decem (Latin for ten, the tenth month of Romulus)
Week
In conjunction with the system of months there is a system of weeks. A physical or electronic calendar provides conversion from a given date to the weekday, and shows multiple dates for a given weekday and month. Calculating the day of the week is not very simple, because of the irregularities in the Gregorian system. When the Gregorian calendar was adopted by each country, the weekly cycle continued uninterrupted. For example, in the case of the few countries that adopted the reformed calendar on the date proposed by Gregory XIII for the calendar's adoption, Friday, 15 October 1582, the preceding date was Thursday, 4 October 1582 (Julian calendar).
Opinions vary about the numbering of the days of the week. ISO 8601, in common use worldwide, starts with Monday=1; printed monthly calendar grids often list Mondays in the first (left) column of dates and Sundays in the last. Software often starts with Sunday=0, which places Sundays in the left column of a monthly calendar page.
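For illustration only, the weekday of any Gregorian calendar date can be computed arithmetically, for example with Zeller's congruence; the sketch below returns the ISO 8601 numbering (Monday = 1) discussed above. It is a generic textbook formula, not part of the calendar's definition, and the function name is an assumption of this sketch.

    def weekday_iso(year, month, day):
        """ISO 8601 weekday number (Monday = 1 ... Sunday = 7) of a Gregorian date,
        via Zeller's congruence."""
        if month < 3:              # Zeller counts January and February
            month += 12            # as months 13 and 14 of the previous year
            year -= 1
        K, J = year % 100, year // 100
        h = (day + (13 * (month + 1)) // 5 + K + K // 4 + J // 4 + 5 * J) % 7
        return (h + 5) % 7 + 1     # convert from h (0 = Saturday) to ISO numbering

    assert weekday_iso(1582, 10, 15) == 5   # Friday, 15 October 1582
    assert weekday_iso(2000, 1, 1) == 6     # Saturday, 1 January 2000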
Accuracy
The Gregorian calendar improves the approximation made by the Julian calendar by skipping three Julian leap days in every 400 years, giving an average year of 365.2425 mean solar days long.Seidelmann (1992), pp. 580–581. This approximation has an error of about one day per 3,030 yearsUsing the value from Richards (2013, p. 587) for the tropical year in mean solar days, the calculation is 1/(365.2425 − 365.24217) ≈ 3,030 years. with respect to the current value of the mean tropical year. However, because of the precession of the equinoxes, which is not constant, and the movement of the perihelion (which affects the Earth's orbital speed) the error with respect to the astronomical vernal equinox is variable; using the average interval between vernal equinoxes near 2000 of 365.24237 daysMeeus and Savoie (1992), p. 42 implies an error closer to 1 day every 7,700 years. By any criterion, the Gregorian calendar is substantially more accurate than the 1 day in 128 years error of the Julian calendar (average year 365.25 days).
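The error rates quoted above follow directly from the mean year lengths cited in this section; the following is a small verification sketch (the tropical-year figure is the one implied by the numbers in this section, not an independently sourced value):

    # Mean year lengths in days, as cited above
    julian_year = 365.25
    gregorian_year = 365.2425
    mean_tropical_year = 365.24217      # value attributed here to Richards (2013)
    vernal_equinox_year = 365.24237     # mean interval between vernal equinoxes near 2000

    # Approximate number of years needed to accumulate one full day of error
    assert round(1 / (julian_year - mean_tropical_year)) == 128       # 1 day in 128 years (Julian)
    assert round(1 / (gregorian_year - mean_tropical_year)) == 3030   # about 1 day in 3,030 years (Gregorian)
    assert round(1 / (gregorian_year - vernal_equinox_year)) == 7692  # "closer to 1 day every 7,700 years"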
In the 19th century, Sir John Herschel proposed a modification to the Gregorian calendar with 969 leap days every 4000 years, instead of 970 leap days that the Gregorian calendar would insert over the same period.John Herschel, Outlines of Astronomy, 1849, p. 629. This would reduce the average year to 365.24225 days. Herschel's proposal would make the year 4000, and multiples thereof, common instead of leap. While this modification has often been proposed since, it has never been officially adopted.
On time scales of thousands of years, the Gregorian calendar falls behind the astronomical seasons because the slowing down of the Earth's rotation makes each day slightly longer over time (see tidal acceleration and leap second) while the year maintains a more uniform duration.
Calendar seasonal error
thumb|800px|Gregorian calendar seasons difference
This image shows the difference between the Gregorian calendar and the astronomical seasons.
The y-axis is the date in June and the x-axis is Gregorian calendar years.
Each point is the date and time of the June solstice in that particular year. The error shifts by about a quarter of a day per year. Centurial years are ordinary years, unless they are divisible by 400, in which case they are leap years. This causes a correction in the years 1700, 1800, 1900, 2100, 2200, and 2300.
For instance, these corrections cause 23 December 1903 to be the latest December solstice, and 20 December 2096 to be the earliest solstice—2.25 days of variation compared with the seasonal event.
Proposed reforms
The following are proposed reforms of the Gregorian calendar:
Holocene calendar
International Fixed Calendar (also called the International Perpetual calendar)
World Calendar
World Season Calendar
Leap week calendars
Pax Calendar
Symmetry454
Hanke–Henry Permanent Calendar
See also
Calendar reform
Conversion between Julian and Gregorian calendars
Common Era
Computus — Gregorian lunar calendar
Doomsday rule
Dual dating
Inter gravissimas in English — Wikisource
Julian day calculation
History of calendars
List of adoption dates of the Gregorian calendar per country
List of calendars
Old Style and New Style dates
Old Calendarists
Greek Old Calendarists
Tropical year
Revised Julian calendar (Milanković) — used in Eastern Orthodoxy
Precursors of the Gregorian reform
Johannes de Sacrobosco, De Anni Ratione ("On reckoning the years"), c. 1235
Roger Bacon, Opus Majus ("Greater Work"), c. 1267
Notes
Footnotes
References
Blackburn, B. & Holford-Strevens, L. (1999). The Oxford Companion to the Year. Oxford University Press. ISBN 0-19-214231-3.
Blackburn, B. & Holford-Strevens, L. (2003). The Oxford Companion to the Year: An exploration of calendar customs and time-reckoning, Oxford University Press.
Coyne, G. V., Hoskin, M. A., Pedersen, O. (Eds.) (1983). Gregorian Reform of the Calendar: Proceedings of the Vatican Conference to Commemorate its 400th Anniversary, 1582–1982. Vatican City: Pontifical Academy of Sciences, Vatican Observatory (Pontificia Academia Scientarum, Specola Vaticana).
Borkowski, K. M., (1991). "The tropical calendar and solar year", J. Royal Astronomical Soc. of Canada 85(3): 121–130.
Barsoum, Ignatius A. (2003). The Scattered Pearls. Piscataway: Georgias Press.
Duncan, D. E. (1999). Calendar: Humanity's Epic Struggle To Determine A True And Accurate Year. HarperCollins. ISBN 9780380793242.
Gregory XIII. (2002 [1582]). Inter Gravissimas (W. Spenser & R. T. Crowley, Trans.). International Organization for Standardization.
Meeus, J. & Savoie, D. (1992). The history of the tropical year. Journal of the British Astronomical Association, 102(1): 40–42.
Morrison, L. V. & Stephenson, F. R. (2004). Historical values of the Earth's clock error ΔT and the calculation of eclipses. Journal for the History of Astronomy Vol. 35, Part 3, No. 120, pp. 327–336.
Moyer, Gordon (May 1982). "The Gregorian Calendar". Scientific American, pp. 144–152.
Moyer, Gordon (1983). "Aloisius Lilius and the Compendium Novae Rationis Restituendi Kalendarium". In Coyne, Hoskin, Pedersen (1983), pp. 171–188.
Pedersen, O. (1983). "The Ecclesiastical Calendar and the Life of the Church". In Coyne, Hoskin, Pedersen (eds), Gregorian Reform of the Calendar: Proceedings of the Vatican Conference to Commemorate its 400th Anniversary. Vatican City: Pontifical Academy of Sciences, Specolo Vaticano, pp. 17–74.
Richards, E. G. (1998). Mapping Time: The Calendar and its History. Oxford U. Press.
Richards, E. G. (2013). "Calendars". In S. E. Urban and P. K. Seidelmann (eds.), Explanatory Supplement to the Astronomical Almanac (pp. 585–624). Mill Valley CA: University Science Books. ISBN 978-1-891389-85-6
Seidelmann, P. K. (Ed.) (1992). Explanatory Supplement to the Astronomical Almanac. Sausalito, CA: University Science Books.
Swerdlow, N. M. (1986). The Length of the Year in the Original Proposal for the Gregorian Calendar. Journal for the History of Astronomy Vol. 17, No. 49, pp. 109–118.
Walker, G. W. Easter Intervals. Popular Astronomy June 1945, Vol. 53, pp. 162–178, 218–232.
Ziggelaar, A. (1983). "The Papal Bull of 1582 Promulgating a Reform of the Calendar". In Coyne, Hoskin, Pedersen (eds), Gregorian Reform of the Calendar: Proceedings of the Vatican Conference to Commemorate its 400th Anniversary. Vatican City: Pontifical Academy of Sciences, Specolo Vaticano, pp. 201–239.
External links
Calendar Converter
Inter Gravissimas (Latin and French plus English)
History of Gregorian Calendar
The Perpetual Calendar Gregorian Calendar adoption dates for many countries.
World records for mentally calculating the day of the week in the Gregorian Calendar
The Calendar FAQ – Frequently Asked Questions about Calendars
Today's date (Gregorian) in over 400 more-or-less obscure foreign languages
Category:1582 establishments
Category:Articles which contain graphical timelines
Category:1582 establishments in Europe | 23,306,251 | 2017-01 |
Serbo-Croatian | Serbo-Croatian , also called Serbo-Croat , Serbo-Croat-Bosnian (SCB), Bosnian-Croatian-Serbian (BCS), or Bosnian-Croatian-Montenegrin-Serbian (BCMS), is a South Slavic language and the primary language of Serbia, Croatia, Bosnia and Herzegovina, and Montenegro. It is a pluricentric language with four mutually intelligible standard varieties.
South Slavic dialects historically formed a continuum. The turbulent history of the area, particularly due to expansion of the Ottoman Empire, resulted in a patchwork of dialectal and religious differences. Due to population migrations, Shtokavian became the most widespread in the western Balkans, intruding westwards into the area previously occupied by Chakavian and Kajkavian (which further blend into Slovenian in the northwest). Bosniaks, Croats and Serbs differ in religion and were historically often part of different cultural circles, although a large part of the nations have lived side by side under foreign overlords. During that period, the language was referred to under a variety of names, such as "Slavic", "Illyrian", or according to region, "Bosnian", "Serbian" and "Croatian", the latter often in combination with "Slavonian" or "Dalmatian".
Serbo-Croatian was standardized in the mid-19th-century Vienna Literary Agreement by Croatian and Serbian writers and philologists, decades before a Yugoslav state was established. From the very beginning, there were slightly different literary Serbian and Croatian standards, although both were based on the same Shtokavian subdialect, Eastern Herzegovinian. In the 20th century, Serbo-Croatian served as the official language of the Kingdom of Yugoslavia (when it was called "Serbo-Croato-Slovenian"), and later as one of the official languages of the Socialist Federal Republic of Yugoslavia. The breakup of Yugoslavia affected language attitudes, so that social conceptions of the language separated on ethnic and political lines. Since the breakup of Yugoslavia, Bosnian has likewise been established as an official standard in Bosnia and Herzegovina, and there is an ongoing movement to codify a separate Montenegrin standard. Serbo-Croatian thus generally goes by the ethnic names Serbian, Croatian, Bosnian, and sometimes Montenegrin and Bunjevac."The same language [Croatian] is referred to by different names, Serbian (srpski), Serbo-Croat (in Croatia: hrvatsko-srpski), Bosnian (bosanski), based on political and ethnic grounds. […] the names Serbian, Croatian, and Bosnian are politically determined and refer to the same language with possible slight variations." ()
Like other South Slavic languages, Serbo-Croatian has a simple phonology, with the common five-vowel system and twenty-five consonants. Its grammar evolved from Common Slavic, with complex inflection, preserving seven grammatical cases in nouns, pronouns, and adjectives. Verbs exhibit imperfective or perfective aspect, with a moderately complex tense system. Serbo-Croatian is a pro-drop language with flexible word order, subject–verb–object being the default. It can be written in Serbian Cyrillic or Gaj's Latin alphabet, whose thirty letters mutually map one-to-one, and the orthography is highly phonemic in all standards.
Name
Throughout the history of the South Slavs, the vernacular, literary, and written languages (e.g. Chakavian, Kajkavian, Shtokavian) of the various regions and ethnicities developed and diverged independently. Prior to the 19th century, they were collectively called "Illyric", "Slavic", "Slavonian", "Bosnian", "Dalmatian", "Serbian" or "Croatian". As such, the term Serbo-Croatian was first used by Jacob Grimm in 1824, popularized by the Vienna philologist Jernej Kopitar in the following decades, and accepted by Croatian Zagreb grammarians in 1854 and 1859. At that time, Serb and Croat lands were still part of the Ottoman and Austrian Empires. Officially, the language was called variously Serbo-Croat, Croato-Serbian, Serbian and Croatian, Croatian and Serbian, Serbian or Croatian, Croatian or Serbian. Unofficially, Serbs and Croats typically called the language "Serbian" or "Croatian", respectively, without implying a distinction between the two, and again in independent Bosnia and Herzegovina, "Bosnian", "Croatian", and "Serbian" were considered to be three names of a single official language."In 1993 the authorities in Sarajevo adopted a new language law (Službeni list Republike Bosne i Hercegovine, 18/93): In the Republic of Bosnia and Herzegovina, the Ijekavian standard literary language of the three constitutive nations is officially used, designated by one of the three terms: Bosnian, Serbian, Croatian." () Croatian linguist Dalibor Brozović advocated the term Serbo-Croatian as late as 1988, claiming that in an analogy with Indo-European, Serbo-Croatian does not only name the two components of the same language, but simply charts the limits of the region in which it is spoken and includes everything between the limits (‘Bosnian’ and ‘Montenegrin’). Today, use of the term "Serbo-Croatian" is controversial due to the prejudice that nation and language must match. It is still used for lack of a succinct alternative, though alternative names have been used, such as Bosnian/Croatian/Serbian (BCS),Tomasz Kamusella. The Politics of Language and Nationalism in Modern Central Europe. Palgrave Macmillan, 2008. pp. 228, 297. which is often seen in political contexts such as the International Criminal Tribunal for the former Yugoslavia.
History
Early development
thumb|Humac tablet, ~1000 AD
thumb|Hval's Codex, 1404
Old Church Slavonic was adopted as the language of the liturgy. This language was gradually adapted to non-liturgical purposes and became known as the Croatian version of Old Slavonic. The two variants of the language, liturgical and non-liturgical, continued to be a part of the Glagolitic service as late as the middle of the 19th century. The earliest known Croatian Church Slavonic Glagolitic manuscripts are the Glagolita Clozianus and the Vienna Folia from the 11th century.
The beginning of written Serbo-Croatian can be traced from the 10th century and on when Serbo-Croatian medieval texts were written in five scripts: Latin, Glagolitic, Early Cyrillic, Bosnian Cyrillic (bosančica/bosanica), and Arebica, the last principally by Bosniak nobility. Serbo-Croatian competed with the more established literary languages of Latin and Old Slavonic in the west and Persian and Arabic in the east.
Old Slavonic developed into the Serbo-Croatian variant of Church Slavonic between the 12th and 16th centuries.
Among the earliest attestations of Serbo-Croatian are the Humac tablet, dating from the 10th or 11th century, written in Bosnian Cyrillic and Glagolitic; the Plomin tablet, dating from the same era, written in Glagolitic; the Valun tablet, dated to the 11th century, written in Glagolitic and Latin; and the Inscription of Župa Dubrovačka, a Glagolitic tablet dated to the 11th century.
The Baška tablet from the late 11th century was written in Glagolitic. It is a large stone tablet found in the small Church of St. Lucy, Jurandvor on the Croatian island of Krk that contains text written mostly in Chakavian in the Croatian angular Glagolitic script. It is also important in the history of the nation as it mentions Zvonimir, the king of Croatia at the time.
The Charter of Ban Kulin of 1189, written by Ban Kulin of Bosnia, was an early Shtokavian text, written in Bosnian Cyrillic.
The luxurious and ornate representative texts of Serbo-Croatian Church Slavonic belong to the later era, when they coexisted with the Serbo-Croatian vernacular literature. The most notable are the "Missal of Duke Novak" from the Lika region in northwestern Croatia (1368), "Evangel from Reims" (1395, named after the town of its final destination), Hrvoje's Missal from Bosnia and Split in Dalmatia (1404), and the first printed book in Serbo-Croatian, the Glagolitic Missale Romanum Glagolitice (1483).
During the 13th century Serbo-Croatian vernacular texts began to appear, the most important among them being the "Istrian land survey" of 1275 and the "Vinodol Codex" of 1288, both written in the Chakavian dialect.
The Shtokavian dialect literature, based almost exclusively on Chakavian original texts of religious provenance (missals, breviaries, prayer books) appeared almost a century later. The most important purely Shtokavian vernacular text is the Vatican Croatian Prayer Book (c. 1400).
Both the language used in legal texts and that used in Glagolitic literature gradually came under the influence of the vernacular, which considerably affected its phonological, morphological, and lexical systems. From the 14th and the 15th centuries, both secular and religious songs at church festivals were composed in the vernacular.
Writers of early Serbo-Croatian religious poetry (začinjavci) gradually introduced the vernacular into their works. These začinjavci were the forerunners of the rich literary production of the 16th-century literature, which, depending on the area, was Chakavian-, Kajkavian-, or Shtokavian-based. The language of religious poems, translations, miracle and morality plays contributed to the popular character of medieval Serbo-Croatian literature.
One of the earliest dictionaries, also in the Slavic languages as a whole, was the Bosnian–Turkish Dictionary of 1631 authored by Muhamed Hevaji Uskufi and was written in the Arebica script.
Modern standardization
left|thumb|upright|Đuro Daničić, Rječnik hrvatskoga ili srpskoga jezika (Croatian or Serbian Dictionary), 1882
thumb|upright|Gramatika bosanskoga jezika (Grammar of the Bosnian Language), 1890
In the mid-19th century, Serbian writers and linguists (led by the self-taught writer and folklorist Vuk Stefanović Karadžić) and most Croatian writers and linguists (represented by the Illyrian movement and led by Ljudevit Gaj and Đuro Daničić) proposed the use of the most widespread dialect, Shtokavian, as the base for their common standard language. Karadžić standardised the Serbian Cyrillic alphabet, and Gaj and Daničić standardised the Croatian Latin alphabet, on the basis of vernacular speech phonemes and the principle of phonological spelling. In 1850 Serbian and Croatian writers and linguists signed the Vienna Literary Agreement, declaring their intention to create a unified standard. Thus a complex bi-variant language appeared, which the Serbs officially called "Serbo-Croatian" or "Serbian or Croatian" and the Croats "Croato-Serbian" or "Croatian or Serbian". Yet, in practice, the variants of the conceived common literary language served as different literary variants, chiefly differing in lexical inventory and stylistic devices. The common phrase describing this situation was that Serbo-Croatian or "Croatian or Serbian" was a single language. During the Austro-Hungarian occupation of Bosnia and Herzegovina, the language of all three nations was called "Bosnian" until the death of administrator von Kállay in 1907, at which point the name was changed to "Serbo-Croatian".
With the unification into the Kingdom of Serbs, Croats, and Slovenes, the approach of Karadžić and the Illyrians became dominant. The official language was called "Serbo-Croato-Slovenian" (srpsko-hrvatsko-slovenački) in the 1921 constitution. In 1929, the constitution was suspended and the country was renamed the Kingdom of Yugoslavia, while the official language of Serbo-Croato-Slovenian was reinstated in the 1931 constitution.
In June 1941, the Nazi puppet Independent State of Croatia began to rid the language of "Eastern" (Serbian) words, and shut down Serbian schools.
On January 15, 1944, the Anti-Fascist Council of the People's Liberation of Yugoslavia (AVNOJ) declared Croatian, Serbian, Slovene, and Macedonian to be equal in the entire territory of Yugoslavia. In 1945 the decision to recognize Croatian and Serbian as separate languages was reversed in favor of a single Serbo-Croatian or Croato-Serbian language. In the Communist-dominated second Yugoslavia, ethnic issues eased to an extent, but the matter of language remained blurred and unresolved.
In 1954, major Serbian and Croatian writers, linguists and literary critics, backed by Matica srpska and Matica hrvatska, signed the Novi Sad Agreement, which in its first conclusion stated: "Serbs, Croats and Montenegrins share a single language with two equal variants that have developed around Zagreb (western) and Belgrade (eastern)". The agreement insisted on the equal status of the Cyrillic and Latin scripts, and of the Ekavian and Ijekavian pronunciations. It also specified that Serbo-Croatian should be the name of the language in official contexts, while in unofficial use the traditional names Serbian and Croatian were to be retained. Matica hrvatska and Matica srpska were to work together on a dictionary, and a committee of Serbian and Croatian linguists was asked to prepare a pravopis (orthography manual). During the sixties both books were published simultaneously in Ijekavian Latin in Zagreb and Ekavian Cyrillic in Novi Sad. Yet Croatian linguists claim that it was an act of unitarism. The evidence supporting this claim is patchy: Croatian linguist Stjepan Babić complained that the television transmission from Belgrade always used the Latin alphabet, which was true, but was not proof of unequal rights, rather of frequency of use and prestige. Babić further complained that the Novi Sad Dictionary (1967) listed side by side words from both the Croatian and Serbian variants wherever they differed, which one can view as proof of careful respect for both variants, and not of unitarism. Moreover, Croatian linguists criticised as unitaristic those parts of the Dictionary that had been written by Croatian linguists. And finally, Croatian linguists ignored the fact that the material for the Pravopisni rječnik came from the Croatian Philological Society. Regardless of these facts, Croatian intellectuals put forward the Declaration on the Status and Name of the Croatian Literary Language in 1967. On the occasion of the declaration's 45th anniversary, the Croatian weekly journal Forum published it again in 2012, accompanied by a critical analysis.
Western European scholars judge the Yugoslav language policy as an exemplary one: although three-quarters of the population spoke one language, no single language was official at the federal level. Official languages were declared only at the level of constituent republics and provinces, and very generously: Vojvodina had five (among them Slovak and Romanian, spoken by 0.5 per cent of the population), and Kosovo four (Albanian, Turkish, Romany and Serbo-Croatian). Newspapers, radio and television studios used sixteen languages, fourteen were used as languages of instruction in schools, and nine at universities. Only the Yugoslav Army used Serbo-Croatian as the sole language of command, with all other languages represented in the army's other activities; this, however, is not different from other armies of multilingual states, or from other specific institutions, such as international air traffic control, where English is used worldwide. All variants of Serbo-Croatian were used in state administration and republican and federal institutions. The Serbian and Croatian variants were each represented in their own grammar books, dictionaries, school textbooks and in books known as pravopis (which detail spelling rules). Serbo-Croatian was subject to a kind of soft standardisation. However, legal equality could not dampen the prestige Serbo-Croatian had: since it was the language of three quarters of the population, it functioned as an unofficial lingua franca. And within Serbo-Croatian, the Serbian variant, with twice as many speakers as the Croatian, enjoyed greater prestige, reinforced by the fact that Slovene and Macedonian speakers preferred it to the Croatian variant because their languages are also Ekavian. This is a common situation in other pluricentric languages; for example, the variants of German differ in prestige, as do the variants of Portuguese. Moreover, all languages differ in terms of prestige: "the fact is that languages (in terms of prestige, learnability etc.) are not equal, and the law cannot make them equal".
Demographics
The total number of persons who declared their native language as either 'Bosnian', 'Croatian', 'Serbian', 'Montenegrin', or 'Serbo-Croatian' in countries of the region is about 16 million.
Serbian is spoken by about 9.5 million people, mostly in Serbia (6.7m), Bosnia and Herzegovina (1.4m), and Montenegro (0.4m). Serbian minorities are found in the Republic of Macedonia and in Romania. In Serbia, there are about 760,000 second-language speakers of Serbian, including Hungarians in Vojvodina and an estimated 400,000 Roma. Familiarity of Kosovo Albanians with Serbian in Kosovo varies depending on age and education, and exact numbers are not available.
Croatian is spoken by roughly 4.8 million people, including some 575,000 in Bosnia and Herzegovina. A small Croatian minority living in Italy, known as the Molise Croats, has somewhat preserved traces of Croatian. In Croatia, 170,000 people, mostly Italians and Hungarians, use it as a second language.
Bosnian is spoken by 2.2 million people, chiefly Bosniaks, including about 220,000 in Serbia and Montenegro.
The notion of Montenegrin as a separate standard from Serbian is relatively recent. In the 2003 census, around 150,000 Montenegrins, of the country's 620,000, declared Montenegrin as their native language. That figure is likely to increase, due to the country's independence and strong institutional backing of the Montenegrin language.
Serbo-Croatian is also a second language of many Slovenians and Macedonians, especially those born during the time of Yugoslavia. According to the 2002 Census, Serbo-Croatian and its variants have the largest number of speakers of the minority languages in Slovenia.
Outside the Balkans, there are over 2 million native speakers of the language(s), especially in countries which are frequent targets of immigration, such as Australia, Austria, Brazil, Canada, Chile, Germany, Hungary, Italy, Sweden and the United States.
Grammar
thumb|Tomislav Maretić's 1899 Grammar of Croatian or Serbian.
Serbo-Croatian is a highly inflected language. Traditional grammars list seven cases for nouns and adjectives: nominative, genitive, dative, accusative, vocative, locative, and instrumental, reflecting the original seven cases of Proto-Slavic, and indeed older forms of Serbo-Croatian itself. However, in modern Shtokavian the locative has almost merged into the dative (the only difference is based on accent in some cases), and the other cases can be shown to be declining as well; namely:
For all nouns and adjectives, instrumental = dative = locative (at least orthographically) in the plural: ženama, ženama, ženama; očima, očima, očima; riječima, riječima, riječima.
There is an accentual difference between the genitive singular and genitive plural of masculine and neuter nouns, which are otherwise homonyms (seljaka, seljaka), except that on occasion an "a" (which might or might not appear in the singular) is inserted between the last letter of the root and the genitive plural ending (kapitalizma, kapitalizama).
The old instrumental ending "ju" of the feminine consonant stems and in some cases the "a" of the genitive plural of certain other sorts of feminine nouns is fast yielding to "i": noći instead of noćju, borbi instead of boraba and so forth.
Almost every Shtokavian number is indeclinable, and numbers after prepositions have not been declined for a long time.
Like most Slavic languages, Serbo-Croatian has three genders for nouns: masculine, feminine, and neuter, a distinction which is still present even in the plural (unlike Russian and, in part, the Čakavian dialect). It also has two numbers: singular and plural. However, some consider there to be three numbers (a paucal or dual as well, still preserved as a true dual in closely related Slovene), since after two (dva, dvije/dve), three (tri) and four (četiri), and all numbers ending in them (e.g. twenty-two, ninety-three, one hundred four), the genitive singular is used, while after all other numbers, five (pet) and up, the genitive plural is used. (The number one [jedan] is treated as an adjective.) Adjectives are placed in front of the noun they modify and must agree in both case and number with it.
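The agreement rule just described can be sketched in code. The following Python sketch is illustrative only; the treatment of the teens 11–14, which do not literally end in the words for two, three or four and therefore take the genitive plural, is an assumption added for completeness.
def noun_form(n):
    """Return which noun form follows the cardinal number n (illustrative sketch)."""
    if n % 10 == 1 and n % 100 != 11:
        return "nominative singular"         # jedan agrees with the noun like an adjective
    if n % 10 in (2, 3, 4) and n % 100 not in (12, 13, 14):
        return "genitive singular (paucal)"  # after dva, tri, četiri and numbers ending in them
    return "genitive plural"                 # after pet and up
for n in (1, 2, 5, 12, 22, 93, 104):
    print(n, noun_form(n))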
There are seven tenses for verbs: past, present, future, exact future, aorist, imperfect, and plusquamperfect; and three moods: indicative, imperative, and conditional. However, the latter three tenses are typically used only in Shtokavian writing, and the time sequence of the exact future is more commonly formed through an alternative construction.
In addition, like most Slavic languages, the Shtokavian verb also has one of two aspects: perfective or imperfective. Most verbs come in pairs, with the perfective verb being created out of the imperfective by adding a prefix or making a stem change. The imperfective aspect typically indicates that the action is unfinished, in progress, or repetitive, while the perfective aspect typically denotes that the action was completed, instantaneous, or of limited duration. Some Štokavian tenses (namely, the aorist and imperfect) favor a particular aspect (but they are rarer or absent in Čakavian and Kajkavian). In fact, aspects "compensate" for the relative lack of tenses, because the aspect of the verb determines whether the act is completed or in progress at the time referred to.
Phonology
Vowels
The Serbo-Croatian vowel system is simple, with only five vowels in Shtokavian. All vowels are monophthongs. The oral vowels are as follows:
Latin script | Cyrillic script | Description | English approximation
a | а | open central unrounded | father
e | е | mid front unrounded | den
i | и | close front unrounded | seek
o | о | mid back rounded | lord
u | у | close back rounded | pool
The vowels can be short or long, but their phonetic quality does not change depending on length. Within a word, vowels can be long in the stressed syllable and the syllables following it, but never in those preceding it.
Consonants
The consonant system is more complicated, and its characteristic features are series of affricate and palatal consonants. As in English, voice is phonemic, but aspiration is not.
Latin script | Cyrillic script | Description | English approximation
Trill:
r | р | alveolar trill | rolled (vibrating) r as in carramba
Approximants:
v | в | labiodental approximant | roughly between vortex and war
j | ј | palatal approximant | year
Laterals:
l | л | lateral alveolar approximant | light
lj | љ | palatal lateral approximant | roughly battalion
Nasals:
m | м | bilabial nasal | man
n | н | alveolar nasal | not
nj | њ | palatal nasal | news or American canyon
Fricatives:
f | ф | voiceless labiodental fricative | five
s | с | voiceless dental sibilant | some
z | з | voiced dental sibilant | zero
š | ш | voiceless postalveolar fricative | sharp
ž | ж | voiced postalveolar fricative | television
h | х | voiceless velar fricative | loch
Affricates:
c | ц | voiceless dental affricate | pots
dž | џ | voiced postalveolar affricate | roughly eject
č | ч | voiceless postalveolar affricate | roughly check
đ | ђ | voiced alveolo-palatal affricate | roughly Jews
ć | ћ | voiceless alveolo-palatal affricate | roughly choose
Plosives:
b | б | voiced bilabial plosive | book
p | п | voiceless bilabial plosive | top
d | д | voiced dental plosive | dog
t | т | voiceless dental plosive | it
g | г | voiced velar plosive | good
k | к | voiceless velar plosive | duck
In consonant clusters, all consonants are either voiced or voiceless: they are all voiced if the last consonant is normally voiced, and all voiceless if the last consonant is normally voiceless. This rule does not apply to approximants (a consonant cluster may contain voiced approximants and voiceless consonants), nor to foreign words (Washington would be transcribed as VašinGton), personal names, and cases in which consonants are not within one syllable.
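This regressive voicing assimilation can be illustrated with a minimal Python sketch. It is not taken from any orthographic authority; the pairing of voiced and voiceless obstruents is standard, but the handling of unpaired consonants and the example word are assumptions for illustration only.
# Illustrative sketch of regressive voicing assimilation in consonant clusters.
# Obstruent pairs are assumed as listed; sonorants and the approximant v are skipped.
VOICED_TO_VOICELESS = {"b": "p", "d": "t", "g": "k", "z": "s",
                       "ž": "š", "dž": "č", "đ": "ć"}
VOICELESS_TO_VOICED = {v: k for k, v in VOICED_TO_VOICELESS.items()}
UNPAIRED_VOICELESS = {"c", "f", "h"}  # voiceless obstruents with no voiced partner
def assimilate(cluster):
    """Assimilate a list of consonant symbols to the voicing of the last obstruent."""
    obstruents = set(VOICED_TO_VOICELESS) | set(VOICELESS_TO_VOICED) | UNPAIRED_VOICELESS
    last = next((c for c in reversed(cluster) if c in obstruents), None)
    if last is None:
        return cluster                      # no obstruent, nothing to assimilate
    voiced = last in VOICED_TO_VOICELESS
    table = VOICELESS_TO_VOICED if voiced else VOICED_TO_VOICELESS
    return [table.get(c, c) for c in cluster[:-1]] + [cluster[-1]]
# vrabac "sparrow" has the genitive vrapca: b devoices to p before the voiceless c.
print(assimilate(["b", "c"]))  # ['p', 'c']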
The consonant r can be syllabic, playing the role of the syllable nucleus in certain words (occasionally, it can even have a long accent). For example, the tongue-twister navrh brda vrba mrda involves four words with syllabic r. A similar feature exists in Czech, Slovak, and Macedonian. Very rarely, other sonorants can be syllabic, such as l (in bicikl), lj (in the surname Štarklj) and n (in the unit njutn), as well as m and nj in slang.
Pitch accent
Apart from Slovene, Serbo-Croatian is the only Slavic language with a pitch accent (simple tone) system. This feature is present in some other Indo-European languages, such as Swedish, Norwegian, and Ancient Greek. Neo-Shtokavian Serbo-Croatian, which is used as the basis for standard Bosnian, Croatian, Montenegrin, and Serbian, has four "accents", which involve either a rising or falling tone on either long or short vowels, with optional post-tonic lengths:
Serbo-Croatian accent system
Slavicist symbol | Description
e | non-tonic short vowel
ē | non-tonic long vowel
è | short vowel with rising tone
é | long vowel with rising tone
ȅ | short vowel with falling tone
ȇ | long vowel with falling tone
The tone stressed vowels can be approximated in English with set vs. setting? said in isolation for a short tonic e, or leave vs. leaving? for a long tonic i, due to the prosody of final stressed syllables in English.
General accent rules in the standard language:
Monosyllabic words may have only a falling tone (or no accent at all – enclitics);
Falling tone may occur only on the first syllable of polysyllabic words;
Accent can never occur on the last syllable of polysyllabic words.
There are no other rules for accent placement, thus the accent of every word must be learned individually; furthermore, in inflection, accent shifts are common, both in type and position (the so-called "mobile paradigms"). The second rule is not strictly obeyed, especially in borrowed words.
Comparative and historical linguistics offers some clues for memorising the accent position: if one compares many standard Serbo-Croatian words to, for example, cognate Russian words, the accent in the Serbo-Croatian word will be one syllable before the one in the Russian word, with the rising tone. Historically, the rising tone appeared when the place of the accent shifted to the preceding syllable (the so-called "Neoshtokavian retraction"), but the quality of this new accent was different: its melody still "gravitated" towards the original syllable. Most Shtokavian (Neoshtokavian) dialects underwent this shift, but Chakavian, Kajkavian and the Old Shtokavian dialects did not.
Accent diacritics are not used in the ordinary orthography, but only in the linguistic or language-learning literature (e.g. dictionaries, orthography and grammar books). However, there are very few minimal pairs where an error in accent can lead to misunderstanding.
Orthography
Serbo-Croatian orthography is almost entirely phonetic. Thus, most words should be spelled as they are pronounced. In practice, the writing system does not take into account allophones which occur as a result of interaction between words:
bit će: pronounced biće (and only written separately in Bosnian and Croatian)
od toga: pronounced otoga (in many vernaculars)
iz čega: pronounced iščega (in many vernaculars)
Also, there are some exceptions, mostly applied to foreign words and compounds, that favor morphological/etymological over phonetic spelling:
postdiplomski (postgraduate): pronounced pozdiplomski
One systemic exception is that the consonant clusters ds and dš do not change into ts and tš (although d tends to be unvoiced in normal speech in such clusters):
predstava (show)
odšteta (damages)
Only a few words are intentionally "misspelled", mostly in order to resolve ambiguity:
šeststo (six hundred): pronounced šesto (to avoid confusion with "šesto" [sixth])
prstni (adj., finger): pronounced prsni (to avoid confusion with "prsni" [adj., chest])
Writing systems
Through history, this language has been written in a number of writing systems:
Glagolitic alphabet, chiefly in Croatia.
Arabic alphabet (mostly in Bosnia).
Cyrillic script.
various modifications of the Latin and Greek alphabets.
The oldest texts, dating from the 11th century onwards, are in Glagolitic, and the oldest preserved text written completely in the Latin alphabet is "Red i zakon sestara reda Svetog Dominika", from 1345. The Arabic alphabet had been used by Bosniaks; Greek writing is out of use there, and Arabic and Glagolitic have persisted so far partly in religious liturgies.
Today, it is written in both the Latin and Cyrillic scripts. Serbian and Bosnian variants use both alphabets, while Croatian uses the Latin only.
The Serbian Cyrillic alphabet was revised by Vuk Stefanović Karadžić in the 19th century.
The Croatian Latin alphabet (Gajica) followed suit shortly afterwards, when Ljudevit Gaj defined it as standard Latin with five extra letters bearing diacritics, apparently borrowing much from Czech, but also from Polish, and inventing the unique digraphs "lj", "nj" and "dž". These digraphs are represented as "ļ", "ń" and "ǵ" respectively in the Rječnik hrvatskog ili srpskog jezika, published by the former Yugoslav Academy of Sciences and Arts in Zagreb (Gramatika hrvatskosrpskoga jezika, Group of Authors (Ivan Brabec, Mate Hraste and Sreten Živković), Zagreb, 1968). The latter digraphs, however, are unused in the literary standard of the language. All in all, this makes Serbo-Croatian the only Slavic language to officially use both the Latin and Cyrillic scripts, albeit the Latin version is more commonly used.
In both cases, spelling is phonetic and spellings in the two alphabets map to each other one-to-one:
Latin to Cyrillic
Latin:    A a | B b | C c | Č č | Ć ć | D d | Dž dž | Đ đ | E e | F f | G g | H h | I i | J j | K k
Cyrillic: А а | Б б | Ц ц | Ч ч | Ћ ћ | Д д | Џ џ | Ђ ђ | Е е | Ф ф | Г г | Х х | И и | Ј ј | К к
Latin:    L l | Lj lj | M m | N n | Nj nj | O o | P p | R r | S s | Š š | T t | U u | V v | Z z | Ž ž
Cyrillic: Л л | Љ љ | М м | Н н | Њ њ | О о | П п | Р р | С с | Ш ш | Т т | У у | В в | З з | Ж ж
Cyrillic to Latin
Cyrillic: А а | Б б | В в | Г г | Д д | Ђ ђ | Е е | Ж ж | З з | И и | Ј ј | К к | Л л | Љ љ | М м
Latin:    A a | B b | V v | G g | D d | Đ đ | E e | Ž ž | Z z | I i | J j | K k | L l | Lj lj | M m
Cyrillic: Н н | Њ њ | О о | П п | Р р | С с | Т т | Ћ ћ | У у | Ф ф | Х х | Ц ц | Ч ч | Џ џ | Ш ш
Latin:    N n | Nj nj | O o | P p | R r | S s | T t | Ć ć | U u | F f | H h | C c | Č č | Dž dž | Š š
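Because the mapping is one-to-one, transliteration can be done mechanically. The following Python sketch is illustrative only; it is not an official converter, and it ignores the rare words, noted below, in which the letters of a digraph are pronounced separately.
# Illustrative Latin-to-Cyrillic transliteration based on the mapping above.
# Digraphs lj, nj and dž must be matched before single letters.
LATIN_TO_CYRILLIC = {
    "Lj": "Љ", "lj": "љ", "Nj": "Њ", "nj": "њ", "Dž": "Џ", "dž": "џ",
    "A": "А", "B": "Б", "C": "Ц", "Č": "Ч", "Ć": "Ћ", "D": "Д", "Đ": "Ђ",
    "E": "Е", "F": "Ф", "G": "Г", "H": "Х", "I": "И", "J": "Ј", "K": "К",
    "L": "Л", "M": "М", "N": "Н", "O": "О", "P": "П", "R": "Р", "S": "С",
    "Š": "Ш", "T": "Т", "U": "У", "V": "В", "Z": "З", "Ž": "Ж",
}
# Derive the lowercase single-letter entries from the uppercase ones.
LATIN_TO_CYRILLIC.update({k.lower(): v.lower()
                          for k, v in list(LATIN_TO_CYRILLIC.items()) if len(k) == 1})
def to_cyrillic(text):
    out, i = [], 0
    while i < len(text):
        if text[i:i + 2] in ("Lj", "lj", "Nj", "nj", "Dž", "dž"):
            out.append(LATIN_TO_CYRILLIC[text[i:i + 2]])
            i += 2
        else:
            out.append(LATIN_TO_CYRILLIC.get(text[i], text[i]))  # pass spaces etc. through
            i += 1
    return "".join(out)
print(to_cyrillic("Ljudevit Gaj"))  # Људевит Гај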
Sample collation:
Latin collation order (with Cyrillic equivalents): Ina (Ина), Injekcija (Инјекција), Inverzija (Инверзија), Inje (Иње)
Cyrillic collation order: Ина, Инверзија, Инјекција, Иње
The digraphs Lj, Nj and Dž represent distinct phonemes and are considered to be single letters. In crosswords, they are put into a single square, and in sorting, lj follows l and nj follows n, except in a few words where the individual letters are pronounced separately. For instance, nadživ(j)eti "to outlive" is composed of the prefix nad- "out, over" and the verb živ(j)eti "to live". The Cyrillic alphabet avoids such ambiguity by providing a single letter for each phoneme.
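The sorting rule can likewise be sketched in code. The alphabet order and the exception set below are assumptions added for illustration rather than an official collation standard, but they reproduce the sample ordering given above.
# Illustrative Latin collation in which lj, nj and dž sort as single letters.
# Words in which the letters are pronounced (and therefore sorted) separately,
# such as injekcija or nadživjeti, are handled through a small exception set.
ALPHABET = ["a", "b", "c", "č", "ć", "d", "dž", "đ", "e", "f", "g", "h", "i",
            "j", "k", "l", "lj", "m", "n", "nj", "o", "p", "r", "s", "š", "t",
            "u", "v", "z", "ž"]
RANK = {letter: i for i, letter in enumerate(ALPHABET)}
SPLIT_DIGRAPH_WORDS = {"injekcija", "nadživjeti", "nadživeti"}
def collation_key(word):
    w = word.lower()
    split = w in SPLIT_DIGRAPH_WORDS
    key, i = [], 0
    while i < len(w):
        if not split and w[i:i + 2] in RANK:   # treat the digraph as one letter
            key.append(RANK[w[i:i + 2]])
            i += 2
        else:
            key.append(RANK.get(w[i], len(ALPHABET)))
            i += 1
    return key
print(sorted(["Inverzija", "Inje", "Ina", "Injekcija"], key=collation_key))
# ['Ina', 'Injekcija', 'Inverzija', 'Inje'], matching the Latin order above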
Đ used to be commonly written as Dj on typewriters, but that practice led to too many ambiguities; Dj is also used on car license plates. Today Dj is often used again in place of Đ on the Internet, owing to the lack of installed Serbo-Croatian keyboard layouts.
Dialects
See also: South Slavic dialect continuum
South Slavic historically formed a dialect continuum, i.e. each dialect has some similarities with the neighboring one, and differences grow with distance. However, migrations from the 16th to 18th centuries, resulting from the spread of the Ottoman Empire in the Balkans, caused large-scale population displacement that broke the dialect continuum into many geographical pockets. Migrations in the 20th century, primarily caused by urbanization and wars, also contributed to the reduction of dialectal differences.
The primary dialects are named after the most common question word for what: Shtokavian uses the pronoun što or šta, Chakavian ča or ca, and Kajkavian kaj or kej. In native terminology they are referred to as nar(j)ečje, which would be the equivalent of "group of dialects", whereas their many subdialects are referred to as dijalekti "dialects" or govori "speeches".
The pluricentric Serbo-Croatian standard language and all four contemporary standard variants are based on the Eastern Herzegovinian subdialect of Neo-Shtokavian. Other dialects are not taught in schools or used by the state media. The Torlakian dialect is often added to the list, though sources usually note that it is a transitional dialect between Shtokavian and the Bulgaro-Macedonian dialects.
thumb|Likely distribution of major dialects prior to the 16th-century migrations
thumb|Shtokavian subdialects (Pavle Ivić, 1988). Yellow is the widespread Eastern Herzegovinian subdialect that forms the basis of all national standards, though it is not spoken natively in any of the capital cities.
thumb|Mid-20th-century distribution of dialects in Croatia
The Serbo-Croatian dialects differ not only in the question word they are named after, but also heavily in phonology, accentuation and intonation, case endings and tense system (morphology), and basic vocabulary. In the past, the Chakavian and Kajkavian dialects were spoken on a much larger territory, but were replaced by Štokavian during the period of migrations caused by the Ottoman Turkish conquest of the Balkans in the 15th and 16th centuries. These migrations caused the koinéisation of the Shtokavian dialects, which used to form the West Shtokavian (closer and transitional towards the neighbouring Chakavian and Kajkavian dialects) and East Shtokavian (transitional towards the Torlakian and the whole Bulgaro-Macedonian area) dialect bundles, and their subsequent spread at the expense of Chakavian and Kajkavian. As a result, Štokavian now covers an area larger than all the other dialects combined, and continues to make progress in the enclaves where non-literary dialects are still spoken. For example, the big coastal Croatian cities of Rijeka and Split, together with their hinterland, had been Čakavian-speaking urban centres but became basically completely Štokavianised during the 20th century.
The differences among the dialects can be illustrated on the example of Schleicher's fable. Diacritic signs are used to show the difference in accents and prosody, which are often quite significant, but which are not reflected in the usual orthography.
Neoštokavian Ijekavian/Ekavian
Óvca i kònji
Óvca koja níje ìmala vȕnē vȉd(j)ela je kònje na br(ij)égu. Jèdan je òd njīh vȗkao téška kȍla, drȕgī je nòsio vèliku vrȅću, a trȅćī je nòsio čòv(j)eka.
Óvca rȅče kònjima: «Sȑce me bòlī glȅdajūći čòv(j)eka kako jȁšē na kònju».
A kònji rȅkoše: «Slȕšāj, ȏvco, nȃs sȑca bòlē kada vȉdīmo da čòv(j)ek, gospòdār, rȃdī vȕnu od ovácā i prȁvī òd(j)eću zá se. I ȍndā óvca nȇmā vȉše vȕnē.
Čȗvši tō, óvca pȍb(j)eže ȕ polje.
Old Štokavian (Orubica, Posavina):
Óvca i kònji
Óvca kòjā nî ìmala vȕnē vȉdla kònje na brîgu. Jèdān od njȉjū vũkō tȇška kȍla, drȕgī nosȉjo vȅlikū vrȅću, a trȅćī nosȉjo čovȉka.
Óvca kȃza kȍnjima: «Svȅ me bolĩ kad glȅdām kako čòvik na kònju jȁšī».
A kònji kāzȁše: «Slȕšāj, ȏvco, nãs sȑca bolũ kad vȉdīmo da čòvik, gȁzda, prȁvī vȕnu od ovãc i prȁvī rȍbu zá se od njẽ. I ȍndā ōvcȁ néma vȉšē vȕnē.
Kad tȏ čȕ ōvcȁ, ȕteče ȕ polje.
Čakavian (Matulji near Rijeka):
Ovcȁ i konjı̏
Ovcȁ kȃ ni imȅla vȕni vȉdela je konjȉ na brȇge. Jedȃn je vȗkal tȇški vȏz, drȕgi je nosîl vȅlu vrȅt'u, a trȅt'i je nosîl čovȅka.
Ovcȁ je reklȁ konjȇn: «Sȑce me bolĩ dok glȅdan čovȅka kako jȁše na konjȅ».
A konjȉ su reklȉ: «Poslȕšaj, ovcȁ, nȃs sȑca bolẽ kad vȉdimo da čovȅk, gospodãr dȅla vȕnu od ovãc i dȅla rȍbu zȃ se. I ȍnda ovcȁ nĩma vȉše vȕni.
Kad je tȏ čȕla, ovcȁ je pobȅgla va pȍje.
Kajkavian (Marija Bistrica):
õfca i kȍjni
õfca tera nı̃je imȅ̩la vȕne vȉdla je kȍjne na briẽgu. Jȇn od nîh je vlẽ̩ke̩l tẽška kȍla, drȕgi je nȍsil vȅliku vrȅ̩ču, a trẽjti je nȍsil čovȅ̩ka.
õfca je rȇkla kȍjnem: «Sȑce me bolĩ kad vîdim čovȅka kak jȃše na kȍjnu».
A kȍjni su rȇkli: «Poslȕhni, õfca, nȃs sȑca bolĩju kad vîdime da čȍve̩k, gospodãr, dȇ̩la vȕnu ot õfci i dȇ̩la oblȅ̩ku zȃ se. I ȏnda õfca nȇma vȉše vȕne.
Kad je to čȗla, õfca je pobȇ̩gla f pȍlje.
English language
The Sheep and the Horses
[On a hill,] a sheep that had no wool saw horses, one of them pulling a heavy wagon, one carrying a big load, and one carrying a man quickly.
The sheep said to the horses: "My heart pains me, seeing a man driving horses".
The horses said: "Listen, sheep, our hearts pain us when we see this: a man, the master, makes the wool of the sheep into a warm garment for himself. And the sheep has no wool".
Having heard this, the sheep fled into the plain.
Division by jat reflex
A basic distinction among the dialects is in the reflex of the long Common Slavic vowel jat, usually transcribed as *ě. Depending on the reflex, the dialects are divided into Ikavian, Ekavian, and Ijekavian, with the reflexes of jat being /i/, /e/, and /ije/ or /je/ respectively. The long and short jat are reflected as long or short /i/ and /e/ in Ikavian and Ekavian, but Ijekavian dialects introduce an ije/je alternation to retain a distinction.
Standard Croatian and Bosnian are based on Ijekavian, whereas Serbian uses both Ekavian and Ijekavian forms (Ijekavian for Bosnian Serbs, Ekavian for most of Serbia). Influence of standard language through state media and education has caused non-standard varieties to lose ground to the literary forms.
The jat-reflex rules are not without exception. For example, short jat preceded by r developed into /re/ or, occasionally, /ri/ in most Ijekavian dialects. The prefix prě- ("trans-, over-"), when long, became pre- in eastern Ijekavian dialects but prije- in western dialects; in Ikavian pronunciation, it also evolved into pre- or prije- due to potential ambiguity with pri- ("approach, come close to"). For verbs that had -ěti in their infinitive, the past participle ending -ěl evolved into -io in Ijekavian Neoštokavian.
The following are some examples:
English | Predecessor | Ekavian | Ikavian | Ijekavian | Ijekavian development
beautiful | *lěp | lep | lip | lijep | long ě → ije
time | *vrěme | vreme | vrime | vrijeme | long ě → ije
faith | *věra | vera | vira | vjera | short ě → je
crossing | *prělaz | prelaz | prelaz or prijelaz | prelaz or prijelaz | pr + long ě → prije
times | *vrěmena | vremena | vrimena | vremena | r + short ě → re
need | *trěbati | trebati | tribat(i) | trebati | r + short ě → re
heat | *grějati | grejati | grijati | grijati | r + short ě → ri
saw | *viděl | video | vidio | vidio | ěl → io
village | *selo | selo | selo | selo | e in root, not ě
Present sociolinguistic situation
Comparison with other pluricentric languages
Enisa Kafadar argues that there is only one Serbo-Croatian language with several varieties. This has made it possible to include all four varieties in a new grammar book. Daniel Bunčić concludes that it is a pluricentric language, with four standard variants spoken in Serbia, Croatia, Montenegro and Bosnia and Herzegovina. The mutual intelligibility between their speakers "exceeds that between the standard variants of English, French, German, or Spanish". Other linguists have argued that the differences between the variants of Serbo-Croatian are less significant than those between the variants of English, German, Dutch, and Hindi–Urdu.
Among pluricentric languages, Serbo-Croatian was the only one with a pluricentric standardisation within one state. The dissolution of Yugoslavia has made Serbo-Croatian an even more typical pluricentric language, since the variants of other pluricentric languages are also spoken in different states.
Contemporary names
thumb|Ethno-political variants of Serbo-Croatian as of 2006.
The current Serbian constitution of 2006 refers to the official language as Serbian, while the Montenegrin constitution of 2007 proclaimed Montenegrin as the primary official language, but also grants other languages the right of official use.
Most Bosniaks refer to their language as Bosnian.
Most Croats refer to their language as Croatian.
Most Serbs refer to their language as Serbian.
Montenegrins refer to their language either as Serbian or Montenegrin.
Ethnic Bunjevci refer to their language as Bunjevac.
The International Organization for Standardization (ISO) has specified different Universal Decimal Classification (UDC) numbers for Croatian (UDC 862, abbreviation hr) and Serbian (UDC 861, abbreviation sr), while the cover term Serbo-Croatian is used to refer to the combination of original signs (UDC 861/862, abbreviation sh). Furthermore, the ISO 639 standard designates the Bosnian language with the abbreviations bos and bs.
The International Criminal Tribunal for the former Yugoslavia considers what it calls BCS (Bosnian-Croatian-Serbian) to be the main language of all Bosnian, Croatian, and Serbian defendants. The indictments, documents, and verdicts of the ICTY are not written with any regard for consistently following the grammatical prescriptions of any of the three standards, be they Serbian, Croatian, or Bosnian.
For utilitarian purposes, the Serbo-Croatian language is often called "Naš jezik" ("Our language") or "Naški" (sic. "Ourish" or "Ourian") by native speakers. This politically correct term is frequently used to describe the Serbo-Croatian language by those who wish to avoid nationalistic and linguistic discussions.
Views of linguists in the former Yugoslavia
Serbian linguists
The majority of mainstream Serbian linguists consider Serbian and Croatian to be one language, that is called Serbo-Croatian (srpskohrvatski) or Croato-Serbian (hrvatskosrpski). A minority of Serbian linguists are of the opinion that Serbo-Croatian did exist, but has, in the meantime, dissolved.
Croatian linguists
The opinion of the majority of Croatian linguists is that there has never been a Serbo-Croatian language, but two different standard languages that overlapped at some point in the course of history. However, the Croatian linguist Snježana Kordić led an academic discussion on that issue in the Croatian journal Književna republika from 2001 to 2010. In the discussion, she shows that linguistic criteria such as mutual intelligibility, the huge overlap in the linguistic system, and the same dialectal basis of the standard language provide evidence that Croatian, Serbian, Bosnian and Montenegrin are four national variants of the pluricentric Serbo-Croatian language. Igor Mandić states: "During the last ten years, it has been the longest, the most serious and most acrid discussion (…) in 21st-century Croatian culture". Inspired by that discussion, a monograph on language and nationalism has been published.
The views of the majority of Croatian linguists that there is no Serbo-Croatian language, but several different standard languages, have been sharply criticized by German linguist Bernhard Gröschel in his monograph Serbo-Croatian Between Linguistics and Politics.
A more detailed overview, incorporating arguments from the Croatian philology and contemporary linguistics, would be as follows:
Serbo-Croatian is a language
One still finds many references to Serbo-Croatian, and proponents of Serbo-Croatian who deny that Croats, Serbs, Bosnians and Montenegrins speak different languages. The usual argument goes along the following lines:
Standard Croatian, Serbian, Bosnian, and Montenegrin are completely mutually intelligible. In addition, they use two alphabets that perfectly match each other (Latin and Cyrillic), thanks to Ljudevit Gaj and Vuk Karadžić. Croats exclusively use Latin script and Serbs equally use both Cyrillic and Latin. Although Cyrillic is taught in Bosnia, most Bosnians, especially non-Serbs (Bosniaks and Croats), favor Latin.
The list of 100 words of the basic Croatian, Serbian, Bosnian, and Montenegrin vocabulary, as set out by Morris Swadesh, shows that all 100 words are identical. According to Swadesh, 81 per cent is sufficient for varieties to be considered a single language.
Typologically and structurally, these standard variants have virtually the same grammar, i.e. morphology and syntax.
The Serbo-Croatian language was standardised in the mid-19th century, and all subsequent attempts to dissolve its basic unity have not succeeded.
The affirmation of distinct Croatian, Serbian, Bosnian, and Montenegrin languages is politically motivated.
According to phonology, morphology and syntax, these standard variants are essentially one language because they are based on the same, Štokavian dialect.
Serbo-Croatian is not a language
Similar arguments are made for other official standards which are nearly indistinguishable when spoken and which are therefore pluricentric languages, such as Malaysian, and Indonesian (together called Malay), or Standard Hindi and Urdu (together called Hindustani or Hindi-Urdu). However, some argue that these arguments have flaws:
Phonology, morphology, and syntax are not the only dimensions of a language: other fields (semantics, pragmatics, stylistics, lexicology, etc.) also differ slightly. However, the same is the case with other pluricentric languages. A comparison is made to the closely related North Germanic languages (or dialects, if one prefers), though these are not as fully mutually intelligible as the Serbo-Croatian standards are. A closer comparison may be General American and Received Pronunciation in English, which are closer to each other than the latter is to other dialects which are subsumed under "British English".
Since the Croatian language as recorded in Držić and Gundulić's works (16th and 17th centuries) is virtually the same as contemporary standard Croatian (understandable archaisms apart), it is evident that the 19th-century formal standardization was just the final touch in a process that, as far as the Croatian language is concerned, had lasted more than three centuries. The radical break with the past, characteristic of modern Serbian (whose vernacular was likely not as similar to Croatian as it is today), is a trait completely at variance with Croatian linguistic history. In short, formal standardization processes for Croatian and Serbian coincided chronologically (and, one could add, ideologically), but they have not produced a unified standard language. Gundulić did not write in "Serbo-Croatian", nor did August Šenoa. Marko Marulić and Marin Držić wrote in a sophisticated idiom of the Croatian language some 300–350 years before "Serbo-Croatian" ideology appeared. Marulić explicitly described his Čakavian-written Judita as u uerish haruacchi slosena ("arranged in Croatian stanzas") in 1501, and the Štokavian grammar and dictionary of Bartol Kašić written in 1604 unambiguously identifies the ethnonyms Slavic and Illyrian with Croatian.
The linguistic debate in this region is more about politics than about linguistics per se.
The topic of language for writers from Dalmatia and Dubrovnik prior to the 19th century made a distinction only between speakers of Italian or Slavic, since those were the two main groups that inhabited Dalmatian city-states at that time. Whether someone spoke Croatian or Serbian was not an important distinction then, as the two languages were not distinguished by most speakers. This has been used as an argument to state that Croatian literature is not Croatian per se, but also includes Serbian and other languages that are part of Serbo-Croatian. These facts undermine the Croatian language proponents' argument that modern-day Croatian is based on a language called Old Croatian.
However, most intellectuals and writers from Dalmatia who used the Štokavian dialect and practiced the Catholic faith saw themselves as part of a Croatian nation as far back as the mid-16th to 17th centuries, some 300 years before Serbo-Croatian ideology appeared. Their loyalty was first and foremost to Catholic Christendom, but when they professed an ethnic identity, they referred to themselves as "Slovin", "Illyrian" (a sort of forerunner of Catholic baroque pan-Slavism) and Croat; these 30-odd writers over the span of c. 350 years always saw themselves as Croats first and never as part of a Serbian nation. It should also be noted that, in the pre-national era, Catholic religious orientation did not necessarily equate with Croat ethnic identity in Dalmatia. A Croatian follower of Vuk Karadžić, Ivan Broz, noted that for a Dalmatian to identify oneself as a Serb was seen as being as foreign as identifying oneself as Macedonian or Greek. Vatroslav Jagić pointed out in 1864:
"As I have mentioned in the preface, history knows only two national names in these parts—Croatian and Serbian. As far as Dubrovnik is concerned, the Serbian name was never in use; on the contrary, the Croatian name was frequently used and gladly referred to"
"At the end of the 15th century [in Dubrovnik and Dalmatia], sermons and poems were exquisitely crafted in the Croatian language by those men whose names are widely renowned by deep learning and piety."
(From The History of the Croatian language, Zagreb, 1864.)
On the other hand, the opinion of Jagić from 1864 is argued not to have firm grounds. When Jagić says "Croatian", he refers to a few cases in which the Dubrovnik vernacular was called ilirski (Illyrian). This was a common name for all Slavic vernaculars in Dalmatian cities among the Roman inhabitants. In the meantime, other written monuments have been found that mention srpski, lingua serviana (= Serbian), and some that mention Croatian.Mladenovic. Kratka istorija srpskog književnog jezika. Beograd 2004, 67 By far the most competent Serbian scholar on the Dubrovnik language issue, Milan Rešetar, who was himself born in Dubrovnik, wrote on the basis of the language's characteristics: "The one who thinks that Croatian and Serbian are two separate languages must confess that Dubrovnik always (linguistically) used to be Serbian."
Finally, the medieval texts from Dubrovnik and Montenegro dating from before the 16th century were neither truly Štokavian nor Serbian, but mostly a specific Jekavian-Čakavian that was nearer to the speech of actual Adriatic islanders in Croatia.S. Zekovic & B. Cimeša: Elementa montenegrina, Chrestomatia 1/90. CIP, Zagreb 1991
Political connotations
Nationalists have conflicting views about the language(s). The nationalists among the Croats conflictingly claim either that they speak an entirely separate language from Serbs and Bosnians or that these two peoples have, due to the longer lexicographic tradition among Croats, somehow "borrowed" their standard languages from them. Bosniak nationalists claim that both Croats and Serbs have "appropriated" the Bosnian language, since Ljudevit Gaj and Vuk Karadžić preferred the Neoštokavian-Ijekavian dialect, widely spoken in Bosnia and Herzegovina, as the basis for language standardization, whereas the nationalists among the Serbs claim either that any divergence in the language is artificial, or that the Štokavian dialect is theirs and the Čakavian the Croats'; in more extreme formulations, Croats have "taken" or "stolen" their language from the Serbs.
Proponents of unity among Southern Slavs claim that there is a single language with normal dialectal variations. The term "Serbo-Croatian" (or synonyms) is not officially used in any of the successor countries of former Yugoslavia.
In Serbia, the Serbian language is the official one, while both Serbian and Croatian are official in the province of Vojvodina. A large Bosniak minority is present in the southwest region of Sandžak, but the "official recognition" of the Bosnian language is moot.Official communique, 27 December 2004, Serbian Ministry of Education Bosnian is an optional course in the 1st and 2nd grades of elementary school, and it is also in official use in the municipality of Novi Pazar. However, its nomenclature is controversial, as there is a push for it to be referred to as "Bosniak" (bošnjački) rather than "Bosnian" (bosanski) (see Bosnian language for details).
Croatian is the official language of Croatia, while Serbian is also official in municipalities with significant Serb population.
In Bosnia and Herzegovina, all three languages are recorded as official, but in practice and in the media mostly Bosnian and Serbian are used. Confrontations have on occasion been absurd. The academic Muhamed Filipović, in an interview with Slovenian television, told of a local court in a Croatian district requesting a paid translator to translate from Bosnian to Croatian before the trial could proceed.
Words of Serbo-Croatian origin
See Category:English terms derived from Serbo-Croatian on Wiktionary
Cravat, from French cravate "Croat", by analogy with Flemish Krawaat and German Krabate, from Serbo-Croatian Hrvat, as cravats were characteristic of Croatian dress
Polje, from Serbo-Croatian polje "field"
Slivovitz, from German Slibowitz, from Bulgarian slivovitza or Serbo-Croatian šljivovica "plum brandy", from Old Slavic *sliva "plum" (cognate with English sloe)
Tamburitza, Serbo-Croatian diminutive of tambura, from Turkish, from Persian ṭambūr "tanbur"
Uvala, from Serbo-Croatian uvala "hollow"
See also
Differences between Serbo-Croatian standard varieties
Dialects of Serbo-Croatian
Language secessionism in Serbo-Croatian
Mutual intelligibility
Pluricentric Serbo-Croatian language
Serbo-Croatian relative clauses
Serbo-Croatian grammar
Serbo-Croatian kinship
Serbo-Croatian phonology
Shtokavian dialect
South Slavic dialect continuum
Standard language
Further reading
Banac, Ivo: Main Trends in the Croatian Language Question. Yale University Press, 1984.
Franolić, Branko: A Historical Survey of Literary Croatian. Nouvelles éditions Latines, Paris, 1984.
Ivić, Pavle: Die serbokroatischen Dialekte. The Hague, 1958.
Magner, Thomas F.: Zagreb Kajkavian dialect. Pennsylvania State University, 1966.
Murray Despalatović, Elinor: Ljudevit Gaj and the Illyrian Movement. Columbia University Press, 1975.
Zekovic, Sreten & Cimeša, Boro: Elementa montenegrina, Chrestomatia 1/90. CIP, Zagreb 1991.
External links
Ethnologue: the 15th edition of the Ethnologue (released 2005) shows changes in this area:
Previous Ethnologue entry for Serbo-Croatian
Ethnologue 15th Edition report on western South Slavic languages.
Integral text of Novi Sad Agreement (In Serbo-Croatian).
IKI Translate: Translating between different dialects of Serbo-Croatian
Serbian and Croatian alphabets at Omniglot.
Serbian, Croatian, Bosnian, Or Montenegrin? Or Just 'Our Language'?, Radio Free Europe, February 21, 2009
Category:South Slavic languages
Category:Languages of Serbia
Category:Languages of Montenegro
Category:Languages of Vojvodina
Category:Languages of Kosovo
Category:Languages of Bosnia and Herzegovina
Category:Languages of Croatia
Category:Languages of Slovenia
Category:Language versus dialect
Category:Dialect levelling
United Nations Population Fund
The United Nations Population Fund (UNFPA), formerly the United Nations Fund for Population Activities, is a UN organization. The UNFPA says it "is the lead UN agency for delivering a world where every pregnancy is wanted, every childbirth is safe and every young person's potential is fulfilled." Its work involves the improvement of reproductive health, including the creation of national strategies and protocols and the provision of supplies and services. The organization has recently been known for its worldwide campaign against obstetric fistula and female genital mutilation.
The UNFPA supports programs in more than 150 countries, territories and areas spread across four geographic regions: Arab States and Europe, Asia and the Pacific, Latin America and the Caribbean, and sub-Saharan Africa. Around three quarters of the staff work in the field. It is a member of the United Nations Development Group and part of its Executive Committee.UNFPA in the UN system. Retrieved 28 February 2013.
Origins
UNFPA began operations in 1969 as the United Nations Fund for Population Activities (the name was changed in 1987) under the administration of the United Nations Development Programme. In 1971 it was placed under the authority of the United Nations General Assembly.
UNFPA and the Sustainable Development Goals
In September 2015, the 193 member states of the United Nations unanimously adopted the Sustainable Development Goals, a set of 17 goals aiming to transform the world over the next 15 years. These goals are designed to eliminate poverty, discrimination, abuse and preventable deaths, address environmental destruction, and usher in an era of development for all people, everywhere.
The Sustainable Development Goals are ambitious, and they will require enormous efforts across countries, continents, industries and disciplines - but they are achievable. UNFPA is working with governments, partners and other UN agencies to directly tackle many of these goals - in particular Goal 3 on health, Goal 4 on education and Goal 5 on gender equality - and contributes in a variety of ways to achieving many of the rest.
Leadership
Executive Directors and Under-Secretaries General of the UN
2011–present Dr Babatunde Osotimehin (Nigeria)
2000–2010 Ms Thoraya Ahmed Obaid (Saudi Arabia)
1987–2000 Dr Nafis Sadik (Pakistan)
1969–87 Mr Rafael M. Salas (Philippines)
Goodwill ambassadors
The Fund's Patron is Crown Princess Mary of Denmark.
Its goodwill ambassadors are:
Catarina Furtado
Goedele Liekens
Ashi Sangay Choden Wangchuck
Princess Basma bint Talal.
Areas of work
UNFPA is the world's largest multilateral source of funding for population and reproductive health programs. The Fund works with governments and non-governmental organizations in over 150 countries with the support of the international community, supporting programs that help women, men and young people:
voluntarily plan and have the number of children they desire and to avoid unwanted pregnancies
undergo safe pregnancy and childbirth
avoid spreading sexually transmitted infections
decrease violence against women
increase the equality of women
According to UNFPA these elements promote the right of "reproductive health", that is physical, mental, and social health in matters related to reproduction and the reproductive system.
The Fund raises awareness of and supports efforts to meet these needs in developing countries, advocates close attention to population concerns, and helps developing nations formulate policies and strategies in support of sustainable development. Dr. Osotimehin assumed leadership in January 2011. The Fund is also represented by UNFPA Goodwill Ambassadors and a Patron.
How UNFPA Works
UNFPA works in partnership with governments, along with other United Nations agencies, communities, NGOs, foundations and the private sector, to raise awareness and mobilize the support and resources needed to achieve its mission to promote the rights and health of women and young people.
Contributions from governments and the private sector to UNFPA in 2014 exceeded $1 billion. The amount includes $477 million to the organization’s core resources and $529 million earmarked for specific programs and initiatives.
Campaign to end fistula
This UNFPA-led global campaign works to prevent obstetric fistula, a devastating and socially isolating injury of childbirth, to treat women who live with the condition and help those who have been treated to return to their communities. The campaign works in more than 40 countries in Africa, the Arab States and South Asia.
Ending female genital mutilation
UNFPA has worked for many years to end the practice of female genital mutilation, the partial or total removal of external female genital organs for cultural or other non-medical reasons. The practice, which affects 100–140 million women and girls across the world, violates their right to health and bodily integrity. In 2007, UNFPA in partnership with UNICEF, launched a $44-million program to reduce the practice by 40 per cent in 16 countries by 2015 and to end it within a generation. UNFPA also recently sponsored a Global Technical Consultation, which drew experts from all over the world to discuss strategies to convince communities to abandon the practice. UNFPA supports the campaign to end female genital mutilation with The Guardian.
Relations with the US government
UNFPA has been falsely accused by groups opposed to abortion of providing support for government programs which have promoted forced abortions and coercive sterilizations. Controversies regarding these claims have resulted in a sometimes shaky relationship between the organization and the United States government, with three presidential administrations, those of Ronald Reagan, George H. W. Bush and George W. Bush, withholding funding from the UNFPA.
UNFPA provided aid to Peru's reproductive health program in the mid-to-late 1990s. When it was discovered that a Peruvian program had been engaged in carrying out coercive sterilizations, UNFPA called for reforms and protocols to protect the rights of women seeking assistance. UNFPA was not involved in the scandal, but continued to work with the country after the abuses had become public to help end the abuses and reform laws and practices.
From 2002 through 2008, the Bush Administration denied funding to UNFPA that had already been allocated by the US Congress, partly on the refuted claims that the UNFPA supported Chinese government programs which included forced abortions and coercive sterilizations. In a letter to Congress, the Undersecretary of State for Political Affairs Nicholas Burns said the administration had determined that UNFPA's support for China's population program "facilitates (its) government's coercive abortion program", thus violating the Kemp-Kasten Amendment, which bans the use of United States aid to finance organizations that support or take part in managing a program of coercive abortion or sterilization.Background on withheld US funds , United Nations Economic and Social Commission for Asia and the Pacific, 2007
UNFPA says it "does not provide support for abortion services". Its charter includes a strong statement condemning coercion.
UNFPA's connection to China's administration of forced abortions was refuted by investigations carried out by various US, UK, and UN teams sent to examine UNFPA activities in China. Specifically, a three-person U.S State Department fact-finding team was sent on a two-week tour throughout China. It wrote in a report to the State Department that it found "no evidence that UNFPA has supported or participated in the management of a program of coercive abortion or involuntary sterilization in China," as has been charged by critics.
left|thumb|upright|Colin Powell at the United Nations.
However, according to then-Secretary of State Colin Powell, the UNFPA contributed vehicles and computers to the Chinese to carry out their population planning policies. Both the Washington Post and the Washington Times, however, reported that Powell simply fell in line, signing a brief written by someone else. Rep. Christopher H. Smith (R-NJ) criticized the State Department investigation, saying the investigators were shown "Potemkin Villages" where residents had been intimidated into lying about the family-planning program. Dr. Nafis Sadik, former director of UNFPA, said her agency had been pivotal in reversing China's coercive population planning methods, but a 2005 report by Amnesty International and a separate report by the United States State Department found that coercive techniques were still regularly employed by the Chinese, casting doubt upon Sadik's statements.
But Amnesty International found no evidence that UNFPA had supported the coercion. A 2001 study conducted by the pro-life Population Research Institute (PRI) falsely claimed that the UNFPA shared an office with the Chinese family planning officials who were carrying out forced abortions.
"We located the family planning offices, and in that family planning office, we located the UNFPA office, and we confirmed from family planning officials there that there is no distinction between what the UNFPA does and what the Chinese Family Planning Office does," said Scott Weinberg, a spokesman for PRI. However, United Nations Members disagreed and approved UNFPA’s new country program me in January 2006. The more than 130 members of the “Group of 77” developing countries in the United Nations expressed support for the UNFPA programmes. In addition, speaking for European democracies -- Norway, Denmark, Sweden, Finland, the Netherlands, France, Belgium, Switzerland and Germany -- the United Kingdom stated, ”UNFPA’s activities in China, as in the rest of the world, are in strict conformity with the unanimously adopted Programme of Action of the ICPD, and play a key role in supporting our common endeavor, the promotion and protection of all human rights and fundamental freedoms.”
President Bush denied funding to the UNFPA. Over the course of the Bush Administration, a total of $244 million in Congressionally approved funding was blocked by the Executive Branch.
In response, the EU decided to fill the gap left behind by the US under the Sandbaek report. According to its Annual Report for 2008, the UNFPA received its funding mainly from European Governments:
Of the total income of $845.3 million, $118 million was donated by the Netherlands, $67 million by Sweden, $62 million by Norway, $54 million by Denmark, $53 million by the UK, $52 million by Spain and $19 million by Luxembourg. The European Commission donated a further $36 million. The most important non-European donor state was Japan ($36 million). The number of donors exceeded 180 in one year.
In America, nonprofit organizations like Friends of UNFPA (formerly Americans for UNFPA) worked to compensate for the loss of United States federal funding by raising private donations.
In January 2009 President Barack Obama restored US funding to UNFPA, saying in a public statement that he would "look forward to working with Congress to restore US financial support for the UN Population Fund. By resuming funding to UNFPA, the US will be joining 180 other donor nations working collaboratively to reduce poverty, improve the health of women and children, prevent HIV/AIDS and provide family planning assistance to women in 154 countries."
Other UN population agencies and entities
Entities with competencies about population in the United Nations:
Commission on Population and Development
United Nations Department of Economic and Social Affairs
See also
Sandbæk Report
Reproductive Health Supplies Coalition
Category:Organizations established in 1969
Category:United Nations Development Group
* | 63,952 | 2017-01 |
Brain | The brain is an organ that serves as the center of the nervous system in all vertebrate and most invertebrate animals. The brain is located in the head, usually close to the sensory organs for senses such as vision. The brain is the most complex organ in a vertebrate's body. In a human, the cerebral cortex contains approximately 15–33 billion neurons, each connected by synapses to several thousand other neurons. These neurons communicate with one another by means of long protoplasmic fibers called axons, which carry trains of signal pulses called action potentials to distant parts of the brain or body targeting specific recipient cells.
Physiologically, the function of the brain is to exert centralized control over the other organs of the body. The brain acts on the rest of the body both by generating patterns of muscle activity and by driving the secretion of chemicals called hormones. This centralized control allows rapid and coordinated responses to changes in the environment. Some basic types of responsiveness such as reflexes can be mediated by the spinal cord or peripheral ganglia, but sophisticated purposeful control of behavior based on complex sensory input requires the information integrating capabilities of a centralized brain.
The operations of individual brain cells are now understood in considerable detail, but the way they cooperate in ensembles of millions remains unsolved. Recent models in modern neuroscience treat the brain as a biological computer, very different in mechanism from an electronic computer, but similar in the sense that it acquires information from the surrounding world, stores it, and processes it in a variety of ways.
This article compares the properties of brains across the entire range of animal species, with the greatest attention to vertebrates. It deals with the human brain insofar as it shares the properties of other brains. The ways in which the human brain differs from other brains are covered in the human brain article. Several topics that might be covered here are instead covered there because much more can be said about them in a human context. The most important are brain disease and the effects of brain damage, which are covered in the human brain article.
Anatomy
thumb|right|alt=a blob with a blue patch in the center, surrounded by a white area, surrounded by a thin strip of dark-colored material|Cross section of the olfactory bulb of a rat, stained in two different ways at the same time: one stain shows neuron cell bodies, the other shows receptors for the neurotransmitter GABA.
The shape and size of the brain vary greatly between species, and identifying common features is often difficult. Nevertheless, there are a number of principles of brain architecture that apply across a wide range of species. Some aspects of brain structure are common to almost the entire range of animal species; others distinguish "advanced" brains from more primitive ones, or distinguish vertebrates from invertebrates.
The simplest way to gain information about brain anatomy is by visual inspection, but many more sophisticated techniques have been developed. Brain tissue in its natural state is too soft to work with, but it can be hardened by immersion in alcohol or other fixatives, and then sliced apart for examination of the interior. Visually, the interior of the brain consists of areas of so-called grey matter, with a dark color, separated by areas of white matter, with a lighter color. Further information can be gained by staining slices of brain tissue with a variety of chemicals that bring out areas where specific types of molecules are present in high concentrations. It is also possible to examine the microstructure of brain tissue using a microscope, and to trace the pattern of connections from one brain area to another.
Cellular structure
thumb|250px|alt=drawing showing a neuron with a fiber emanating from it labeled "axon" and making contact with another cell. An inset shows an enlargement of the contact zone.|Neurons generate electrical signals that travel along their axons. When a pulse of electricity reaches a junction called a synapse, it causes a neurotransmitter chemical to be released, which binds to receptors on other cells and thereby alters their electrical activity.
The brains of all species are composed primarily of two broad classes of cells: neurons and glial cells. Glial cells (also known as glia or neuroglia) come in several types, and perform a number of critical functions, including structural support, metabolic support, insulation, and guidance of development. Neurons, however, are usually considered the most important cells in the brain.
The property that makes neurons unique is their ability to send signals to specific target cells over long distances. They send these signals by means of an axon, which is a thin protoplasmic fiber that extends from the cell body and projects, usually with numerous branches, to other areas, sometimes nearby, sometimes in distant parts of the brain or body. The length of an axon can be extraordinary: for example, if a pyramidal cell (an excitatory neuron) of the cerebral cortex were magnified so that its cell body became the size of a human body, its axon, equally magnified, would become a cable a few centimeters in diameter, extending more than a kilometer. These axons transmit signals in the form of electrochemical pulses called action potentials, which last less than a thousandth of a second and travel along the axon at speeds of 1–100 meters per second. Some neurons emit action potentials constantly, at rates of 10–100 per second, usually in irregular patterns; other neurons are quiet most of the time, but occasionally emit a burst of action potentials.
Axons transmit signals to other neurons by means of specialized junctions called synapses. A single axon may make as many as several thousand synaptic connections with other cells. When an action potential, traveling along an axon, arrives at a synapse, it causes a chemical called a neurotransmitter to be released. The neurotransmitter binds to receptor molecules in the membrane of the target cell.
thumb|left|alt=A bright green cell is seen against a red and black background, with long, highly branched, green processes extending out from it in multiple directions.|Neurons often have extensive networks of dendrites, which receive synaptic connections. Shown is a pyramidal neuron from the hippocampus, stained for green fluorescent protein.
Synapses are the key functional elements of the brain. The essential function of the brain is cell-to-cell communication, and synapses are the points at which communication occurs. The human brain has been estimated to contain approximately 100 trillion synapses; even the brain of a fruit fly contains several million. The functions of these synapses are very diverse: some are excitatory (exciting the target cell); others are inhibitory; others work by activating second messenger systems that change the internal chemistry of their target cells in complex ways. A large number of synapses are dynamically modifiable; that is, they are capable of changing strength in a way that is controlled by the patterns of signals that pass through them. It is widely believed that activity-dependent modification of synapses is the brain's primary mechanism for learning and memory.
Most of the space in the brain is taken up by axons, which are often bundled together in what are called nerve fiber tracts. A myelinated axon is wrapped in a fatty insulating sheath of myelin, which serves to greatly increase the speed of signal propagation. (There are also unmyelinated axons). Myelin is white, making parts of the brain filled exclusively with nerve fibers appear as light-colored white matter, in contrast to the darker-colored grey matter that marks areas with high densities of neuron cell bodies.
Evolution
Generic bilaterian nervous system
thumb|right|300px|alt=A rod-shaped body contains a digestive system running from the mouth at one end to the anus at the other. Alongside the digestive system is a nerve cord with a brain at the end, near to the mouth. |Nervous system of a generic bilaterian animal, in the form of a nerve cord with segmental enlargements, and a "brain" at the front
Except for a few primitive organisms such as sponges (which have no nervous system) and cnidarians (which have a nervous system consisting of a diffuse nerve net), all living multicellular animals are bilaterians, meaning animals with a bilaterally symmetric body shape (that is, left and right sides that are approximate mirror images of each other). All bilaterians are thought to have descended from a common ancestor that appeared early in the Cambrian period, 485-540 million years ago, and it has been hypothesized that this common ancestor had the shape of a simple tubeworm with a segmented body. At a schematic level, that basic worm-shape continues to be reflected in the body and nervous system architecture of all modern bilaterians, including vertebrates. The fundamental bilateral body form is a tube with a hollow gut cavity running from the mouth to the anus, and a nerve cord with an enlargement (a ganglion) for each body segment, with an especially large ganglion at the front, called the brain. The brain is small and simple in some species, such as nematode worms; in other species, including vertebrates, it is the most complex organ in the body. Some types of worms, such as leeches, also have an enlarged ganglion at the back end of the nerve cord, known as a "tail brain".
There are a few types of existing bilaterians that lack a recognizable brain, including echinoderms, tunicates, and acoelomorphs (a group of primitive flatworms). It has not been definitively established whether the existence of these brainless species indicates that the earliest bilaterians lacked a brain, or whether their ancestors evolved in a way that led to the disappearance of a previously existing brain structure.
Invertebrates
thumb|right|alt=A fly resting on a reflective surface. A large, red eye faces the camera. The body appears transparent, apart from black pigment at the end of its abdomen. |Fruit flies (Drosophila) have been extensively studied to gain insight into the role of genes in brain development.
This category includes arthropods, molluscs, and numerous types of worms. The diversity of invertebrate body plans is matched by an equal diversity in brain structures.
Two groups of invertebrates have notably complex brains: arthropods (insects, crustaceans, arachnids, and others), and cephalopods (octopuses, squids, and similar molluscs). The brains of arthropods and cephalopods arise from twin parallel nerve cords that extend through the body of the animal. Arthropods have a central brain, the supraesophageal ganglion, with three divisions and large optical lobes behind each eye for visual processing. Cephalopods such as the octopus and squid have the largest brains of any invertebrates.
There are several invertebrate species whose brains have been studied intensively because they have properties that make them convenient for experimental work:
Fruit flies (Drosophila), because of the large array of techniques available for studying their genetics, have been a natural subject for studying the role of genes in brain development. In spite of the large evolutionary distance between insects and mammals, many aspects of Drosophila neurogenetics have been shown to be relevant to humans. The first biological clock genes, for example, were identified by examining Drosophila mutants that showed disrupted daily activity cycles. A search in the genomes of vertebrates revealed a set of analogous genes, which were found to play similar roles in the mouse biological clock—and therefore almost certainly in the human biological clock as well. Studies done on Drosophila also show that most neuropil regions of the brain are continuously reorganized throughout life in response to specific living conditions.
The nematode worm Caenorhabditis elegans, like Drosophila, has been studied largely because of its importance in genetics. In the early 1970s, Sydney Brenner chose it as a model organism for studying the way that genes control development. One of the advantages of working with this worm is that the body plan is very stereotyped: the nervous system of the hermaphrodite contains exactly 302 neurons, always in the same places, making identical synaptic connections in every worm. Brenner's team sliced worms into thousands of ultrathin sections and photographed each one under an electron microscope, then visually matched fibers from section to section, to map out every neuron and synapse in the entire body. This work produced the complete neuronal wiring diagram, or connectome, of C. elegans. Nothing approaching this level of detail is available for any other organism, and the information gained has enabled a multitude of studies that would otherwise have not been possible.
The sea slug Aplysia californica was chosen by Nobel Prize-winning neurophysiologist Eric Kandel as a model for studying the cellular basis of learning and memory, because of the simplicity and accessibility of its nervous system, and it has been examined in hundreds of experiments.
Vertebrates
thumb|upright|alt=A T-shaped object is made up of the cord at the bottom which feeds into a lower central mass. This is topped by a larger central mass with an arm extending from either side. |The brain of a shark
The first vertebrates appeared over 500 million years ago (Mya), during the Cambrian period, and may have resembled the modern hagfish in form. Sharks appeared about 450 Mya, amphibians about 400 Mya, reptiles about 350 Mya, and mammals about 200 Mya. Each species has an equally long evolutionary history, but the brains of modern hagfishes, lampreys, sharks, amphibians, reptiles, and mammals show a gradient of size and complexity that roughly follows the evolutionary sequence. All of these brains contain the same set of basic anatomical components, but many are rudimentary in the hagfish, whereas in mammals the foremost part (the telencephalon) is greatly elaborated and expanded.
Brains are most simply compared in terms of their size. The relationship between brain size, body size and other variables has been studied across a wide range of vertebrate species. As a rule, brain size increases with body size, but not in a simple linear proportion. In general, smaller animals tend to have larger brains, measured as a fraction of body size. For mammals, the relationship between brain volume and body mass essentially follows a power law with an exponent of about 0.75. This formula describes the central tendency, but every family of mammals departs from it to some degree, in a way that reflects in part the complexity of their behavior. For example, primates have brains 5 to 10 times larger than the formula predicts. Predators tend to have larger brains than their prey, relative to body size.
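This power law can be made concrete in a few lines of code. The sketch below is illustrative only: the proportionality constant k and the sample body masses are assumptions chosen for demonstration, not measured values, though the exponent of 0.75 is the one described above. It shows why smaller animals have proportionally larger brains: with an exponent below 1, the predicted brain mass shrinks as a fraction of body mass as the body grows.
# Allometric scaling sketch: predicted_brain_mass = k * body_mass ** 0.75
# The constant k and the sample body masses are illustrative assumptions.
def expected_brain_mass(body_mass_g, k=0.06, exponent=0.75):
    """Brain mass (grams) predicted by a simple power law."""
    return k * body_mass_g ** exponent

for body_g in (20, 2000, 200000):
    brain_g = expected_brain_mass(body_g)
    print(f"body {body_g:>7} g -> predicted brain {brain_g:8.1f} g "
          f"({100 * brain_g / body_g:.2f}% of body mass)")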
thumb|250px|left|alt=The nervous system is shown as a rod with protrusions along its length. The spinal cord at the bottom connects to the hindbrain which widens out before narrowing again. This is connected to the midbrain, which again bulges, and which finally connects to the forebrain which has two large protrusions.|The main subdivisions of the embryonic vertebrate brain, which later differentiate into the forebrain, midbrain and hindbrain
All vertebrate brains share a common underlying form, which appears most clearly during early stages of embryonic development. In its earliest form, the brain appears as three swellings at the front end of the neural tube; these swellings eventually become the forebrain, midbrain, and hindbrain (the prosencephalon, mesencephalon, and rhombencephalon, respectively). At the earliest stages of brain development, the three areas are roughly equal in size. In many classes of vertebrates, such as fish and amphibians, the three parts remain similar in size in the adult, but in mammals the forebrain becomes much larger than the other parts, and the midbrain becomes very small.
The brains of vertebrates are made of very soft tissue. Living brain tissue is pinkish on the outside and mostly white on the inside, with subtle variations in color. Vertebrate brains are surrounded by a system of connective tissue membranes called meninges that separate the skull from the brain. Blood vessels enter the central nervous system through holes in the meningeal layers. The cells in the blood vessel walls are joined tightly to one another, forming the blood–brain barrier, which blocks the passage of many toxins and pathogens (though at the same time blocking antibodies and some drugs, thereby presenting special challenges in treatment of diseases of the brain).
Neuroanatomists usually divide the vertebrate brain into six main regions: the telencephalon (cerebral hemispheres), diencephalon (thalamus and hypothalamus), mesencephalon (midbrain), cerebellum, pons, and medulla oblongata. Each of these areas has a complex internal structure. Some parts, such as the cerebral cortex and the cerebellar cortex, consist of layers that are folded or convoluted to fit within the available space. Other parts, such as the thalamus and hypothalamus, consist of clusters of many small nuclei. Thousands of distinguishable areas can be identified within the vertebrate brain based on fine distinctions of neural structure, chemistry, and connectivity.
Although the same basic components are present in all vertebrate brains, some branches of vertebrate evolution have led to substantial distortions of brain geometry, especially in the forebrain area. The brain of a shark shows the basic components in a straightforward way, but in teleost fishes (the great majority of existing fish species), the forebrain has become "everted", like a sock turned inside out. In birds, there are also major changes in forebrain structure. These distortions can make it difficult to match brain components from one species with those of another species.
thumb|right|alt=Corresponding regions of human and shark brain are shown. The shark brain is splayed out, while the human brain is more compact. The shark brain starts with the medulla, which is surrounded by various structures, and ends with the telencephalon. The cross-section of the human brain shows the medulla at the bottom surrounded by the same structures, with the telencephalon thickly coating the top of the brain. |The main anatomical regions of the vertebrate brain, shown for shark and human. The same parts are present, but they differ greatly in size and shape.
Here is a list of some of the most important vertebrate brain components, along with a brief description of their functions as currently understood:
The medulla, along with the spinal cord, contains many small nuclei involved in a wide variety of sensory and involuntary motor functions such as vomiting, heart rate and digestive processes.
The pons lies in the brainstem directly above the medulla. Among other things, it contains nuclei that control often voluntary but simple acts such as sleep, respiration, swallowing, bladder function, equilibrium, eye movement, facial expressions, and posture.
The hypothalamus is a small region at the base of the forebrain, whose complexity and importance belie its size. It is composed of numerous small nuclei, each with distinct connections and neurochemistry. The hypothalamus is engaged in additional involuntary or partially voluntary acts such as sleep and wake cycles, eating and drinking, and the release of some hormones.
The thalamus is a collection of nuclei with diverse functions: some are involved in relaying information to and from the cerebral hemispheres, while others are involved in motivation. The subthalamic area (zona incerta) seems to contain action-generating systems for several types of "consummatory" behaviors such as eating, drinking, defecation, and copulation.
The cerebellum modulates the outputs of other brain systems, whether motor-related or thought-related, to make them certain and precise. Removal of the cerebellum does not prevent an animal from doing anything in particular, but it makes actions hesitant and clumsy. This precision is not built-in, but learned by trial and error. The muscle coordination learned while riding a bicycle is an example of a type of neural plasticity that may take place largely within the cerebellum. The cerebellum accounts for about 10% of the brain's total volume and contains 50% of all the brain's neurons.Knierim, James. "Cerebellum (Section 3, Chapter 5)." Neuroscience Online: An Electronic Textbook for the Neurosciences, Department of Neurobiology and Anatomy, The University of Texas Medical School at Houston, 2015. <http://neuroscience.uth.tmc.edu/s3/chapter05.html>.
The optic tectum allows actions to be directed toward points in space, most commonly in response to visual input. In mammals it is usually referred to as the superior colliculus, and its best-studied function is to direct eye movements. It also directs reaching movements and other object-directed actions. It receives strong visual inputs, but also inputs from other senses that are useful in directing actions, such as auditory input in owls and input from the thermosensitive pit organs in snakes. In some primitive fishes, such as lampreys, this region is the largest part of the brain. The superior colliculus is part of the midbrain.
The pallium is a layer of gray matter that lies on the surface of the forebrain and is the most complex and most recent evolutionary development of the brain as an organ. In reptiles and mammals, it is called the cerebral cortex. Multiple functions involve the pallium, including smell and spatial memory. In mammals, where it becomes so large as to dominate the brain, it takes over functions from many other brain areas. In many mammals, the cerebral cortex consists of folded bulges called gyri that create deep furrows or fissures called sulci. The folds increase the surface area of the cortex and therefore increase the amount of gray matter and the amount of information that can be stored and processed.
The hippocampus, strictly speaking, is found only in mammals. However, the area it derives from, the medial pallium, has counterparts in all vertebrates. There is evidence that this part of the brain is involved in complex events such as spatial memory and navigation in fishes, birds, reptiles, and mammals.
The basal ganglia are a group of interconnected structures in the forebrain. The primary function of the basal ganglia appears to be action selection: they send inhibitory signals to all parts of the brain that can generate motor behaviors, and in the right circumstances can release the inhibition, so that the action-generating systems are able to execute their actions. Reward and punishment exert their most important neural effects by altering connections within the basal ganglia.
The olfactory bulb is a special structure that processes olfactory sensory signals and sends its output to the olfactory part of the pallium. It is a major brain component in many vertebrates, but is greatly reduced in humans and other primates (whose senses are dominated by information acquired by sight rather than smell).
Mammals
The most obvious difference between the brains of mammals and other vertebrates is in terms of size. On average, a mammal has a brain roughly twice as large as that of a bird of the same body size, and ten times as large as that of a reptile of the same body size.
Size, however, is not the only difference: there are also substantial differences in shape. The hindbrain and midbrain of mammals are generally similar to those of other vertebrates, but dramatic differences appear in the forebrain, which is greatly enlarged and also altered in structure. The cerebral cortex is the part of the brain that most strongly distinguishes mammals. In non-mammalian vertebrates, the surface of the cerebrum is lined with a comparatively simple three-layered structure called the pallium. In mammals, the pallium evolves into a complex six-layered structure called neocortex or isocortex. Several areas at the edge of the neocortex, including the hippocampus and amygdala, are also much more extensively developed in mammals than in other vertebrates.
The elaboration of the cerebral cortex carries with it changes to other brain areas. The superior colliculus, which plays a major role in visual control of behavior in most vertebrates, shrinks to a small size in mammals, and many of its functions are taken over by visual areas of the cerebral cortex. The cerebellum of mammals contains a large portion (the neocerebellum) dedicated to supporting the cerebral cortex, which has no counterpart in other vertebrates.
Primates
Encephalization quotient (EQ) by species:
Human: 7.4–7.8
Chimpanzee: 2.2–2.5
Rhesus monkey: 2.1
Bottlenose dolphin: 4.14
Elephant: 1.13–2.36
Dog: 1.2
Horse: 0.9
Rat: 0.4
The brains of humans and other primates contain the same structures as the brains of other mammals, but are generally larger in proportion to body size. The encephalization quotient (EQ) is used to compare brain sizes across species. It takes into account the nonlinearity of the brain-to-body relationship. Humans have an average EQ in the 7-to-8 range, while most other primates have an EQ in the 2-to-3 range. Dolphins have values higher than those of primates other than humans, but nearly all other mammals have EQ values that are substantially lower.
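Since EQ is simply the observed brain mass divided by the brain mass expected from an allometric baseline, it can be computed directly once a baseline is chosen. The following sketch uses one common convention (expected mass = 0.12 × body mass^(2/3), with masses in grams); the baseline constants and the approximate species masses are assumptions for illustration, and other baselines yield somewhat different numbers.
# Encephalization quotient: observed brain mass / allometrically expected brain mass.
# Baseline (k = 0.12, exponent = 2/3, masses in grams) and the species masses are
# illustrative assumptions; published EQ values depend on the chosen baseline.
def encephalization_quotient(brain_g, body_g, k=0.12, exponent=2/3):
    expected = k * body_g ** exponent
    return brain_g / expected

print(f"human EQ ~ {encephalization_quotient(1350, 65000):.1f}")   # roughly 7
print(f"rat   EQ ~ {encephalization_quotient(2, 300):.1f}")        # roughly 0.4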
Most of the enlargement of the primate brain comes from a massive expansion of the cerebral cortex, especially the prefrontal cortex and the parts of the cortex involved in vision. The visual processing network of primates includes at least 30 distinguishable brain areas, with a complex web of interconnections. It has been estimated that visual processing areas occupy more than half of the total surface of the primate neocortex. The prefrontal cortex carries out functions that include planning, working memory, motivation, attention, and executive control. It takes up a much larger proportion of the brain for primates than for other species, and an especially large fraction of the human brain.
Development
thumb|right|300px|alt=Very simple drawing of the front end of a human embryo, showing each vesicle of the developing brain in a different color.|Brain of a human embryo in the sixth week of development
The brain develops in an intricately orchestrated sequence of stages. It changes in shape from a simple swelling at the front of the nerve cord in the earliest embryonic stages, to a complex array of areas and connections. Neurons are created in special zones that contain stem cells, and then migrate through the tissue to reach their ultimate locations. Once neurons have positioned themselves, their axons sprout and navigate through the brain, branching and extending as they go, until the tips reach their targets and form synaptic connections. In a number of parts of the nervous system, neurons and synapses are produced in excessive numbers during the early stages, and then the unneeded ones are pruned away.
For vertebrates, the early stages of neural development are similar across all species. As the embryo transforms from a round blob of cells into a wormlike structure, a narrow strip of ectoderm running along the midline of the back is induced to become the neural plate, the precursor of the nervous system. The neural plate folds inward to form the neural groove, and then the lips that line the groove merge to enclose the neural tube, a hollow cord of cells with a fluid-filled ventricle at the center. At the front end, the ventricles and cord swell to form three vesicles that are the precursors of the forebrain, midbrain, and hindbrain. At the next stage, the forebrain splits into two vesicles called the telencephalon (which will contain the cerebral cortex, basal ganglia, and related structures) and the diencephalon (which will contain the thalamus and hypothalamus). At about the same time, the hindbrain splits into the metencephalon (which will contain the cerebellum and pons) and the myelencephalon (which will contain the medulla oblongata). Each of these areas contains proliferative zones where neurons and glial cells are generated; the resulting cells then migrate, sometimes for long distances, to their final positions.
Once a neuron is in place, it extends dendrites and an axon into the area around it. Axons, because they commonly extend a great distance from the cell body and need to reach specific targets, grow in a particularly complex way. The tip of a growing axon consists of a blob of protoplasm called a growth cone, studded with chemical receptors. These receptors sense the local environment, causing the growth cone to be attracted or repelled by various cellular elements, and thus to be pulled in a particular direction at each point along its path. The result of this pathfinding process is that the growth cone navigates through the brain until it reaches its destination area, where other chemical cues cause it to begin generating synapses. Considering the entire brain, thousands of genes create products that influence axonal pathfinding.
The synaptic network that finally emerges is only partly determined by genes, though. In many parts of the brain, axons initially "overgrow", and then are "pruned" by mechanisms that depend on neural activity. In the projection from the eye to the midbrain, for example, the structure in the adult contains a very precise mapping, connecting each point on the surface of the retina to a corresponding point in a midbrain layer. In the first stages of development, each axon from the retina is guided to the right general vicinity in the midbrain by chemical cues, but then branches very profusely and makes initial contact with a wide swath of midbrain neurons. The retina, before birth, contains special mechanisms that cause it to generate waves of activity that originate spontaneously at a random point and then propagate slowly across the retinal layer. These waves are useful because they cause neighboring neurons to be active at the same time; that is, they produce a neural activity pattern that contains information about the spatial arrangement of the neurons. This information is exploited in the midbrain by a mechanism that causes synapses to weaken, and eventually vanish, if activity in an axon is not followed by activity of the target cell. The result of this sophisticated process is a gradual tuning and tightening of the map, leaving it finally in its precise adult form.
Similar things happen in other brain areas: an initial synaptic matrix is generated as a result of genetically determined chemical guidance, but then gradually refined by activity-dependent mechanisms, partly driven by internal dynamics, partly by external sensory inputs. In some cases, as with the retina-midbrain system, activity patterns depend on mechanisms that operate only in the developing brain, and apparently exist solely to guide development.
In humans and many other mammals, new neurons are created mainly before birth, and the infant brain contains substantially more neurons than the adult brain. There are, however, a few areas where new neurons continue to be generated throughout life. The two areas for which adult neurogenesis is well established are the olfactory bulb, which is involved in the sense of smell, and the dentate gyrus of the hippocampus, where there is evidence that the new neurons play a role in storing newly acquired memories. With these exceptions, however, the set of neurons that is present in early childhood is the set that is present for life. Glial cells are different: as with most types of cells in the body, they are generated throughout the lifespan.
There has long been debate about whether the qualities of mind, personality, and intelligence can be attributed to heredity or to upbringing—this is the nature and nurture controversy. Although many details remain to be settled, neuroscience research has clearly shown that both factors are important. Genes determine the general form of the brain, and genes determine how the brain reacts to experience. Experience, however, is required to refine the matrix of synaptic connections, which in its developed form contains far more information than the genome does. In some respects, all that matters is the presence or absence of experience during critical periods of development. In other respects, the quantity and quality of experience are important; for example, there is substantial evidence that animals raised in enriched environments have thicker cerebral cortices, indicating a higher density of synaptic connections, than animals whose levels of stimulation are restricted.
Physiology
The functions of the brain depend on the ability of neurons to transmit electrochemical signals to other cells, and their ability to respond appropriately to electrochemical signals received from other cells. The electrical properties of neurons are controlled by a wide variety of biochemical and metabolic processes, most notably the interactions between neurotransmitters and receptors that take place at synapses.
Neurotransmitters and receptors
Neurotransmitters are chemicals that are released at synapses when an action potential activates them—neurotransmitters attach themselves to receptor molecules on the membrane of the synapse's target cell, and thereby alter the electrical or chemical properties of the receptor molecules.
With few exceptions, each neuron in the brain releases the same chemical neurotransmitter, or combination of neurotransmitters, at all the synaptic connections it makes with other neurons; this rule is known as Dale's principle. Thus, a neuron can be characterized by the neurotransmitters that it releases. The great majority of psychoactive drugs exert their effects by altering specific neurotransmitter systems. This applies to drugs such as cannabinoids, nicotine, heroin, cocaine, alcohol, fluoxetine, chlorpromazine, and many others.
The two neurotransmitters that are used most widely in the vertebrate brain are glutamate, which almost always exerts excitatory effects on target neurons, and gamma-aminobutyric acid (GABA), which is almost always inhibitory. Neurons using these transmitters can be found in nearly every part of the brain. Because of their ubiquity, drugs that act on glutamate or GABA tend to have broad and powerful effects. Some general anesthetics act by reducing the effects of glutamate; most tranquilizers exert their sedative effects by enhancing the effects of GABA.
There are dozens of other chemical neurotransmitters that are used in more limited areas of the brain, often areas dedicated to a particular function. Serotonin, for example—the primary target of antidepressant drugs and many dietary aids—comes exclusively from a small brainstem area called the Raphe nuclei. Norepinephrine, which is involved in arousal, comes exclusively from a nearby small area called the locus coeruleus. Other neurotransmitters such as acetylcholine and dopamine have multiple sources in the brain, but are not as ubiquitously distributed as glutamate and GABA.
Electrical activity
thumb|right|alt=Graph showing 16 voltage traces going across the page from left to right, each showing a different signal. At the middle of the page all of the traces abruptly begin to show sharp jerky spikes, which continue to the end of the plot.|Brain electrical activity recorded from a human patient during an epileptic seizure
As a side effect of the electrochemical processes used by neurons for signaling, brain tissue generates electric fields when it is active. When large numbers of neurons show synchronized activity, the electric fields that they generate can be large enough to detect outside the skull, using electroencephalography (EEG) or magnetoencephalography (MEG). EEG recordings, along with recordings made from electrodes implanted inside the brains of animals such as rats, show that the brain of a living animal is constantly active, even during sleep. Each part of the brain shows a mixture of rhythmic and nonrhythmic activity, which may vary according to behavioral state. In mammals, the cerebral cortex tends to show large slow delta waves during sleep, faster alpha waves when the animal is awake but inattentive, and chaotic-looking irregular activity when the animal is actively engaged in a task. During an epileptic seizure, the brain's inhibitory control mechanisms fail to function and electrical activity rises to pathological levels, producing EEG traces that show large wave and spike patterns not seen in a healthy brain. Relating these population-level patterns to the computational functions of individual neurons is a major focus of current research in neurophysiology.
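The rhythms described above (delta, alpha, and so on) are usually quantified by estimating how the power of an EEG signal is distributed across frequency bands. The sketch below does this for a synthetic signal rather than a real recording; the sampling rate, noise level, and exact band boundaries are assumptions, since band definitions vary somewhat between laboratories.
import numpy as np

# Sketch: find which classical EEG band dominates a signal, using a synthetic
# 10 Hz ("alpha-like") oscillation plus noise instead of a real recording.
fs = 250                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)               # 10 seconds of samples
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

power_spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
band_power = {name: power_spectrum[(freqs >= lo) & (freqs < hi)].sum()
              for name, (lo, hi) in bands.items()}
print(max(band_power, key=band_power.get))  # "alpha" for this synthetic signal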
Metabolism
All vertebrates have a blood–brain barrier that allows metabolism inside the brain to operate differently from metabolism in other parts of the body. Glial cells play a major role in brain metabolism by controlling the chemical composition of the fluid that surrounds neurons, including levels of ions and nutrients.
Brain tissue consumes a large amount of energy in proportion to its volume, so large brains place severe metabolic demands on animals. The need to limit body weight in order, for example, to fly, has apparently led to selection for a reduction of brain size in some species, such as bats. Most of the brain's energy consumption goes into sustaining the electric charge (membrane potential) of neurons. Most vertebrate species devote between 2% and 8% of basal metabolism to the brain. In primates, however, the percentage is much higher—in humans it rises to 20–25%. The energy consumption of the brain does not vary greatly over time, but active regions of the cerebral cortex consume somewhat more energy than inactive regions; this forms the basis for the functional brain imaging methods PET, fMRI, and NIRS. The brain typically gets most of its energy from oxygen-dependent metabolism of glucose (i.e., blood sugar), but ketones provide a major alternative source, together with contributions from medium chain fatty acids (caprylic and heptanoic acids), lactate, acetate, and possibly amino acids.
Functions
thumb|right|Model of a neural circuit in the cerebellum, as proposed by James S. Albus
The function of the brain can be understood as information flow and implementation of algorithms.
To generate purposeful and unified action, the brain is the central location where information from the sense organs is collected. It processes this raw data to extract information about the structure of the environment. Next, it combines the processed sensory information with information about the current needs of the animal and with memory of past circumstances. Finally, on the basis of the results, it generates motor response patterns that are suited to maximize the welfare of the animal. These signal-processing tasks require intricate interplay between a variety of functional subsystems.
The function of the brain is to provide coherent control over the actions of an animal. A centralized brain allows groups of muscles to be co-activated in complex patterns; it also allows stimuli impinging on one part of the body to evoke responses in other parts, and it can prevent different parts of the body from acting at cross-purposes to each other.
Perception
thumb|right|alt=Drawing showing the ear, inner ear, and brain areas involved in hearing. A series of light blue arrows shows the flow of signals through the system.|Diagram of signal processing in the auditory system
The brain extracts relevant information from sensory inputs. The human brain is provided with information about light, sound, the chemical composition of the atmosphere, temperature, head orientation, limb position, the chemical composition of the bloodstream, and more. In other animals additional senses are present, such as the infrared heat-sense of snakes, the magnetic field sense of some birds, or the electric field sense of some types of fish.
Each sensory system begins with specialized receptor cells, such as light-receptive neurons in the retina of the eye, or vibration-sensitive neurons in the cochlea of the ear. The axons of sensory receptor cells travel into the spinal cord or brain, where they transmit their signals to a first-order sensory nucleus dedicated to one specific sensory modality. This primary sensory nucleus sends information to higher-order sensory areas that are dedicated to the same modality. Eventually, via a way-station in the thalamus, the signals are sent to the cerebral cortex, where they are processed to extract the relevant features, and integrated with signals coming from other sensory systems.
Motor control
Motor systems are areas of the brain that are involved in initiating body movements, that is, in activating muscles. Except for the muscles that control the eye, which are driven by nuclei in the midbrain, all the voluntary muscles in the body are directly innervated by motor neurons in the spinal cord and hindbrain. Spinal motor neurons are controlled both by neural circuits intrinsic to the spinal cord, and by inputs that descend from the brain. The intrinsic spinal circuits implement many reflex responses, and contain pattern generators for rhythmic movements such as walking or swimming. The descending connections from the brain allow for more sophisticated control.
The brain contains several motor areas that project directly to the spinal cord. At the lowest level are motor areas in the medulla and pons, which control stereotyped movements such as walking, breathing, or swallowing. At a higher level are areas in the midbrain, such as the red nucleus, which is responsible for coordinating movements of the arms and legs. At a higher level yet is the primary motor cortex, a strip of tissue located at the posterior edge of the frontal lobe. The primary motor cortex sends projections to the subcortical motor areas, but also sends a massive projection directly to the spinal cord, through the pyramidal tract. This direct corticospinal projection allows for precise voluntary control of the fine details of movements. Other motor-related brain areas exert secondary effects by projecting to the primary motor areas. Among the most important secondary areas are the premotor cortex, basal ganglia, and cerebellum.
Major areas involved in controlling movement:
Ventral horn (spinal cord): contains motor neurons that directly activate muscles.
Oculomotor nuclei (midbrain): contain motor neurons that directly activate the eye muscles.
Cerebellum (hindbrain): calibrates the precision and timing of movements.
Basal ganglia (forebrain): action selection on the basis of motivation.
Motor cortex (frontal lobe): direct cortical activation of spinal motor circuits.
Premotor cortex (frontal lobe): groups elementary movements into coordinated patterns.
Supplementary motor area (frontal lobe): sequences movements into temporal patterns.
Prefrontal cortex (frontal lobe): planning and other executive functions.
In addition to all of the above, the brain and spinal cord contain extensive circuitry to control the autonomic nervous system, which works by secreting hormones and by modulating the "smooth" muscles of the gut.
Arousal
Many animals alternate between sleeping and waking in a daily cycle. Arousal and alertness are also modulated on a finer time scale by a network of brain areas.
A key component of the arousal system is the suprachiasmatic nucleus (SCN), a tiny part of the hypothalamus located directly above the point at which the optic nerves from the two eyes cross. The SCN contains the body's central biological clock. Neurons there show activity levels that rise and fall with a period of about 24 hours (circadian rhythms): these activity fluctuations are driven by rhythmic changes in the expression of a set of "clock genes". The SCN continues to keep time even if it is excised from the brain and placed in a dish of warm nutrient solution, but it ordinarily receives input from the optic nerves, through the retinohypothalamic tract (RHT), that allows daily light-dark cycles to calibrate the clock.
The SCN projects to a set of areas in the hypothalamus, brainstem, and midbrain that are involved in implementing sleep-wake cycles. An important component of the system is the reticular formation, a group of neuron-clusters scattered diffusely through the core of the lower brain. Reticular neurons send signals to the thalamus, which in turn sends activity-level-controlling signals to every part of the cortex. Damage to the reticular formation can produce a permanent state of coma.
Sleep involves great changes in brain activity. Until the 1950s it was generally believed that the brain essentially shuts off during sleep, but this is now known to be far from true; activity continues, but patterns become very different. There are two types of sleep: REM sleep (with dreaming) and NREM (non-REM, usually without dreaming) sleep, which repeat in slightly varying patterns throughout a sleep episode. Three broad types of distinct brain activity patterns can be measured: REM, light NREM and deep NREM. During deep NREM sleep, also called slow wave sleep, activity in the cortex takes the form of large synchronized waves, whereas in the waking state it is noisy and desynchronized. Levels of the neurotransmitters norepinephrine and serotonin drop during slow wave sleep, and fall almost to zero during REM sleep; levels of acetylcholine show the reverse pattern.
Homeostasis
thumb|right|Cross-section of a human head, showing location of the hypothalamus
For any animal, survival requires maintaining a variety of parameters of bodily state within a limited range of variation: these include temperature, water content, salt concentration in the bloodstream, blood glucose levels, blood oxygen level, and others. The ability of an animal to regulate the internal environment of its body—the milieu intérieur, as pioneering physiologist Claude Bernard called it—is known as homeostasis (Greek for "standing still"). Maintaining homeostasis is a crucial function of the brain. The basic principle that underlies homeostasis is negative feedback: any time a parameter diverges from its set-point, sensors generate an error signal that evokes a response that causes the parameter to shift back toward its optimum value. (This principle is widely used in engineering, for example in the control of temperature using a thermostat.)
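The thermostat analogy can be written out in a few lines. This is a minimal sketch of negative feedback in general, not a model of any specific hypothalamic circuit; the set point, gain, and starting value are arbitrary assumptions.
# Minimal negative-feedback sketch: the error between the regulated variable and
# its set point drives a corrective response. Constants are arbitrary, not physiological.
def regulate(value, set_point=37.0, gain=0.3):
    error = set_point - value        # "sensor" generates an error signal
    return value + gain * error      # response shifts the variable back toward the set point

temperature = 35.0                   # start below the set point
for _ in range(10):
    temperature = regulate(temperature)
print(round(temperature, 2))         # converges toward 37.0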
In vertebrates, the part of the brain that plays the greatest role is the hypothalamus, a small region at the base of the forebrain whose size does not reflect its complexity or the importance of its function. The hypothalamus is a collection of small nuclei, most of which are involved in basic biological functions. Some of these functions relate to arousal or to social interactions such as sexuality, aggression, or maternal behaviors; but many of them relate to homeostasis. Several hypothalamic nuclei receive input from sensors located in the lining of blood vessels, conveying information about temperature, sodium level, glucose level, blood oxygen level, and other parameters. These hypothalamic nuclei send output signals to motor areas that can generate actions to rectify deficiencies. Some of the outputs also go to the pituitary gland, a tiny gland attached to the brain directly underneath the hypothalamus. The pituitary gland secretes hormones into the bloodstream, where they circulate throughout the body and induce changes in cellular activity.
Motivation
thumb|right|350px|Components of the basal ganglia, shown in two cross-sections of the human brain. Blue: caudate nucleus and putamen. Green: globus pallidus. Red: subthalamic nucleus. Black: substantia nigra.
Individual animals need to express survival-promoting behaviors, such as seeking food, water, shelter, and a mate. The motivational system in the brain monitors the current state of satisfaction of these goals, and activates behaviors to meet any needs that arise. The motivational system works largely by a reward–punishment mechanism. When a particular behavior is followed by favorable consequences, the reward mechanism in the brain is activated, which induces structural changes inside the brain that cause the same behavior to be repeated later, whenever a similar situation arises. Conversely, when a behavior is followed by unfavorable consequences, the brain's punishment mechanism is activated, inducing structural changes that cause the behavior to be suppressed when similar situations arise in the future.
Most organisms studied to date utilize a reward–punishment mechanism: for instance, worms and insects can alter their behavior to seek food sources or to avoid dangers. In vertebrates, the reward-punishment system is implemented by a specific set of brain structures, at the heart of which lie the basal ganglia, a set of interconnected areas at the base of the forebrain. The basal ganglia are the central site at which decisions are made: the basal ganglia exert a sustained inhibitory control over most of the motor systems in the brain; when this inhibition is released, a motor system is permitted to execute the action it is programmed to carry out. Rewards and punishments function by altering the relationship between the inputs that the basal ganglia receive and the decision-signals that are emitted. The reward mechanism is better understood than the punishment mechanism, because its role in drug abuse has caused it to be studied very intensively. Research has shown that the neurotransmitter dopamine plays a central role: addictive drugs such as cocaine, amphetamine, and nicotine either cause dopamine levels to rise or cause the effects of dopamine inside the brain to be enhanced.
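The principle that rewarded behaviors become more likely to be repeated can be caricatured with a standard reinforcement-learning update. This is only an abstract illustration under assumed names and parameters (a two-action task, a fixed learning rate, an epsilon-greedy choice rule); it is not a model of basal ganglia circuitry or of dopamine signaling.
import random

# Abstract sketch: an action that is followed by reward acquires a higher value
# and is chosen more often. Task, action names, and parameters are assumptions.
values = {"press_lever": 0.0, "ignore_lever": 0.0}
learning_rate = 0.1

def reward(action):
    return 1.0 if action == "press_lever" else 0.0   # hypothetical task

for trial in range(200):
    if random.random() < 0.1:                        # occasional exploration
        action = random.choice(list(values))
    else:                                            # otherwise pick the best-valued action
        action = max(values, key=values.get)
    values[action] += learning_rate * (reward(action) - values[action])

print(values)   # "press_lever" ends up with the higher value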
Learning and memory
Almost all animals are capable of modifying their behavior as a result of experience—even the most primitive types of worms. Because behavior is driven by brain activity, changes in behavior must somehow correspond to changes inside the brain. Already in the late 19th century theorists like Santiago Ramón y Cajal argued that the most plausible explanation is that learning and memory are expressed as changes in the synaptic connections between neurons. Until 1970, however, experimental evidence to support the synaptic plasticity hypothesis was lacking. In 1971 Tim Bliss and Terje Lømo published a paper on a phenomenon now called long-term potentiation: the paper showed clear evidence of activity-induced synaptic changes that lasted for at least several days. Since then technical advances have made these sorts of experiments much easier to carry out, and thousands of studies have been made that have clarified the mechanism of synaptic change, and uncovered other types of activity-driven synaptic change in a variety of brain areas, including the cerebral cortex, hippocampus, basal ganglia, and cerebellum. Brain-derived neurotrophic factor (BDNF) and physical activity appear to play a beneficial role in the process.
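The idea that correlated activity strengthens synapses, often summarized as Hebbian plasticity, can be sketched as a simple weight-update rule. The rule, rate, and activity patterns below are assumptions for illustration; real long-term potentiation involves far richer biochemistry than this caricature.
# Hebbian-style sketch: a synaptic weight grows when pre- and postsynaptic
# activity coincide, and is unchanged otherwise. Parameters are illustrative.
def hebbian_update(weight, pre, post, rate=0.01):
    return weight + rate * pre * post

weight = 0.2
correlated_trials = [(1.0, 1.0)] * 50      # pre and post active together
uncorrelated_trials = [(1.0, 0.0)] * 50    # pre active, post silent

for pre, post in correlated_trials + uncorrelated_trials:
    weight = hebbian_update(weight, pre, post)
print(round(weight, 2))   # 0.7: growth occurred only during the correlated trials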
Neuroscientists currently distinguish several types of learning and memory that are implemented by the brain in distinct ways:
Working memory is the ability of the brain to maintain a temporary representation of information about the task that an animal is currently engaged in. This sort of dynamic memory is thought to be mediated by the formation of cell assemblies—groups of activated neurons that maintain their activity by constantly stimulating one another.
Episodic memory is the ability to remember the details of specific events. This sort of memory can last for a lifetime. Much evidence implicates the hippocampus in playing a crucial role: people with severe damage to the hippocampus sometimes show amnesia, that is, inability to form new long-lasting episodic memories.
Semantic memory is the ability to learn facts and relationships. This sort of memory is probably stored largely in the cerebral cortex, mediated by changes in connections between cells that represent specific types of information.
Instrumental learning is the ability for rewards and punishments to modify behavior. It is implemented by a network of brain areas centered on the basal ganglia.
Motor learning is the ability to refine patterns of body movement by practicing, or more generally by repetition. A number of brain areas are involved, including the premotor cortex, basal ganglia, and especially the cerebellum, which functions as a large memory bank for microadjustments of the parameters of movement.
Research
thumb|The Human Brain Project is a large scientific research project, starting in 2013, which aims to simulate the complete human brain.
The field of neuroscience encompasses all approaches that seek to understand the brain and the rest of the nervous system. Psychology seeks to understand mind and behavior, and neurology is the medical discipline that diagnoses and treats diseases of the nervous system. The brain is also the most important organ studied in psychiatry, the branch of medicine that works to study, prevent, and treat mental disorders. Cognitive science seeks to unify neuroscience and psychology with other fields that concern themselves with the brain, such as computer science (artificial intelligence and similar fields) and philosophy.
The oldest method of studying the brain is anatomical, and until the middle of the 20th century, much of the progress in neuroscience came from the development of better cell stains and better microscopes. Neuroanatomists study the large-scale structure of the brain as well as the microscopic structure of neurons and their components, especially synapses. Among other tools, they employ a plethora of stains that reveal neural structure, chemistry, and connectivity. In recent years, the development of immunostaining techniques has allowed investigation of neurons that express specific sets of genes. Also, functional neuroanatomy uses medical imaging techniques to correlate variations in human brain structure with differences in cognition or behavior.
Neurophysiologists study the chemical, pharmacological, and electrical properties of the brain: their primary tools are drugs and recording devices. Thousands of experimentally developed drugs affect the nervous system, some in highly specific ways. Recordings of brain activity can be made using electrodes, either glued to the scalp as in EEG studies, or implanted inside the brains of animals for extracellular recordings, which can detect action potentials generated by individual neurons. Because the brain does not contain pain receptors, it is possible using these techniques to record brain activity from animals that are awake and behaving without causing distress. The same techniques have occasionally been used to study brain activity in human patients suffering from intractable epilepsy, in cases where there was a medical necessity to implant electrodes to localize the brain area responsible for epileptic seizures. Functional imaging techniques such as functional magnetic resonance imaging are also used to study brain activity; these techniques have mainly been used with human subjects, because they require a conscious subject to remain motionless for long periods of time, but they have the great advantage of being noninvasive.
thumb|left|300px|alt=Drawing showing a monkey in a restraint chair, a computer monitor, a rototic arm, and three pieces of computer equipment, with arrows between them to show the flow of information.|Design of an experiment in which brain activity from a monkey was used to control a robotic arm
Another approach to brain function is to examine the consequences of damage to specific brain areas. Even though it is protected by the skull and meninges, surrounded by cerebrospinal fluid, and isolated from the bloodstream by the blood–brain barrier, the delicate nature of the brain makes it vulnerable to numerous diseases and several types of damage. In humans, the effects of strokes and other types of brain damage have been a key source of information about brain function. Because there is no ability to experimentally control the nature of the damage, however, this information is often difficult to interpret. In animal studies, most commonly involving rats, it is possible to use electrodes or locally injected chemicals to produce precise patterns of damage and then examine the consequences for behavior.
Computational neuroscience encompasses two approaches: first, the use of computers to study the brain; second, the study of how brains perform computation. On one hand, it is possible to write a computer program to simulate the operation of a group of neurons by making use of systems of equations that describe their electrochemical activity; such simulations are known as biologically realistic neural networks. On the other hand, it is possible to study algorithms for neural computation by simulating, or mathematically analyzing, the operations of simplified "units" that have some of the properties of neurons but abstract out much of their biological complexity. The computational functions of the brain are studied both by computer scientists and neuroscientists.
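As an illustrative sketch only (not a model described in this article), the following C program simulates a single leaky integrate-and-fire unit, one of the simplest abstractions used in computational neuroscience; all constants and the function layout are arbitrary choices for demonstration:

#include <stdio.h>

/* Minimal leaky integrate-and-fire unit.  The membrane potential v decays
   toward a resting value and is driven by a constant input; when v crosses
   a threshold the unit "spikes" and is reset.  All constants are
   illustrative, not fitted to any real neuron. */
int main(void)
{
    double v = -65.0;                 /* membrane potential (mV) */
    const double v_rest = -65.0, v_thresh = -50.0, v_reset = -70.0;
    const double tau = 20.0;          /* membrane time constant (ms) */
    const double dt = 0.1;            /* integration step (ms) */
    const double drive = 18.0;        /* constant input, already scaled to mV */

    for (double t = 0.0; t < 100.0; t += dt) {
        /* forward-Euler step of  dv/dt = (-(v - v_rest) + drive) / tau */
        v += dt * (-(v - v_rest) + drive) / tau;
        if (v >= v_thresh) {          /* threshold crossing: emit a spike */
            printf("spike at t = %.1f ms\n", t);
            v = v_reset;              /* reset after the spike */
        }
    }
    return 0;
}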
Computational neurogenetic modeling is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes.
Recent years have seen increasing applications of genetic and genomic techniques to the study of the brain and a focus on the roles of neurotrophic factors and physical activity in neuroplasticity. The most common subjects are mice, because of the availability of technical tools. It is now possible with relative ease to "knock out" or mutate a wide variety of genes, and then examine the effects on brain function. More sophisticated approaches are also being used: for example, using Cre-Lox recombination it is possible to activate or deactivate genes in specific parts of the brain, at specific times.
History
thumb|right|Illustration by René Descartes of how the brain implements a reflex response
The oldest brain to have been discovered was in Armenia in the Areni-1 cave complex. The brain, estimated to be over 5,000 years old, was found in the skull of a 12 to 14-year-old girl. Although the brain was shriveled, it was well preserved due to the climate inside the cave.
Early philosophers were divided as to whether the seat of the soul lies in the brain or heart. Aristotle favored the heart, and thought that the function of the brain was merely to cool the blood. Democritus, the inventor of the atomic theory of matter, argued for a three-part soul, with intellect in the head, emotion in the heart, and lust near the liver. Hippocrates, the "father of medicine", came down unequivocally in favor of the brain. In his treatise on epilepsy he wrote:
thumb|left|upright|Andreas Vesalius' Fabrica, published in 1543, showing the base of the human brain, including optic chiasma, cerebellum, olfactory bulbs, etc.
The Roman physician Galen also argued for the importance of the brain, and theorized in some depth about how it might work. Galen traced out the anatomical relationships among brain, nerves, and muscles, demonstrating that all muscles in the body are connected to the brain through a branching network of nerves. He postulated that nerves activate muscles mechanically by carrying a mysterious substance he called pneumata psychikon, usually translated as "animal spirits". Galen's ideas were widely known during the Middle Ages, but not much further progress came until the Renaissance, when detailed anatomical study resumed, combined with the theoretical speculations of René Descartes and those who followed him. Descartes, like Galen, thought of the nervous system in hydraulic terms. He believed that the highest cognitive functions are carried out by a non-physical res cogitans, but that the majority of behaviors of humans, and all behaviors of animals, could be explained mechanistically.
The first real progress toward a modern understanding of nervous function, though, came from the investigations of Luigi Galvani, who discovered that a shock of static electricity applied to an exposed nerve of a dead frog could cause its leg to contract. Since that time, each major advance in understanding has followed more or less directly from the development of a new technique of investigation. Until the early years of the 20th century, the most important advances were derived from new methods for staining cells. Particularly critical was the invention of the Golgi stain, which (when correctly used) stains only a small fraction of neurons, but stains them in their entirety, including cell body, dendrites, and axon. Without such a stain, brain tissue under a microscope appears as an impenetrable tangle of protoplasmic fibers, in which it is impossible to determine any structure. In the hands of Camillo Golgi, and especially of the Spanish neuroanatomist Santiago Ramón y Cajal, the new stain revealed hundreds of distinct types of neurons, each with its own unique dendritic structure and pattern of connectivity.
thumb|right|alt=A drawing on yellowing paper with an archiving stamp in the corner. A spidery tree branch structure connects to the top of a mass. A few narrow processes follow away from the bottom of the mass. |Drawing by Santiago Ramón y Cajal of two types of Golgi-stained neurons from the cerebellum of a pigeon
In the first half of the 20th century, advances in electronics enabled investigation of the electrical properties of nerve cells, culminating in work by Alan Hodgkin, Andrew Huxley, and others on the biophysics of the action potential, and the work of Bernard Katz and others on the electrochemistry of the synapse. These studies complemented the anatomical picture with a conception of the brain as a dynamic entity. Reflecting the new understanding, in 1942 Charles Sherrington visualized the workings of the brain waking from sleep:
The invention of electronic computers in the 1940s, along with the development of mathematical information theory, led to a realization that brains can potentially be understood as information processing systems. This concept formed the basis of the field of cybernetics, and eventually gave rise to the field now known as computational neuroscience. The earliest attempts at cybernetics were somewhat crude in that they treated the brain as essentially a digital computer in disguise, as for example in John von Neumann's 1958 book, The Computer and the Brain. Over the years, though, accumulating information about the electrical responses of brain cells recorded from behaving animals has steadily moved theoretical concepts in the direction of increasing realism.
One of the most influential early contributions was a 1959 paper titled What the frog's eye tells the frog's brain: the paper examined the visual responses of neurons in the retina and optic tectum of frogs, and came to the conclusion that some neurons in the tectum of the frog are wired to combine elementary responses in a way that makes them function as "bug perceivers". A few years later David Hubel and Torsten Wiesel discovered cells in the primary visual cortex of monkeys that become active when sharp edges move across specific points in the field of view—a discovery for which they won a Nobel Prize. Follow-up studies in higher-order visual areas found cells that detect binocular disparity, color, movement, and aspects of shape, with areas located at increasing distances from the primary visual cortex showing increasingly complex responses. Other investigations of brain areas unrelated to vision have revealed cells with a wide variety of response correlates, some related to memory, some to abstract types of cognition such as space.
Theorists have worked to understand these response patterns by constructing mathematical models of neurons and neural networks, which can be simulated using computers. Some useful models are abstract, focusing on the conceptual structure of neural algorithms rather than the details of how they are implemented in the brain; other models attempt to incorporate data about the biophysical properties of real neurons. No model on any level is yet considered to be a fully valid description of brain function, though. The essential difficulty is that sophisticated computation by neural networks requires distributed processing in which hundreds or thousands of neurons work cooperatively—current methods of brain activity recording are only capable of isolating action potentials from a few dozen neurons at a time.
Furthermore, even single neurons appear to be complex and capable of performing computations. Brain models that do not reflect this are too abstract to be representative of brain operation; models that do try to capture it are very computationally expensive and arguably intractable with present computational resources. However, the Human Brain Project is trying to build a realistic, detailed computational model of the entire human brain. The wisdom of this approach has been publicly contested, with high-profile scientists on both sides of the argument.
In the second half of the 20th century, developments in chemistry, electron microscopy, genetics, computer science, functional brain imaging, and other fields progressively opened new windows into brain structure and function. In the United States, the 1990s were officially designated as the "Decade of the Brain" to commemorate advances made in brain research, and to promote funding for such research.
In the 21st century, these trends have continued, and several new approaches have come into prominence, including multielectrode recording, which allows the activity of many brain cells to be recorded all at the same time; genetic engineering, which allows molecular components of the brain to be altered experimentally; genomics, which allows variations in brain structure to be correlated with variations in DNA properties; and neuroimaging.
See also
Brain–computer interface
Central nervous system disease
List of neuroscience databases
Neurological disorder
Neuroplasticity
Outline of neuroscience
The brain as food
References
Further reading
"Brain", Valentino Braitenberg, Scholarpedia, 2(11):2918.
External links
Brain Museum, comparative mammalian brain collection
BrainInfo, neuroanatomy database
Neuroscience for Kids
BrainMaps.org, interactive high-resolution digital brain atlas of primate and non-primate brains
The Brain from Top to Bottom, at McGill University
The HOPES Brain Tutorial, at Stanford University
Brain injury help video
Vertebrate brain evolution
Semantic 3D model of brain from the Gallant lab at UC Berkeley
Category:Organs (anatomy)
Category:Human anatomy by organ
Category:Animal anatomy | 3,717 | 2017-01 |
ASCII | ASCII ( ), abbreviated from American Standard Code for Information Interchange, is a character encoding standard (the Internet Assigned Numbers Authority (IANA) prefers the name US-ASCII). ASCII codes represent text in computers, telecommunications equipment, and other devices. Most modern character-encoding schemes are based on ASCII, although they support many additional characters.
thumb|361px|ASCII chart from a 1972 printer manual (b1 is the least significant bit).
Overview
ASCII was developed from telegraph code. Its first commercial use was as a seven-bit teleprinter code promoted by Bell data services. Work on the ASCII standard began on October 6, 1960, with the first meeting of the American Standards Association's (ASA) (now the American National Standards Institute or ANSI) X3.2 subcommittee. The first edition of the standard was published in 1963, underwent a major revision during 1967, and experienced its most recent update during 1986. Compared to earlier telegraph codes, the proposed Bell code and ASCII were both ordered for more convenient sorting (i.e., alphabetization) of lists, and added features for devices other than teleprinters.
Originally based on the English alphabet, ASCII encodes 128 specified characters into seven-bit integers as shown by the ASCII chart above. The characters encoded are numbers 0 to 9, lowercase letters a to z, uppercase letters A to Z, basic punctuation symbols, control codes that originated with Teletype machines, and a space. For example, lowercase j would become binary 1101010 and decimal 106. ASCII includes definitions for 128 characters: 33 are non-printing control characters (many now obsolete) that affect how text and space are processed (International Organization for Standardization (December 1, 1975). "The set of control characters for ISO 646". Internet Assigned Numbers Authority Registry. Accessed 2008-04-14.), and 95 printable characters, including the space, which is considered an invisible graphic.
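As a quick illustration (not part of the standard itself; the helper name show is invented for this sketch), a short C program prints the decimal, hexadecimal and seven-bit binary code of a few characters:

#include <stdio.h>

/* Print the code of a character in decimal, hexadecimal and 7-bit binary. */
static void show(char c)
{
    printf("'%c' = %3d decimal, 0x%02X hex, binary ",
           c, c, (unsigned)(unsigned char)c);
    for (int bit = 6; bit >= 0; bit--)        /* 7 bits, most significant first */
        putchar(((c >> bit) & 1) ? '1' : '0');
    putchar('\n');
}

int main(void)
{
    show('j');    /* 106 decimal, 0x6A hex, 1101010 binary */
    show('A');
    show(' ');
    return 0;
}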
A June 1992 RFC and the Internet Assigned Numbers Authority registry of character sets recognize the following case-insensitive aliases for ASCII as suitable for use on the Internet: ANSI_X3.4-1968 (canonical name), iso-ir-6, ANSI_X3.4-1986, ISO_646.irv:1991, ASCII, ISO646-US, US-ASCII (preferred MIME name),Internet Assigned Numbers Authority (IANA) (May 14, 2007). "Character Sets". Accessed 2008-04-14. us, IBM367, cp367, and csASCII.
Of these, the IANA encourages use of the name "US-ASCII" for Internet uses of ASCII; although the "US-" prefix is redundant, it helps to avoid the frequent confusion of the term ASCII with 8-bit-based character encoding schemes such as Extended ASCII, or with UTF-8. One often finds this name in the optional "charset" parameter in the Content-Type header of some MIME messages, in the equivalent "meta" element of some HTML documents, and in the encoding declaration part of the prologue of some XML documents.
History
The American Standard Code for Information Interchange (ASCII) was developed under the auspices of a committee of the American Standards Association (ASA), called the X3 committee, by its X3.2 (later X3L2) subcommittee, and later by that subcommittee's X3.2.4 working group (now INCITS). The ASA became the United States of America Standards Institute (USASI) and ultimately the American National Standards Institute (ANSI).
With the other special characters and control codes filled in, ASCII was published as ASA X3.4-1963, leaving 28 code positions without any assigned meaning, reserved for future standardization, and one unassigned control code. There was some debate at the time whether there should be more control characters rather than the lowercase alphabet. The indecision did not last long: during May 1963 the CCITT Working Party on the New Telegraph Alphabet proposed to assign lowercase characters to sticks 6 and 7,Brief Report: Meeting of CCITT Working Party on the New Telegraph Alphabet, May 13–15, 1963. and International Organization for Standardization TC 97 SC 2 voted during October to incorporate the change into its draft standard.Report of ISO/TC/97/SC 2 – Meeting of October 29–31, 1963. The X3.2.4 task group voted its approval for the change to ASCII at its May 1963 meeting.Report on Task Group X3.2.4, June 11, 1963, Pentagon Building, Washington, DC. Locating the lowercase letters in sticks 6 and 7 caused the characters to differ in bit pattern from the upper case by a single bit, which simplified case-insensitive character matching and the construction of keyboards and printers.
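The practical payoff of that single-bit difference can be sketched in C (an illustration only; the function names are invented here): flipping bit 0x20 switches the case of a letter, and OR-ing it in gives a crude case-insensitive comparison that is valid when both operands are letters:

#include <stdio.h>

/* In ASCII, 'A' is 0x41 and 'a' is 0x61: upper and lower case letters
   differ only in bit 0x20, so that bit acts as a "case bit". */
static char toggle_case(char c)
{
    if ((c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z'))
        return (char)(c ^ 0x20);      /* flip the single case bit */
    return c;
}

/* Case-insensitive equality, valid when both arguments are letters. */
static int eq_ignore_case(char a, char b)
{
    return (a | 0x20) == (b | 0x20);
}

int main(void)
{
    printf("%c %c\n", toggle_case('g'), toggle_case('Q'));   /* prints: G q */
    printf("%d\n", eq_ignore_case('Z', 'z'));                /* prints: 1 */
    return 0;
}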
The X3 committee made other changes, including other new characters (the brace and vertical bar characters),Report of Meeting No. 8, Task Group X3.2.4, December 17 and 18, 1963 renaming some control characters (SOM became start of header (SOH)) and moving or removing others (RU was removed). ASCII was subsequently updated as USAS X3.4-1967, then USAS X3.4-1968, ANSI X3.4-1977, and finally, ANSI X3.4-1986.
Revisions of the ASCII standard:
ASA X3.4-1963
ASA X3.4-1965 (approved, but not published, nevertheless used by IBM 2260 & 2265 Display Stations and IBM 2848 Display Control)
USAS X3.4-1967
USAS X3.4-1968
ANSI X3.4-1977
ANSI X3.4-1986
ANSI X3.4-1986 (R1992)
ANSI X3.4-1986 (R1997)
ANSI INCITS 4-1986 (R2002)
ANSI INCITS 4-1986 (R2007)
ANSI INCITS 4-1986 (R2012)
In the X3.15 standard, the X3 committee also addressed how ASCII should be transmitted (least significant bit first), and how it should be recorded on perforated tape. They proposed a 9-track standard for magnetic tape, and attempted to deal with some punched card formats.
Design considerations
Bit width
The X3.2 subcommittee designed ASCII based on the earlier teleprinter encoding systems. Like other character encodings, ASCII specifies a correspondence between digital bit patterns and character symbols (i.e. graphemes and control characters). This allows digital devices to communicate with each other and to process, store, and communicate character-oriented information such as written language. Before ASCII was developed, the encodings in use included 26 alphabetic characters, 10 numerical digits, and from 11 to 25 special graphic symbols. To include all these, and control characters compatible with the Comité Consultatif International Téléphonique et Télégraphique (CCITT) International Telegraph Alphabet No. 2 (ITA2) standard of 1924, FIELDATA (1956), and early EBCDIC (1963), more than 64 codes were required for ASCII.
ITA2 was in turn based on the 5-bit telegraph code that Émile Baudot invented in 1870 and patented in 1874.
The committee debated the possibility of a shift function (like in ITA2), which would allow more than 64 codes to be represented by a six-bit code. In a shifted code, some character codes determine choices between options for the following character codes. It allows compact encoding, but is less reliable for data transmission, as an error in transmitting the shift code typically makes a long part of the transmission unreadable. The standards committee decided against shifting, and so ASCII required at least a seven-bit code.
The committee considered an eight-bit code, since eight bits (octets) would allow two four-bit patterns to efficiently encode two digits with binary-coded decimal. However, it would require all data transmission to send eight bits when seven could suffice. The committee voted to use a seven-bit code to minimize costs associated with data transmission. Since perforated tape at the time could record eight bits in one position, it also allowed for a parity bit for error checking if desired. Eight-bit machines (with octets as the native data type) that did not use parity checking typically set the eighth bit to 0. In some printers, the high bit was used to enable Italics printing.
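The parity idea can be sketched as follows (an illustrative C fragment, not a prescribed algorithm; the function name is invented): even parity is computed over the seven data bits and stored in the eighth bit, letting the receiver detect any single-bit error:

#include <stdio.h>

/* Even parity: set bit 7 so that the total number of 1 bits is even.
   The receiver recomputes parity to detect a single-bit transmission error. */
static unsigned char add_even_parity(unsigned char c)
{
    unsigned char seven = c & 0x7F;          /* the 7 data bits */
    int ones = 0;
    for (int bit = 0; bit < 7; bit++)
        ones += (seven >> bit) & 1;
    return (ones % 2) ? (unsigned char)(seven | 0x80) : seven;
}

int main(void)
{
    /* 'C' is 0x43 = 100 0011, three 1 bits, so the parity bit is set. */
    printf("0x%02X\n", (unsigned)add_even_parity('C'));   /* prints 0xC3 */
    return 0;
}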
Internal organization
The code itself was patterned so that most control codes were together, and all graphic codes were together, for ease of identification. The first two so-called ASCII sticks (32 positions) were reserved for control characters. The "space" character had to come before graphics to make sorting easier, so it became position 20hex; for the same reason, many special signs commonly used as separators were placed before digits. The committee decided it was important to support uppercase 64-character alphabets, and chose to pattern ASCII so it could be reduced easily to a usable 64-character set of graphic codes, as was done in the DEC SIXBIT code (1963). Lowercase letters were therefore not interleaved with uppercase. To keep options available for lowercase letters and other graphics, the special and numeric codes were arranged before the letters, and the letter A was placed in position 41hex to match the draft of the corresponding British standard. The digits 0–9 were arranged so they correspond to values in binary prefixed with 011, making conversion with binary-coded decimal straightforward.
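A small C illustration (not from the standard) of why this layout is convenient: the numeric value of a digit character is its low four bits, and OR-ing a value 0–9 with 0x30 turns it back into the corresponding character:

#include <stdio.h>

int main(void)
{
    char digit = '7';                   /* ASCII 0x37 = binary 011 0111 */
    int value  = digit & 0x0F;          /* the low four bits hold the value 7 */
    char back  = (char)(value | 0x30);  /* prefixing 011 (0x30) restores '7' */
    printf("%d %c\n", value, back);     /* prints: 7 7 */
    return 0;
}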
Many of the non-alphanumeric characters were positioned to correspond to their shifted position on typewriters; an important subtlety is that these were based on mechanical typewriters, not electric typewriters. Mechanical typewriters followed the standard set by the Remington No. 2 (1878), the first typewriter with a shift key, and the shifted values of 23456789- were "#$%_&'(); early typewriters omitted 0 and 1, using O (capital letter o) and l (lowercase letter L) instead, but 1! and 0) pairs became standard once 0 and 1 became common. Thus, in ASCII !"#$% were placed in the second stick, positions 1–5, corresponding to the digits 1–5 in the adjacent stick. The parentheses could not correspond to 9 and 0, however, because the place corresponding to 0 was taken by the space character. This was accommodated by removing _ (underscore) from 6 and shifting the remaining characters, which corresponded to many European typewriters that placed the parentheses with 8 and 9. This discrepancy from typewriters led to bit-paired keyboards, notably the Teletype Model 33, which used the left-shifted layout corresponding to ASCII, not to traditional mechanical typewriters. Electric typewriters, notably the more recently introduced IBM Selectric (1961), used a somewhat different layout that has become standard on computers (following the IBM PC (1981), especially Model M (1984)), and thus shift values for symbols on modern keyboards do not correspond as closely to the ASCII table as earlier keyboards did. The /? pair also dates to the No. 2, and the ,< .> pairs were used on some keyboards (others, including the No. 2, did not shift , (comma) or . (full stop) so they could be used in uppercase without unshifting). However, ASCII split the ;: pair (dating to No. 2), and rearranged mathematical symbols (varied conventions, commonly -* =+) to :* ;+ -=.
Some common characters were not included, notably ½¼¢, while ^`~ were included as diacritics for international use, and <> for mathematical use, together with the simple line characters \| (in addition to common /). The @ symbol was not used in continental Europe and the committee expected it would be replaced by an accented À in the French variation, so the @ was placed in position 40hex, right before the letter A.
The control codes felt essential for data transmission were the start of message (SOM), end of address (EOA), end of message (EOM), end of transmission (EOT), "who are you?" (WRU), "are you?" (RU), a reserved device control (DC0), synchronous idle (SYNC), and acknowledge (ACK). These were positioned to maximize the Hamming distance between their bit patterns.
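For illustration only (the committee's actual working method is not described here), the Hamming distance between two seven-bit codes is the number of 1 bits in their XOR, as this C helper with an invented name computes:

#include <stdio.h>

/* Number of bit positions in which two 7-bit codes differ. */
static int hamming7(unsigned a, unsigned b)
{
    unsigned diff = (a ^ b) & 0x7F;     /* differing bits, restricted to 7 bits */
    int count = 0;
    while (diff) {
        count += diff & 1;
        diff >>= 1;
    }
    return count;
}

int main(void)
{
    printf("%d\n", hamming7('A', 'a'));     /* 1: they differ only in the case bit */
    printf("%d\n", hamming7(0x00, 0x7F));   /* 7: all seven bits differ */
    return 0;
}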
Character order
ASCII-code order is also called ASCIIbetical order. Collation of data is sometimes done in this order rather than "standard" alphabetical order (collating sequence). The main deviations in ASCII order are:
All uppercase come before lowercase letters; for example, "Z" precedes "a"
Digits and many punctuation marks come before letters; for example, "4" precedes "one"
Numbers are sorted naïvely as strings; for example, "10" precedes "2"
An intermediate order, readily implemented, converts uppercase letters to lowercase before comparing ASCII values. Naïve number sorting can be averted by zero-filling all numbers (e.g. "02" will sort before "10" as expected), although this is an external fix and has nothing to do with the ordering itself.
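A brief C sketch (illustrative; casefold_cmp is an invented name) contrasts raw ASCIIbetical comparison with the intermediate, case-folded order:

#include <stdio.h>
#include <string.h>
#include <ctype.h>

/* Compare two strings after folding letters to lowercase: the
   "intermediate order" described above. */
static int casefold_cmp(const char *a, const char *b)
{
    while (*a && *b) {
        int ca = tolower((unsigned char)*a);
        int cb = tolower((unsigned char)*b);
        if (ca != cb)
            return ca - cb;
        a++;
        b++;
    }
    return (unsigned char)*a - (unsigned char)*b;
}

int main(void)
{
    /* Raw ASCII order: "Zebra" sorts before "apple", since 'Z' (0x5A) < 'a' (0x61). */
    printf("%d\n", strcmp("Zebra", "apple") < 0);        /* prints 1 */
    /* Case-folded order: "apple" sorts before "Zebra", as in a dictionary. */
    printf("%d\n", casefold_cmp("apple", "Zebra") < 0);  /* prints 1 */
    return 0;
}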
Character groups
Control characters
ASCII reserves the first 32 codes (numbers 0–31 decimal) for control characters: codes originally intended not to represent printable information, but rather to control devices (such as printers) that make use of ASCII, or to provide meta-information about data streams such as those stored on magnetic tape.
For example, character 10 represents the "line feed" function (which causes a printer to advance its paper), and character 8 represents "backspace". RFC 2822 refers to control characters that do not include carriage return, line feed or white space as non-whitespace control characters. (NB. NO-WS-CTL.) Except for the control characters that prescribe elementary line-oriented formatting, ASCII does not define any mechanism for describing the structure or appearance of text within a document. Other schemes, such as markup languages, address page and document layout and formatting.
The original ASCII standard used only short descriptive phrases for each control character. The ambiguity this caused was sometimes intentional, for example where a character would be used slightly differently on a terminal link than on a data stream, and sometimes accidental, for example with the meaning of "delete".
Probably the most influential single device on the interpretation of these characters was the Teletype Model 33 ASR, which was a printing terminal with an available paper tape reader/punch option. Paper tape was a very popular medium for long-term program storage until the 1980s, less costly and in some ways less fragile than magnetic tape. In particular, the Teletype Model 33 machine assignments for codes 17 (Control-Q, DC1, also known as XON), 19 (Control-S, DC3, also known as XOFF), and 127 (Delete) became de facto standards. The Model 33 was also notable for taking the description of Control-G (BEL, meaning audibly alert the operator) literally as the unit contained an actual bell which it rang when it received a BEL character. Because the keytop for the O key also showed a left-arrow symbol (from ASCII-1963, which had this character instead of underscore), a noncompliant use of code 15 (Control-O, Shift In) interpreted as "delete previous character" was also adopted by many early timesharing systems but eventually became neglected.
When a Teletype 33 ASR equipped with the automatic paper tape reader received a Control-S (XOFF, an abbreviation for transmit off), it caused the tape reader to stop; receiving Control-Q (XON, "transmit on") caused the tape reader to resume. This technique became adopted by several early computer operating systems as a "handshaking" signal warning a sender to stop transmission because of impending overflow; it persists to this day in many systems as a manual output control technique. On some systems Control-S retains its meaning but Control-Q is replaced by a second Control-S to resume output. The 33 ASR also could be configured to employ Control-R (DC2) and Control-T (DC4) to start and stop the tape punch; on some units equipped with this function, the corresponding control character lettering on the keycaps above the letters read TAPE and TAPE (the latter distinguished in the original labelling) respectively.
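The same XON/XOFF convention survives on POSIX systems as "software flow control" in the terminal driver; the following sketch (illustrative, and assuming a POSIX termios interface is available) enables it on standard input:

#include <stdio.h>
#include <termios.h>
#include <unistd.h>

/* Turn on XON/XOFF software flow control for the terminal on stdin.
   IXON honours Control-S/Control-Q typed by the user to pause and resume
   output; IXOFF lets the system itself send XOFF/XON as its input buffer
   fills and drains. */
int main(void)
{
    struct termios tio;
    if (tcgetattr(STDIN_FILENO, &tio) != 0) {
        perror("tcgetattr");
        return 1;
    }
    tio.c_iflag |= IXON | IXOFF;
    if (tcsetattr(STDIN_FILENO, TCSANOW, &tio) != 0) {
        perror("tcsetattr");
        return 1;
    }
    puts("XON/XOFF software flow control enabled");
    return 0;
}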
Code 127 is officially named "delete" but the Teletype label was "rubout". Since the original standard did not give detailed interpretation for most control codes, interpretations of this code varied. The original Teletype meaning, and the intent of the standard, was to make it an ignored character, the same as NUL (all zeroes). This was useful specifically for paper tape, because punching the all-ones bit pattern on top of an existing mark would obliterate it. Tapes designed to be "hand edited" could even be produced with spaces of extra NULs (blank tape) so that a block of characters could be "rubbed out" and then replacements put into the empty space.
Some software assigned special meanings to ASCII characters sent to the software from the terminal. Operating systems from Digital Equipment Corporation, for example, interpreted DEL as an input character as meaning "remove previously-typed input character", and this interpretation also became common in Unix systems. Most other systems used BS for that meaning and used DEL to mean "remove the character at the cursor". That latter interpretation is the most common now.
Many more of the control codes have been given meanings quite different from their original ones. The "escape" character (ESC, code 27), for example, was intended originally to allow sending other control characters as literals instead of invoking their meaning. This is the same meaning of "escape" encountered in URL encodings, C language strings, and other systems where certain characters have a reserved meaning. Over time this meaning has been co-opted and has eventually been changed. In modern use, an ESC sent to the terminal usually indicates the start of a command sequence usually in the form of a so-called "ANSI escape code" (or, more properly, a "Control Sequence Introducer") from ECMA-48 (1972) and its successors, beginning with ESC followed by a "[" (left-bracket) character. An ESC sent from the terminal is most often used as an out-of-band character used to terminate an operation, as in the TECO and vi text editors. In graphical user interface (GUI) and windowing systems, ESC generally causes an application to abort its current operation or to exit (terminate) altogether.
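A minimal C example (assuming a terminal that honours ECMA-48/ANSI sequences) shows the Control Sequence Introducer in use: ESC, code 27, followed by "[" begins a command such as "select red foreground":

#include <stdio.h>

int main(void)
{
    /* "\x1b[" is ESC (code 27) followed by '[': the Control Sequence
       Introducer.  "31m" selects a red foreground and "0m" resets it. */
    printf("\x1b[31mred on an ANSI-compatible terminal\x1b[0m\n");
    printf("ESC has code %d\n", '\x1b');    /* prints 27 */
    return 0;
}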
The inherent ambiguity of many control characters, combined with their historical usage, created problems when transferring "plain text" files between systems. The best example of this is the newline problem on various operating systems. Teletype machines required that a line of text be terminated with both "Carriage Return" (which moves the printhead to the beginning of the line) and "Line Feed" (which advances the paper one line without moving the printhead). The name "Carriage Return" comes from the fact that on a manual typewriter the carriage holding the paper moved while the position where the typebars struck the ribbon remained stationary. The entire carriage had to be pushed (returned) to the right in order to position the left margin of the paper for the next line.
DEC operating systems (OS/8, RT-11, RSX-11, RSTS, TOPS-10, etc.) used both characters to mark the end of a line so that the console device (originally Teletype machines) would work. By the time so-called "glass TTYs" (later called CRTs or terminals) came along, the convention was so well established that backward compatibility necessitated continuing the convention. When Gary Kildall created CP/M he was inspired by some command line interface conventions used in DEC's RT-11. Until the introduction of PC DOS in 1981, IBM had no hand in this because their 1970s operating systems used EBCDIC instead of ASCII and they were oriented toward punch-card input and line printer output on which the concept of carriage return was meaningless. IBM's PC DOS (also marketed as MS-DOS by Microsoft) inherited the convention by virtue of being a clone of CP/M, and Windows inherited it from MS-DOS.
Unfortunately, requiring two characters to mark the end of a line introduces unnecessary complexity and questions as to how to interpret each character when encountered alone. To simplify matters plain text data streams, including files, on Multics used line feed (LF) alone as a line terminator. Unix and Unix-like systems, and Amiga systems, adopted this convention from Multics. The original Macintosh OS, Apple DOS, and ProDOS, on the other hand, used carriage return (CR) alone as a line terminator; however, since Apple replaced these operating systems with the Unix-based macOS operating system, they now use line feed (LF) as well. The Radio Shack TRS-80 also used a lone CR to terminate lines.
Computers attached to the ARPANET included machines running operating systems such as TOPS-10 and TENEX using CR-LF line endings, machines running operating systems such as Multics using LF line endings, and machines running operating systems such as OS/360 that represented lines as a character count followed by the characters of the line and that used EBCDIC rather than ASCII. The Telnet protocol defined an ASCII "Network Virtual Terminal" (NVT), so that connections between hosts with different line-ending conventions and character sets could be supported by transmitting a standard text format over the network. Telnet used ASCII along with CR-LF line endings, and software using other conventions would translate between the local conventions and the NVT. The File Transfer Protocol adopted the Telnet protocol, including use of the Network Virtual Terminal, for use when transmitting commands and transferring data in the default ASCII mode. This adds complexity to implementations of those protocols, and to other network protocols, such as those used for E-mail and the World Wide Web, on systems not using the NVT's CR-LF line-ending convention.
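A common defensive technique when importing "plain text" of unknown origin, sketched here in C (an illustration, not a standardised algorithm), is to normalise CR-LF and lone CR line endings to LF:

#include <stdio.h>

/* Copy stdin to stdout, converting CR-LF and lone CR line endings to LF. */
int main(void)
{
    int c, prev = 0;
    while ((c = getchar()) != EOF) {
        if (c == '\r') {
            putchar('\n');                  /* treat CR as a line ending */
        } else if (c == '\n' && prev == '\r') {
            /* the LF of a CR-LF pair: a newline was already emitted above */
        } else {
            putchar(c);
        }
        prev = c;
    }
    return 0;
}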
Older operating systems such as TOPS-10, along with CP/M, tracked file length only in units of disk blocks and used Control-Z (SUB) to mark the end of the actual text in the file. For this reason, EOF, or end-of-file, was used colloquially and conventionally as a three-letter acronym for Control-Z instead of SUBstitute. The end-of-text code (ETX), also known as Control-C, was inappropriate for a variety of reasons, while using Z as the control code to end a file was analogous to its position at the end of the alphabet and served as a very convenient mnemonic aid. A historically common and still prevalent convention uses the ETX code to interrupt and halt a program via an input data stream, usually from a keyboard.
In C library and Unix conventions, the null character is used to terminate text strings; such null-terminated strings can be known in abbreviation as ASCIZ or ASCIIZ, where here Z stands for "zero".
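The convention can be illustrated with a minimal re-implementation of the standard length scan (a sketch only; the real C library already provides strlen):

#include <stdio.h>

/* Length of a NUL-terminated ("ASCIZ") string: scan to the zero byte. */
static unsigned long my_strlen(const char *s)
{
    const char *p = s;
    while (*p != '\0')      /* the terminating NUL (code 0) is not counted */
        p++;
    return (unsigned long)(p - s);
}

int main(void)
{
    printf("%lu\n", my_strlen("ASCII"));    /* prints 5 */
    return 0;
}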
Binary | Oct | Dec | Hex | Abbreviation ('63, '65, '67) | Symbol | Caret | C escape | Name ('67)
000 0000 | 000 | 0 | 00 | NULL ('63), NUL ('67) | ␀ | ^@ | \0 | Null
000 0001 | 001 | 1 | 01 | SOM ('63), SOH ('67) | ␁ | ^A | | Start of Heading
000 0010 | 002 | 2 | 02 | EOA ('63), STX ('67) | ␂ | ^B | | Start of Text
000 0011 | 003 | 3 | 03 | EOM ('63), ETX ('67) | ␃ | ^C | | End of Text
000 0100 | 004 | 4 | 04 | EOT | ␄ | ^D | | End of Transmission
000 0101 | 005 | 5 | 05 | WRU ('63), ENQ ('67) | ␅ | ^E | | Enquiry
000 0110 | 006 | 6 | 06 | RU ('63), ACK ('67) | ␆ | ^F | | Acknowledgement
000 0111 | 007 | 7 | 07 | BELL ('63), BEL ('67) | ␇ | ^G | \a | Bell
000 1000 | 010 | 8 | 08 | FE0 ('63), BS ('67) | ␈ | ^H | \b | Backspace
000 1001 | 011 | 9 | 09 | HT/SK ('63), HT ('67) | ␉ | ^I | \t | Horizontal Tab
000 1010 | 012 | 10 | 0A | LF | ␊ | ^J | \n | Line Feed
000 1011 | 013 | 11 | 0B | VTAB ('63), VT ('67) | ␋ | ^K | \v | Vertical Tab
000 1100 | 014 | 12 | 0C | FF | ␌ | ^L | \f | Form Feed
000 1101 | 015 | 13 | 0D | CR | ␍ | ^M | \r | Carriage Return
000 1110 | 016 | 14 | 0E | SO | ␎ | ^N | | Shift Out
000 1111 | 017 | 15 | 0F | SI | ␏ | ^O | | Shift In
001 0000 | 020 | 16 | 10 | DC0 ('63), DLE ('67) | ␐ | ^P | | Data Link Escape
001 0001 | 021 | 17 | 11 | DC1 | ␑ | ^Q | | Device Control 1 (often XON)
001 0010 | 022 | 18 | 12 | DC2 | ␒ | ^R | | Device Control 2
001 0011 | 023 | 19 | 13 | DC3 | ␓ | ^S | | Device Control 3 (often XOFF)
001 0100 | 024 | 20 | 14 | DC4 | ␔ | ^T | | Device Control 4
001 0101 | 025 | 21 | 15 | ERR ('63), NAK ('67) | ␕ | ^U | | Negative Acknowledgement
001 0110 | 026 | 22 | 16 | SYNC ('63), SYN ('67) | ␖ | ^V | | Synchronous Idle
001 0111 | 027 | 23 | 17 | LEM ('63), ETB ('67) | ␗ | ^W | | End of Transmission Block
001 1000 | 030 | 24 | 18 | S0 ('63), CAN ('67) | ␘ | ^X | | Cancel
001 1001 | 031 | 25 | 19 | S1 ('63), EM ('67) | ␙ | ^Y | | End of Medium
001 1010 | 032 | 26 | 1A | S2 ('63), SS ('65), SUB ('67) | ␚ | ^Z | | Substitute
001 1011 | 033 | 27 | 1B | S3 ('63), ESC ('67) | ␛ | ^[ | \e | Escape
001 1100 | 034 | 28 | 1C | S4 ('63), FS ('67) | ␜ | ^\ | | File Separator
001 1101 | 035 | 29 | 1D | S5 ('63), GS ('67) | ␝ | ^] | | Group Separator
001 1110 | 036 | 30 | 1E | S6 ('63), RS ('67) | ␞ | ^^ | | Record Separator
001 1111 | 037 | 31 | 1F | S7 ('63), US ('67) | ␟ | ^_ | | Unit Separator
111 1111 | 177 | 127 | 7F | DEL | ␡ | ^? | | Delete
Other representations might be used by specialist equipment, for example ISO 2047 graphics or hexadecimal numbers.
Printable characters
Codes 20hex to 7Ehex, known as the printable characters, represent letters, digits, punctuation marks, and a few miscellaneous symbols. There are 95 printable characters in total.
Code 20hex, the "space" character, denotes the space between words, as produced by the space bar of a keyboard. Since the space character is considered an invisible graphic (rather than a control character) it is listed in the table below instead of in the previous section.
Code 7Fhex corresponds to the non-printable "delete" (DEL) control character and is therefore omitted from this chart; it is covered in the previous section's chart. Earlier versions of ASCII used the up arrow instead of the caret (5Ehex) and the left arrow instead of the underscore (5Fhex).
Binary | Oct | Dec | Hex | Glyph ('63, '65, '67 where they differ)
010 0000 | 040 | 32 | 20 | (space)
010 0001 | 041 | 33 | 21 | !
010 0010 | 042 | 34 | 22 | "
010 0011 | 043 | 35 | 23 | #
010 0100 | 044 | 36 | 24 | $
010 0101 | 045 | 37 | 25 | %
010 0110 | 046 | 38 | 26 | &
010 0111 | 047 | 39 | 27 | '
010 1000 | 050 | 40 | 28 | (
010 1001 | 051 | 41 | 29 | )
010 1010 | 052 | 42 | 2A | *
010 1011 | 053 | 43 | 2B | +
010 1100 | 054 | 44 | 2C | ,
010 1101 | 055 | 45 | 2D | -
010 1110 | 056 | 46 | 2E | .
010 1111 | 057 | 47 | 2F | /
011 0000 | 060 | 48 | 30 | 0
011 0001 | 061 | 49 | 31 | 1
011 0010 | 062 | 50 | 32 | 2
011 0011 | 063 | 51 | 33 | 3
011 0100 | 064 | 52 | 34 | 4
011 0101 | 065 | 53 | 35 | 5
011 0110 | 066 | 54 | 36 | 6
011 0111 | 067 | 55 | 37 | 7
011 1000 | 070 | 56 | 38 | 8
011 1001 | 071 | 57 | 39 | 9
011 1010 | 072 | 58 | 3A | :
011 1011 | 073 | 59 | 3B | ;
011 1100 | 074 | 60 | 3C | <
011 1101 | 075 | 61 | 3D | =
011 1110 | 076 | 62 | 3E | >
011 1111 | 077 | 63 | 3F | ?
100 0000 | 100 | 64 | 40 | @ ('63: @, '65: `, '67: @)
100 0001 | 101 | 65 | 41 | A
100 0010 | 102 | 66 | 42 | B
100 0011 | 103 | 67 | 43 | C
100 0100 | 104 | 68 | 44 | D
100 0101 | 105 | 69 | 45 | E
100 0110 | 106 | 70 | 46 | F
100 0111 | 107 | 71 | 47 | G
100 1000 | 110 | 72 | 48 | H
100 1001 | 111 | 73 | 49 | I
100 1010 | 112 | 74 | 4A | J
100 1011 | 113 | 75 | 4B | K
100 1100 | 114 | 76 | 4C | L
100 1101 | 115 | 77 | 4D | M
100 1110 | 116 | 78 | 4E | N
100 1111 | 117 | 79 | 4F | O
101 0000 | 120 | 80 | 50 | P
101 0001 | 121 | 81 | 51 | Q
101 0010 | 122 | 82 | 52 | R
101 0011 | 123 | 83 | 53 | S
101 0100 | 124 | 84 | 54 | T
101 0101 | 125 | 85 | 55 | U
101 0110 | 126 | 86 | 56 | V
101 0111 | 127 | 87 | 57 | W
101 1000 | 130 | 88 | 58 | X
101 1001 | 131 | 89 | 59 | Y
101 1010 | 132 | 90 | 5A | Z
101 1011 | 133 | 91 | 5B | [
101 1100 | 134 | 92 | 5C | \ ('63: \, '65: ~, '67: \)
101 1101 | 135 | 93 | 5D | ]
101 1110 | 136 | 94 | 5E | ^ ('63: ↑, '67: ^)
101 1111 | 137 | 95 | 5F | _ ('63: ←, '67: _)
110 0000 | 140 | 96 | 60 | ` ('65: @, '67: `)
110 0001 | 141 | 97 | 61 | a
110 0010 | 142 | 98 | 62 | b
110 0011 | 143 | 99 | 63 | c
110 0100 | 144 | 100 | 64 | d
110 0101 | 145 | 101 | 65 | e
110 0110 | 146 | 102 | 66 | f
110 0111 | 147 | 103 | 67 | g
110 1000 | 150 | 104 | 68 | h
110 1001 | 151 | 105 | 69 | i
110 1010 | 152 | 106 | 6A | j
110 1011 | 153 | 107 | 6B | k
110 1100 | 154 | 108 | 6C | l
110 1101 | 155 | 109 | 6D | m
110 1110 | 156 | 110 | 6E | n
110 1111 | 157 | 111 | 6F | o
111 0000 | 160 | 112 | 70 | p
111 0001 | 161 | 113 | 71 | q
111 0010 | 162 | 114 | 72 | r
111 0011 | 163 | 115 | 73 | s
111 0100 | 164 | 116 | 74 | t
111 0101 | 165 | 117 | 75 | u
111 0110 | 166 | 118 | 76 | v
111 0111 | 167 | 119 | 77 | w
111 1000 | 170 | 120 | 78 | x
111 1001 | 171 | 121 | 79 | y
111 1010 | 172 | 122 | 7A | z
111 1011 | 173 | 123 | 7B | {
111 1100 | 174 | 124 | 7C | | ('63: ACK, '65: ¬, '67: |)
111 1101 | 175 | 125 | 7D | }
111 1110 | 176 | 126 | 7E | ~ ('63: ESC, '65: |, '67: ~)
Code chart
Use
ASCII itself was first used commercially during 1963 as a seven-bit teleprinter code for American Telephone & Telegraph's TWX (TeletypeWriter eXchange) network. TWX originally used the earlier five-bit ITA2, which was also used by the competing Telex teleprinter system. Bob Bemer introduced features such as the escape sequence. His British colleague Hugh McGregor Ross helped to popularize this work according to Bemer, "so much so that the code that was to become ASCII was first called the Bemer-Ross Code in Europe". (NB. Bemer was employed at IBM at that time.) Because of his extensive work on ASCII, Bemer has been called "the father of ASCII".
On March 11, 1968, U.S. President Lyndon B. Johnson mandated that all computers purchased by the United States federal government support ASCII, stating:
I have also approved recommendations of the Secretary of Commerce regarding standards for recording the Standard Code for Information Interchange on magnetic tapes and paper tapes when they are used in computer operations.
All computers and related equipment configurations brought into the Federal Government inventory on and after July 1, 1969, must have the capability to use the Standard Code for Information Interchange and the formats prescribed by the magnetic tape and paper tape standards when these media are used.
ASCII was the most common character encoding on the World Wide Web until December 2007, when UTF-8 encoding surpassed it; UTF-8 is backward compatible with ASCII.
Variants and derivations
As computer technology spread throughout the world, different standards bodies and corporations developed many variations of ASCII to facilitate the expression of non-English languages that used Roman-based alphabets. One could class some of these variations as "ASCII extensions", although some misuse that term to represent all variants, including those that do not preserve ASCII's character-map in the 7-bit range. Furthermore, the ASCII extensions have also been mislabelled as ASCII.
7-bit codes
From early in its development,"Specific Criteria", attachment to memo from R. W. Reach, "X3-2 Meeting – September 14 and 15", September 18, 1961 ASCII was intended to be just one of several national variants of an international character code standard.
Other international standards bodies have ratified character encodings such as ISO 646 (1967) that are identical or nearly identical to ASCII, with extensions for characters outside the English alphabet and symbols used outside the United States, such as the symbol for the United Kingdom's pound sterling (£). Almost every country needed an adapted version of ASCII, since ASCII suited the needs of only the USA and a few other countries. For example, Canada had its own version that supported French characters.
Many other countries developed variants of ASCII to include non-English letters (e.g. é, ñ, ß, Ł), currency symbols (e.g. £, ¥), etc. See also YUSCII (Yugoslavia).
These national variants would share most characters in common with ASCII but assign other locally useful characters to several code points reserved for "national use". However, the four years that elapsed between the publication of ASCII-1963 and ISO's first acceptance of an international recommendation during 1967 caused ASCII's choices for the national use characters to seem to be de facto standards for the world, causing confusion and incompatibility once other countries did begin to make their own assignments to these code points.
ISO/IEC 646, like ASCII, is a 7-bit character set. It does not make any additional codes available, so the same code points encoded different characters in different countries. Escape codes were defined to indicate which national variant applied to a piece of text, but they were rarely used, so it was often impossible to know what variant to work with and therefore which character a code represented, and in general, text-processing systems could cope with only one variant anyway.
Because the bracket and brace characters of ASCII were assigned to "national use" code points that were used for accented letters in other national variants of ISO/IEC 646, a German, French, or Swedish, etc. programmer using their national variant of ISO/IEC 646, rather than ASCII, had to write, and thus read, something such as
ä aÄiÜ = 'Ön'; ü
instead of
{ a[i] = '\n'; }
C trigraphs were created to solve this problem for ANSI C, although their late introduction and inconsistent implementation in compilers limited their use. Many programmers kept their computers on US-ASCII, so plain-text in Swedish, German etc. (for example, in e-mail or Usenet) contained "{, }" and similar variants in the middle of words, something those programmers got used to. For example, a Swedish programmer mailing another programmer asking if they should go for lunch, could get "N{ jag har sm|rg}sar" as the answer, which should be "Nä jag har smörgåsar" meaning "No I've got sandwiches".
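For illustration, a complete program in the style of the example above can be written with trigraphs, three-character sequences beginning with two question marks that the preprocessor replaces with the national-use characters; note that many modern compilers disable trigraphs by default or warn about them, and recent language revisions have removed them:

/* Written with C trigraphs: ??< ??> ??( ??) and ??/ stand for the braces,
   the brackets and the backslash, so the source text uses only characters
   that are invariant across the ISO 646 national variants. */
#include <stdio.h>

int main(void)
??<
    char a??(4??) = "ok";         /* declares char a[4] */
    printf("%s??/n", a);          /* ??/n is the trigraph spelling of \n */
    return 0;
??>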
8-bit codes
Eventually, as 8-, 16- and 32-bit (and later 64-bit) computers began to replace 18- and 36-bit computers as the norm, it became common to use an 8-bit byte to store each character in memory, providing an opportunity for extended, 8-bit, relatives of ASCII. In most cases these developed as true extensions of ASCII, leaving the original character-mapping intact, but adding additional character definitions after the first 128 (i.e., 7-bit) characters.
Encodings include ISCII (India), VISCII (Vietnam). Although these encodings are sometimes referred to as ASCII, true ASCII is defined strictly only by the ANSI standard.
Most early home computer systems developed their own 8-bit character sets containing line-drawing and game glyphs, and often filled in some or all of the control characters from 0 to 31 with more graphics. Kaypro CP/M computers used the "upper" 128 characters for the Greek alphabet.
The PETSCII code Commodore International used for their 8-bit systems is probably unique among post-1970 codes in being based on ASCII-1963, instead of the more common ASCII-1967, such as found on the ZX Spectrum computer. Atari 8-bit computers and Galaksija computers also used ASCII variants.
The IBM PC defined code page 437, which replaced the control characters with graphic symbols such as smiley faces, and mapped additional graphic characters to the upper 128 positions. Operating systems such as DOS supported these code pages, and manufacturers of IBM PCs supported them in hardware. Digital Equipment Corporation developed the Multinational Character Set (DEC-MCS) for use in the popular VT220 terminal as one of the first extensions designed more for international languages than for block graphics. The Macintosh defined Mac OS Roman, and PostScript also defined a character set; both of these contained international letters and typographic punctuation marks instead of graphics, more like modern character sets.
The ISO/IEC 8859 standard (derived from the DEC-MCS) finally provided a standard that most systems copied (at least as accurately as they copied ASCII, but with many substitutions). A popular further extension designed by Microsoft, Windows-1252 (often mislabeled as ISO-8859-1), added the typographic punctuation marks needed for traditional text printing. ISO-8859-1, Windows-1252, and the original 7-bit ASCII were the most common character encodings until 2008 when UTF-8 became more common.
ISO/IEC 4873 introduced 32 additional control codes defined in the 80–9F hexadecimal range, as part of extending the 7-bit ASCII encoding to become an 8-bit system.
Unicode
Unicode and the ISO/IEC 10646 Universal Character Set (UCS) have a much wider array of characters and their various encoding forms have begun to supplant ISO/IEC 8859 and ASCII rapidly in many environments. While ASCII is limited to 128 characters, Unicode and the UCS support more characters by separating the concepts of unique identification (using natural numbers called code points) and encoding (to 8-, 16- or 32-bit binary formats, called UTF-8, UTF-16 and UTF-32).
ASCII was incorporated into the Unicode (1991) character set as the first 128 symbols, so the 7-bit ASCII characters have the same numeric codes in both sets. This allows UTF-8 to be backward compatible with 7-bit ASCII, as a UTF-8 file containing only ASCII characters is identical to an ASCII file containing the same sequence of characters. Even more importantly, forward compatibility is ensured as software that recognizes only 7-bit ASCII characters as special and does not alter bytes with the highest bit set (as is often done to support 8-bit ASCII extensions such as ISO-8859-1) will preserve UTF-8 data unchanged.
To allow backward compatibility, the 128 ASCII and 256 ISO-8859-1 (Latin 1) characters are assigned Unicode/UCS code points that are the same as their codes in the earlier standards. Therefore, ASCII can be considered a 7-bit encoding scheme for a very small subset of Unicode/UCS, and ASCII (when prefixed with 0 as the eighth bit) is valid UTF-8.
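This property is easy to verify programmatically; the following C sketch (illustrative; is_ascii is an invented name) checks that a buffer contains only seven-bit bytes, which is exactly the condition under which it is also valid, unchanged UTF-8:

#include <stdio.h>
#include <stddef.h>

/* A buffer is 7-bit ASCII exactly when no byte has its high bit set; such
   a buffer is, byte for byte, also valid UTF-8. */
static int is_ascii(const unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        if (buf[i] & 0x80)
            return 0;
    return 1;
}

int main(void)
{
    const unsigned char plain[]    = "plain ASCII text";
    const unsigned char accented[] = "caf\xC3\xA9";      /* UTF-8 bytes for "café" */
    printf("%d %d\n",
           is_ascii(plain, sizeof plain - 1),
           is_ascii(accented, sizeof accented - 1));     /* prints: 1 0 */
    return 0;
}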
See also
3568 ASCII, an asteroid named after the character encoding
Ascii85
ASCII art
ASCII Ribbon Campaign
Basic Latin (Unicode block) (ASCII as a subset of Unicode)
Extended ASCII
HTML decimal character rendering
List of Unicode characters
Jargon File, a glossary of computer programmer slang which includes a list of common slang names for ASCII characters
List of computer character sets
Alt codes
Notes
References
Further reading
External links
Category:Character sets
Category:Latin-alphabet representations
Category:Presentation layer protocols | 586 | 2017-01 |
Ministry of Defence (United Kingdom) | The Ministry of Defence (MoD or MOD) is the British government department responsible for implementing the defence policy set by Her Majesty's Government and is the headquarters of the British Armed Forces.
The MoD states that its principal objectives are to defend the United Kingdom of Great Britain and Northern Ireland and its interests and to strengthen international peace and stability.The Defence Vision, Ministry of Defence website. With the collapse of the Soviet Union and the end of the Cold War, the MoD does not foresee any short-term conventional military threat; rather, it has identified weapons of mass destruction, international terrorism, and failed and failing states as the overriding threats to Britain's interests.Strategic Defence Review 1998 Ministry of Defence, accessed 8 December 2008. The MoD also manages day-to-day running of the armed forces, contingency planning and defence procurement.
History
During the 1920s and 1930s, British civil servants and politicians, looking back at the performance of the state during World War I, concluded that there was a need for greater co-ordination between the three Services that made up the armed forces of the United Kingdom—the British Army, the Royal Navy, and the Royal Air Force. The formation of a united ministry of defence was rejected by David Lloyd George's coalition government in 1921; but the Chiefs of Staff Committee was formed in 1923, for the purposes of inter-Service co-ordination. As rearmament became a concern during the 1930s, Stanley Baldwin created the position of Minister for Coordination of Defence. Lord Chatfield held the post until the fall of Neville Chamberlain's government in 1940; his success was limited by his lack of control over the existing Service departments and his limited political influence.
Winston Churchill, on forming his government in 1940, created the office of Minister of Defence to exercise ministerial control over the Chiefs of Staff Committee and to co-ordinate defence matters. The post was held by the Prime Minister of the day until Clement Attlee's government introduced the Ministry of Defence Act of 1946. The new ministry was headed by a Minister of Defence who possessed a seat in the Cabinet. The three existing service Ministers—the Secretary of State for War, the First Lord of the Admiralty, and the Secretary of State for Air—remained in direct operational control of their respective services, but ceased to attend Cabinet.
From 1946 to 1964 five Departments of State did the work of the modern Ministry of Defence: the Admiralty, the War Office, the Air Ministry, the Ministry of Aviation, and an earlier form of the Ministry of Defence. These departments merged in 1964; the defence functions of the Ministry of Aviation Supply merged into the Ministry of Defence in 1971.
Ministers
The Ministers in the Ministry of Defence are as follows:
Minister | Rank | Portfolio
The Rt Hon. Sir Michael Fallon KCB MP | Secretary of State | Overall responsibility and strategic direction
The Rt Hon. Earl Howe PC | Minister of State | Lords Defence spokesman, commemorations & ceremonies
The Rt Hon. Mike Penning MP | Minister of State | Operations, force generation
Harriett Baldwin MP | Parliamentary Under-Secretary of State | Defence Procurement
Mark Lancaster MP | Parliamentary Under-Secretary of State | Minister for Defence Veterans, Reserves and Personnel
Senior officials
thumb|right|The plaque outside the South Door of the MoD's Main Building.
Permanent Secretaries and other senior officials
The Ministers and Chiefs of the Defence Staff are supported by a number of civilian, scientific and professional military advisors. The Permanent Under-Secretary of State for Defence (generally known as the Permanent Secretary) is the senior civil servant at the MoD. His or her role is to ensure the MoD operates effectively as a department of the government.
Permanent Under-Secretary of State: Stephen Lovegrove—commencing April 2016
Defence Equipment & Support CEO - Tony Douglas — commencing 2016
Chief Scientific Adviser: Professor Vernon C. Gibson—commencing July 2012
Director General Finance: Acting DG Finance is David Williams
Chiefs of the Defence Staff
The current Chief of the Defence Staff, the professional head of the British Armed Forces, is Air Chief Marshal Sir Stuart Peach. He is supported by the Vice Chief of the Defence Staff, by the professional heads of the three services of HM Armed Forces and by the Commander of Joint Forces Command.
Vice-Chief of the Defence Staff: General Sir Gordon Messenger, Royal Marines.
First Sea Lord and Chief of the Naval Staff: Admiral Sir Philip Jones, Royal Navy
Chief of the General Staff: General Sir Nick Carter, late Royal Green Jackets.
Chief of the Air Staff: Air Chief Marshal Sir Stephen Hillier, Royal Air Force
Commander of Joint Forces Command: General Sir Christopher Deverell
There are also three Deputy Chiefs of the Defence Staff with particular remits: Deputy Chief of the Defence Staff (Capability), Deputy CDS (Personnel and Training) and Deputy CDS (Operations). The Surgeon General represents the Defence Medical Services on the Defence Staff, and is the clinical head of that service.
Additionally, there are a number of Assistant Chiefs of Defence Staff, including the Assistant Chief of the Defence Staff (Reserves and Cadets) and the Defence Services Secretary in the Royal Household of the Sovereign of the United Kingdom, who is also the Assistant Chief of Defence Staff (Personnel).
Defence policy
The 1998 Strategic Defence Review and the 2003 Delivering Security in a Changing World White Paper outlined the following posture for the British Armed Forces:
The ability to support three simultaneous small- to medium-scale operations, with at least one as an enduring peace-keeping mission (e.g. Kosovo). These forces must be capable of representing Britain as lead nation in any coalition operations.
The ability, at longer notice, to deploy forces in a large-scale operation while running a concurrent small-scale operation.
The MoD has since been regarded as a leader in elaborating the post-Cold War organising concept of "defence diplomacy". As a result of the Strategic Defence and Security Review 2010, Prime Minister David Cameron signed a 50-year treaty with French President Nicolas Sarkozy that would have the two countries co-operate intensively in military matters. The UK is establishing air and naval bases in the Persian Gulf, located in the UAE and Bahrain. A presence in Oman is also being considered.
The Strategic Defence and Security Review 2015 included £178 billion investment in new equipment and capabilities. The review set a defence policy with four primary missions for the Armed Forces:
Defend and contribute to the security and resilience of the UK and Overseas Territories.
Provide the nuclear deterrent.
Contribute to improved understanding of the world through strategic intelligence and the global defence network.
Reinforce international security and the collective capacity of our allies, partners and multilateral institutions.
The review stated the Armed Forces will also contribute to the government’s response to crises by being prepared to:
Support humanitarian assistance and disaster response, and conduct rescue missions.
Conduct strike operations.
Conduct operations to restore peace and stability.
Conduct major combat operations if required, including under NATO Article 5.
Current threats
Following the end of the Cold War, the threat of direct conventional military confrontation with other states has been replaced by terrorism. Sir Richard Dannatt predicted that British forces would be involved in combating "predatory non-state actors" for the foreseeable future, in what he called an "era of persistent conflict". He told the Chatham House think tank that the fight against al-Qaeda and other militant Islamist groups was "probably the fight of our generation".
Dannatt criticised a remnant "Cold War mentality", with military expenditures based on retaining a capability against a direct conventional strategic threat; he said that currently only 10% of the MoD's equipment programme budget between 2003 and 2018 was to be invested in the "land environment", at a time when Britain was engaged in land-based wars in Afghanistan and Iraq.
The Defence Committee's Third Report "Defence Equipment 2009" cites an article from the Financial Times website ("MoD orders spending clampdown", Financial Times, 16 November 2008, FT.com) stating that the Chief of Defence Materiel, General Sir Kevin O'Donoghue, had instructed staff within Defence Equipment and Support (DE&S) through an internal memorandum to reprioritize the approvals process to focus on supporting current operations over the next three years; deterrence-related programmes; those that reflect defence obligations, whether contractual or international; and those where production contracts were already signed. The report also cites concerns over potential cuts in the defence science and technology research budget; the implications of inappropriate estimation of defence inflation within budgetary processes; underfunding in the Equipment Programme; and a general concern over striking the appropriate balance between a short-term focus on current operations and the long-term consequences, for future combatants and campaigns, of failing to invest in the delivery of future UK defence capabilities. The then Secretary of State for Defence, Bob Ainsworth MP, reinforced this reprioritisation of focus on current operations and did not rule out "major shifts" in defence spending. In the same article the First Sea Lord and Chief of the Naval Staff, Admiral Sir Mark Stanhope, Royal Navy, acknowledged that there was not enough money within the defence budget and that the service was preparing itself for tough decisions and potential cutbacks. According to figures published by the London Evening Standard (Defence cuts 'to leave aircraft carriers without any planes', Robert Fox, 23 June 2009), the defence budget for 2009 was "more than 10% overspent" (figures cannot be verified), and the paper states that this had caused Gordon Brown to say that defence spending must be cut.
The MoD has been investing in IT to cut costs and improve services for its personnel.
Departmental organisation
thumb|180px|A British armed forces careers office in Oxford
Governance:
The Defence Board is 'the main MOD corporate board chaired by the Secretary of State and responsible for top level leadership and management across defence'. Its membership comprises the Secretary of State, the Armed Forces Minister, the Permanent Secretary, the Chief and Vice Chief of the Defence Staff, the Chief of Defence Materiel, Director General Finance and three non-executive board members.
The Defence Council 'provides the formal legal basis for the conduct of defence in the UK through a range of powers vested in it by statute and Letters Patent'. It too is chaired by the Secretary of State, and its members are ministers, the senior officers and senior civilian officials.
Central command organisations:
Headquarters Air Command
Army Headquarters
Navy Command Headquarters
Joint Forces Command
Head Office and Corporate Services (HOCS)
Support organisations:
Defence Business Services (DBS)
Defence Equipment and Support (DE&S)
Defence Infrastructure Organisation (DIO)
Executive agencies:
Defence Electronics and Components Agency (DECA)
Defence Science and Technology Laboratory (Dstl)
UK Hydrographic Office (UKHO)
Dstl and UKHO also have trading fund status.
Non-departmental public bodies:
National Army Museum
National Museum of the Royal Navy
Royal Air Force Museum
In addition, the MoD is responsible for the administration of the Sovereign Base Areas of Akrotiri and Dhekelia in Cyprus.
Property portfolio
right|thumb|The MoD Main Building, Whitehall, London
The Ministry of Defence is one of the United Kingdom's largest landowners, owning 227,300 hectares of land and foreshore (either freehold or leasehold) as of April 2014, valued at "about £20 billion". The MoD also has "rights of access" to a further 222,000 hectares. In total, this amounts to about 1.8% of the UK land mass. The total annual cost of supporting the defence estate is "in excess of £3.3 billion".
The defence estate is divided as training areas & ranges (84.0%), research & development (5.4%), airfields (3.4%), barracks & camps (2.5%), storage & supply depots (1.6%), and other (3.0%). These are largely managed by the Defence Infrastructure Organisation.
The headquarters of the MoD are in Whitehall and are now known as Main Building. This structure is neoclassical in style and was originally built between 1938 and 1959 to designs by Vincent Harris to house the Air Ministry and the Board of Trade. The northern entrance in Horse Guards Avenue is flanked by two monumental statues, Earth and Water, by Charles Wheeler. Opposite stands the Gurkha Monument, sculpted by Philip Jackson and unveiled in 1997 by Queen Elizabeth II. Within it is the Victoria Cross and George Cross Memorial, and nearby are memorials to the Fleet Air Arm and RAF (to its east, facing the riverside). A major refurbishment of the building was completed under a PFI contract by Skanska in 2004.Better Defence Builds Project Case Study
Henry VIII's wine cellar at the Palace of Whitehall, built in 1514–1516 for Cardinal Wolsey, is in the basement of Main Building and is used for entertainment. The entire vaulted brick structure of the cellar was encased in steel and concrete and relocated nine feet to the west and nearly 19 feet deeper in 1949, when construction resumed at the site after World War II. This was carried out without any significant damage to the structure.
Controversies
Fraud
The most notable fraud conviction was that of Gordon Foxley, head of defence procurement at the Ministry of Defence from 1981 to 1984. Police claimed he received at least £3.5m in total in corrupt payments, such as substantial bribes from overseas arms contractors aiming to influence the allocation of contracts.
Germ and chemical warfare tests
A government report covered by the Guardian in 2002 indicates that between 1940 and 1979, the Ministry of Defence "turned large parts of the country into a giant laboratory to conduct a series of secret germ warfare tests on the public" and that many of these tests "involved releasing potentially dangerous chemicals and micro-organisms over vast swaths of the population without the public being told." The Ministry of Defence maintains that these trials were to simulate germ warfare and that the tests were harmless. Nevertheless, families who lived in the areas of many of the tests have reported children with birth defects and physical and mental disabilities, and many are asking for a public inquiry. According to the report, the tests affected an estimated several million people, including one period between 1961 and 1968 when "more than a million people along the south coast of England, from Torquay to the New Forest, were exposed to bacteria including e.coli and bacillus globigii, which mimics anthrax." Two scientists commissioned by the Ministry of Defence stated that these trials posed no risk to the public. This was confirmed by Sue Ellison, a representative of Porton Down, who said that the results from these trials "will save lives, should the country or our forces face an attack by chemical and biological weapons." Asked whether such tests were still being carried out, she said: "It is not our policy to discuss ongoing research." It is unknown whether the harmlessness of the trials was known at the time they were conducted.
Chinook HC3 helicopters
The MoD has been criticised for an ongoing procurement fiasco, having spent £240m on eight Chinook HC3 helicopters that only began to enter service in 2010, years after they were ordered in 1995 and delivered in 2001. A National Audit Office report revealed that the helicopters had been stored in air-conditioned hangars in Britain since their 2001 delivery, while troops in Afghanistan were forced to rely on helicopters flying with safety faults. By the time the Chinooks are airworthy, the total cost of the project could be as much as £500m.
In April 2008, a £90m contract was signed with Boeing for a "quick fix" solution so that the helicopters could fly by 2010: QinetiQ would downgrade the Chinooks, stripping out some of their more advanced equipment.
Volunteer army cuts
In October 2009, the MoD was heavily criticised for withdrawing the £20m budget for non-operational training of the volunteer Territorial Army (TA), ending all non-operational training for six months until April 2010. The government eventually backed down and restored the funding. The TA provides a small percentage of the UK's operational troops. Its members train on weekly evenings and monthly weekends, as well as on two-week exercises, generally annual and occasionally bi-annual for troops doing other courses. The cuts would have meant a significant loss of personnel and would have had adverse effects on recruitment."Cuts force TA to cease training", BBC News, 10 October 2009
Overspending
In 2013 it was found that the Ministry of Defence had overspent on its equipment budget by £6.5bn, on orders that could take up to 39 years to fulfil. The Ministry of Defence has been criticised in the past for poor management and financial control, investing in projects that have taken as long as 10 or even 15 years to be delivered.
See also
Defence Review
The Lancaster House Treaties (2010)
Stabilisation Unit
United Kingdom budget
References
Bibliography
Chester, D. N and Willson, F. M. G. The Organisation of British Central Government 1914–1964: Chapters VI and X (2nd edition). London: George Allen & Unwin, 1968.
External links
www.mod.uk Archived Website
Defence Relationship Management
Defencemanagement.com
Category:1964 establishments in the United Kingdom
Category:Military of the United Kingdom
Category:Military units and formations established in 1964
Category:British landowners
Mandolin
A mandolin (literally "small mandola") is a musical instrument in the lute family and is usually plucked with a plectrum or "pick". It commonly has four courses of doubled metal strings tuned in unison (8 strings), although five (10 strings) and six (12 strings) course versions also exist. The courses are normally tuned in a succession of perfect fifths. It is the soprano member of a family that includes the mandola, octave mandolin, mandocello and mandobass.
There are many styles of mandolin, but three are common, the Neapolitan or round-backed mandolin, the carved-top mandolin and the flat-backed mandolin. The round-back has a deep bottom, constructed of strips of wood, glued together into a bowl. The carved-top or arch-top mandolin has a much shallower, arched back, and an arched top—both carved out of wood. The flat-backed mandolin uses thin sheets of wood for the body, braced on the inside for strength in a similar manner to a guitar. Each style of instrument has its own sound quality and is associated with particular forms of music. Neapolitan mandolins feature prominently in European classical music and traditional music. Carved-top instruments are common in American folk music and bluegrass music. Flat-backed instruments are commonly used in Irish, British and Brazilian folk music. Some modern Brazilian instruments feature an extra fifth course tuned a fifth lower than the standard fourth course.
Other mandolin varieties differ primarily in the number of strings and include four-string models (tuned in fifths) such as the Brescian and Cremonese, six-string types (tuned in fourths) such as the Milanese, Lombard and the Sicilian and 6 course instruments of 12 strings (two strings per course) such as the Genoese. There has also been a twelve-string (three strings per course) type and an instrument with sixteen-strings (four strings per course).
Much of mandolin development revolved around the soundboard (the top). Pre-mandolin instruments were quiet, strung with as many as six courses of gut strings, and were plucked with the fingers or with a quill. Modern instruments, however, are louder, using four courses of metal strings, which exert more pressure than the gut strings. The modern soundboard is designed to withstand the pressure of metal strings that would break earlier instruments. The soundboard comes in many shapes, but is generally round or teardrop-shaped, sometimes with scrolls or other projections. There are usually one or more sound holes in the soundboard, either round, oval, or shaped like a calligraphic f (f-hole). A round or oval sound hole may be covered or bordered with decorative rosettes or purfling.Musical Instruments: A Comprehensive Dictionary, by Sibyl Marcuse (Corrected Edition 1975)The New Grove Dictionary of Music and Musicians, Second Edition, edited by Stanley Sadie and others (2001)
History
thumb|left|In 1787 Luigi Bassi played the role of Don Giovanni in Mozart's opera, serenading a woman with a mandolin. This used to be the common picture of the mandolin, an obscure instrument of romance in the hands of a Spanish nobleman.
Mandolins evolved from the lute family in Italy during the 17th and 18th centuries, and the deep bowled mandolin, produced particularly in Naples, became common in the 19th century.
Early precursors
Dating to c. 13,000 BC, a cave painting in the Trois Frères cave in France depicts what some believe is a musical bow, a hunting bow used as a single-stringed musical instrument. From the musical bow, families of stringed instruments developed; since each string played a single note, adding strings added new notes, creating bow harps, harps and lyres. In turn, this led to the ability to play dyads and chords. Another innovation occurred when the bow harp was straightened out and a bridge was used to lift the strings off the stick-neck, creating the lute.
This progression from musical bow to harp bow is a theory that has been contested. In 1965 Franz Jahnel wrote his criticism, stating that the early ancestors of plucked instruments are not currently known. He felt that the harp bow was a far cry from the sophistication of the 4th-century B.C. civilization that took the primitive technology and created "technically and artistically well made harps, lyres, citharas and lutes."
First lutes
Musicologists have put forth examples of that 4th-century B.C. technology, looking at engraved images that have survived. The earliest image showing a lute-like instrument came from Mesopotamia prior to 3000 BC. A cylinder seal from c. 3100 BC or earlier (now in the possession of the British Museum) shows what is thought to be a woman playing a stick lute.http://www.britishmuseum.org/research/collection_online/collection_object_details.aspx?objectId=1447477&partId=1&people=24615&peoA=24615-3-17&page=1 British Museum, Cylinder Seal, Culture/period Uruk, Date 3100BC (circa1), Museum number 41632. From the surviving images, theorists have categorized the Mesopotamian lutes, showing that they developed into a long variety and a short one. The line of long lutes may have developed into the tamburs and pandura. The line of short lutes was further developed to the east of Mesopotamia, in Bactria and Gandhara, into a short, almond-shaped lute.
thumb|Musician playing a 4th-to-5th-century lute, excavated in Gandhara, part of a Los Angeles County Art Museum collection of Five Celestial Musicians.
Persian barbat, Arab oud
Andalusia
Bactria and Gandhara became part of the Sasanian Empire (224–651 AD). Under the Sasanians, a short almond shaped lute from Bactria came to be called the barbat or barbud, which was developed into the later Islamic world's oud or ud. When the Moors conquered Andalusia in 711 AD, they brought their ud along, into a country that had already known a lute tradition under the Romans, the pandura.
During the 8th and 9th centuries, many musicians and artists from across the Islamic world flocked to Iberia. Among them was Abu l-Hasan ‘Ali Ibn Nafi‘ (789–857), a prominent musician who had trained under Ishaq al-Mawsili (d. 850) in Baghdad and was exiled to Andalusia before 833 AD. He taught and has been credited with adding a fifth string to his oud and with establishing one of the first schools of music in Córdoba.
By the 11th century, Muslim Iberia had become a center for the manufacture of instruments. These goods spread gradually to Provence, influencing French troubadours and trouvères and eventually reaching the rest of Europe.
From Sicily to Germany
Beside the introduction of the lute to Spain (Andalusia) by the Moors, another important point of transfer of the lute from Arabian to European culture was Sicily, where it was brought either by Byzantine or later by Muslim musicians.Colin Lawson and Robin Stowell, The Cambridge History of Musical Performance, Cambridge University Press, Feb 16, 2012 There were singer-lutenists at the court in Palermo following the Norman conquest of the island from the Muslims, and the lute is depicted extensively in the ceiling paintings of Palermo's royal Cappella Palatina, dedicated by the Norman King Roger II of Sicily in 1140. His Hohenstaufen grandson Frederick II, Holy Roman Emperor (1194–1250), continued integrating Muslims into his court, including Moorish musicians.Roger Boase, The Origin and Meaning of Courtly Love: A Critical Study of European Scholarship, Manchester University Press, 1977, p. 70-71. By the 14th century, lutes had disseminated throughout Italy and, probably because of the cultural influence of the Hohenstaufen kings and emperor, based in Palermo, the lute had also made significant inroads into the German-speaking lands.
European lute beginnings
A distinct European tradition of lute development is noticeable in pictures and sculpture from the 13th century onward. As early as the beginning of the 14th century, strings were doubled into courses on the miniature lute or gittern, used throughout Europe. The small zigzag-shaped soundhole became a round soundhole covered with a decoration. The mandore appeared in the late 16th century; although known here under a French name, it was used elsewhere, as indicated by its names in other European languages (German mandoer, Spanish vandola, and Italian mandola).
Development in Italy, birth of Neapolitan mandolin
The mandore was not a final form, and the design was tinkered with wherever it was built. The Italians redesigned it and produced the mandolino or Baroque mandolin, a small catgut-strung mandola, strung in 4, 5 or 6 courses tuned in fourths: e′–a′–d″–g″, b–e′–a′–d″–g″ or g–b–e′–a′–d″–g″, and played finger-style.
Vinaccia
First metal-string mandolins
The first evidence of modern metal-string mandolins is from literature regarding popular Italian players who travelled through Europe teaching and giving concerts. Notable are Signor Gabriele Leone, Giovanni Battista Gervasio, Pietro Denis, who travelled widely between 1750 and 1810. This, with the records gleaned from the Italian Vinaccia family of luthiers in Naples, Italy, led some musicologists to believe that the modern steel-string mandolins were developed in Naples by the Vinaccia family.
There is currently confusion as to the name of the eldest Vinaccia luthier who first ran the shop. His name has been put forth as Gennaro Vinaccia (active c. 1710 to c. 1788) and Nic. Vinaccia. His son Antonio Vinaccia was active c. 1734 to c. 1796. An early extant example of a mandolin is one built by Antonio Vinaccia in 1759, which resides at the University of Edinburgh. Another, built by Giuseppe Vinaccia in 1893, is also at the University of Edinburgh. The earliest extant mandolin was built in 1744 by Antonio's son, Gaetano Vinaccia. It resides in the Conservatoire Royal de Musique in Brussels, Belgium.
Family mandolin modified to create Neapolitan mandolin
Gaetano's son, Pasquale Vinaccia (1806–1885), modernized the mandolin, adding features, creating the Neapolitan mandolin c. 1835. Pasquale remodeled, raised and extended the fingerboard to 17 frets, introduced stronger wire strings made of high-tension steel and substituted a machine head for the friction tuning pegs, then standard. The new wire strings required that he strengthen the mandolin's body, and he deepened the mandolin's bowl, giving the tonal quality more resonance.
Calace, Embergher and others
Other luthiers who built mandolins included Raffaele Calace (1863 onwards) in Naples, Luigi EmbergherThe Embergher mandolin, Ralf Leenen and Barry Pratt, 2004. ISBN 9073838312, 9789073838314 (1856–1943) in Rome and Arpino, the Ferrari family (1716 onwards, also originally mandolino makers) in Rome, and De Santi (1834–1916) in Rome. The Neapolitan style of mandolin construction was adopted and developed by others, notably in Rome, giving two distinct but similar types of mandolin – Neapolitan and Roman.
Rising and falling fortunes
First wave
The transition from the mandolino to the mandolin began around 1744 with the designing of the metal-string mandolin by the Vinaccia family, with three brass strings and one of gut, using friction tuning pegs on a fingerboard that sat "flush" with the sound table. The mandolin grew in popularity over the next 60 years, in the streets, where it was used by young men courting and by street musicians, and in the concert hall. After the Napoleonic Wars of 1815, however, its popularity began to fall. The 19th century produced some prominent players, including Bartolomeo Bortolazzi of Venice and Pietro Vimercati. However, professional virtuosity was in decline, and the mandolin music changed as the mandolin became a folk instrument; "the large repertoire of notated instrumental music for the mandolino and the mandoline was completely forgotten". The export market for mandolins from Italy dried up around 1815, and when Carmine de Laurentiis wrote a mandolin method in 1874, the Music World magazine wrote that the mandolin was "out of date." Salvador Léonardi mentioned this decline in his 1921 book, Méthode pour Banjoline ou Mandoline-Banjo, saying that the mandolin had been declining in popularity from previous times.Salvador Léonardi, Méthode pour Banjoline ou Mandoline-Banjo, Paris, 1921
It was during this slump in popularity (specifically in 1835) that Pasquale Vinaccia made his modifications to the instrument that his family made for generations, creating the Neapolitan mandolin. The mandolin was largely forgotten outside of Italy by that point, but the stage was set for it to become known again, starting with the Paris Exposition in 1878.
Second wave, the Golden Age of mandolins
Beginning with the Paris Exposition of 1878, the instrument's popularity began to rebound. The Exposition was one of many stops for the Estudiantes Españoles (Spanish Students). There has been confusion regarding this group.
The original Estudiantes Españoles or Estudiantina Española was a group of 64 students formed by 26 February 1878, principally from Madrid colleges. They dressed in historical clothing, representing the ancient sophists of Salamanca and Alcala, and travelled to Paris for Carnival, staying from March 2 through March 15. This early group of students played flutes, guitars, violins, bandurrias and tambourines, and was led by Ildefonso de Zabaleta (president) and Joaquin de Castañeda (vice president). The group performed before large audiences in Paris, with reports of 10,000 and even 56,000 people turning up for a night's entertainment.
Their success in Paris preceded that of a second group of Spanish performers, known as the Estudiantina Figaro or Estudiantina Española Figaroa (Figaro Band of Spanish Students). This group was founded by Dionisio Granados and toured Europe dancing and playing guitars, violins and the bandurria, which became confused with the mandolin.
Along with their energy and the newfound awareness of the instrument created by the day's hit sensation, a wave of Italian mandolinists travelled Europe in the 1880s and 1890s and in the United States by the mid-1880s, playing and teaching their instrument. The instrument's popularity continued to increase during the 1890s and mandolin popularity was at its height in the "early years of the 20th century." Thousands were taking up the instrument as a pastime, and it became an instrument of society, taken up by young men and women. Mandolin orchestras were formed worldwide, incorporating not only the mandolin family of instruments, but also guitars, double basses and zithers.
thumb|right|250px|The Mandolin "Estudiantina" of Mayenne, France around 1900 when Mandolin orchestras were at the height of their popularity.
That era (from the late 19th century into the early 20th century) has come to be known as the "Golden Age" of the mandolin.The Classical Mandolin Society, Classical Mandolin - A (Very) Brief Overview The term is used online by mandolin enthusiasts to name the time period when the mandolin had become popular, when mandolin orchestras were being organized worldwide, and new and high-quality instruments were increasingly common.
After the First World War, the instrument's popularity again fell, though gradually. Reasons cited include the rise of Jazz, for which the instrument was too quiet. Also, modern conveniences (phonograph records, bicycles and automobiles, outdoor sports) competed with learning to play an instrument for fun.
Aftermath
The second decline was not as complete as the first. Thousands of people had learned to play the instrument. Even as the second wave of mandolin popularity declined in the early 20th century, new versions of the mandolin began to be used in new forms of music.Ian Pommerenke, The Mandolin in the early to mid 19th Century, Lanarkshire Guitar and Mandolin Association Newsletter, Spring 2007. Luthiers created the resonator mandolin, the flatback mandolin, the carved-top or arched-top mandolin, the mandolin-banjo and the electric mandolin. Musicians began playing it in Celtic, Bluegrass, Jazz and Rock-n-Roll styles — and Classical too.
Construction
thumb|left|150px|Schematic drawing of a bowlback mandolin
Mandolins have a body that acts as a resonator, attached to a neck. The resonating body may be shaped as a bowl (necked bowl lutes) or a box (necked box lutes). Traditional Italian mandolins, such as the Neapolitan mandolin, meet the necked bowl description. The necked box instruments include the carved top mandolins and the flatback mandolins.
Strings run between mechanical tuning machines at the top of the neck to a tailpiece that anchors the other end of the strings. The strings are suspended over the neck and soundboard and pass over a floating bridge. The bridge is kept in contact with the soundboard by the downward pressure from the strings. The neck is either flat or has a slight radius, and is covered with a fingerboard with frets. The action of the strings on the bridge causes the soundboard to vibrate, producing sound.
Like any plucked instrument, mandolin notes decay to silence rather than sound out continuously as with a bowed note on a violin, and mandolin notes decay faster than larger stringed instruments like the guitar. This encourages the use of tremolo (rapid picking of one or more pairs of strings) to create sustained notes or chords. The mandolin's paired strings facilitate this technique: the plectrum (pick) strikes each of a pair of strings alternately, providing a more full and continuous sound than a single string would.
Various design variations and amplification techniques have been used to make mandolins comparable in volume with louder instruments and orchestras, including the creation of the mandolin-banjo hybrid with the louder banjo, the addition of metal resonators (most notably by Dobro and the National String Instrument Corporation) to make a resonator mandolin, and the amplification of electric mandolins through amplifiers.
Tuning
A variety of different tunings are used. Usually, courses of 2 adjacent strings are tuned in unison. By far the most common tuning is the same as violin tuning, in scientific pitch notation G3–D4–A4–E5, or in Helmholtz pitch notation: g–d′–a′–e″.
fourth (lowest tone) course: G3 (196.0 Hz)
third course: D4 (293.7 Hz)
second course: A4 (440.0 Hz; A above middle C)
first (highest tone) course: E5 (659.3 Hz)
Note that the numbers of Hz shown above assume a 440 Hz A, standard in most parts of the western world. Some players use an A tuned up to 10 Hz above or below 440 Hz, mainly outside of the United States.
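These frequencies follow from twelve-tone equal temperament: each open course sits a fixed number of semitones from the reference A4, and its frequency is the reference frequency multiplied by 2 raised to that offset divided by 12. The following is a minimal sketch of that arithmetic for the standard G3–D4–A4–E5 tuning described above; the function and constant names are illustrative and not drawn from any particular library.

```python
# Minimal sketch: equal-temperament frequencies of the mandolin's open courses,
# derived from a chosen reference pitch for A4. Offsets are semitones from A4.
A4_SEMITONE_OFFSETS = {"G3": -14, "D4": -7, "A4": 0, "E5": 7}

def course_frequency(course: str, a4_hz: float = 440.0) -> float:
    """Frequency in Hz of an open course, given the tuning reference for A4."""
    return a4_hz * 2 ** (A4_SEMITONE_OFFSETS[course] / 12)

if __name__ == "__main__":
    # Prints roughly 196.00, 293.66, 440.00 and 659.26 Hz at standard pitch.
    for course in ("G3", "D4", "A4", "E5"):
        print(f"{course}: {course_frequency(course):7.2f} Hz")
    # A player tuning to a reference 10 Hz below standard, as mentioned above:
    print(f"E5 with A4 = 430 Hz: {course_frequency('E5', 430.0):7.2f} Hz")
```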
thumb|Mandolin fretboard.
Other tunings exist, including cross-tunings, in which the usually doubled string runs are tuned to different pitches. Additionally, guitarists may sometimes tune a mandolin to mimic a portion of the intervals on a standard guitar tuning to achieve familiar fretting patterns.
Mandolin family
thumb|right|300px|From top left, clockwise: 1920 Gibson F-4 mandolin, 1917 Gibson H-2 mandola, 1929 Gibson mando-bass, and 1924 Gibson K-4 mandocello from Gregg Miner's collection.
thumb|left|58px|Piccolo mandolin.
Soprano
The mandolin is the soprano member of the mandolin family, as the violin is the soprano member of the violin family. Like the violin, its scale length is typically about . Modern American mandolins modelled after Gibsons have a longer scale, about . The strings in each of its double-strung courses are tuned in unison, and the courses use the same tuning as the violin: G3–D4–A4–E5.
The piccolo or sopranino mandolin is a rare member of the family, tuned one octave above the mandola and one fourth above the mandolin (C4–G4–D5–A5); the same relation as that of the piccolo or sopranino violin to the violin and viola. One model was manufactured by the Lyon & Healy company under the Leland brand. A handful of contemporary luthiers build piccolo mandolins. Its scale length is typically about .
thumb|right|50px|Mandola.
Alto
The mandola (US and Canada), termed the tenor mandola in Britain and Ireland and liola or alto mandolin in continental Europe, is tuned a fifth below the mandolin, in the same relationship as that of the viola to the violin. Some also call this instrument the "alto mandola." Its scale length is typically about . It is normally tuned like a viola (a fifth below the mandolin): C3–G3–D4–A4.
Tenor
The octave mandolin (US and Canada), termed the octave mandola in Britain and Ireland and mandola in continental Europe, is tuned an octave below the mandolin: G2–D3–A3–E4. Its relationship to the mandolin is that of the tenor violin to the violin. Octave mandolin scale length is typically about , although instruments with scales as short as or as long as are not unknown.
The Irish bouzouki is also considered a member of the mandolin family; although derived from the Greek bouzouki (a long-necked lute), it is constructed like a flat-backed mandolin and uses fifth-based tunings, most often G2–D3–A3–E4 (an octave below the mandolin)—in which case it essentially functions as an octave mandolin. Common alternate tunings include: G2–D3–A3–D4, A2–D3–A3–D4 or A2–D3–A3–E4. Although the Irish bouzouki's bass course pairs are most often tuned in unison, on some instruments one of each pair is replaced with a lighter string and tuned in octaves, in the fashion of the 12-string guitar. While occupying the same range as the octave mandolin/octave mandola, the Irish bouzouki is theoretically distinguished from the former instrument by its longer scale length, typically from , although scales as long as , which is the usual Greek bouzouki scale, are not unknown. In modern usage, however, the terms "octave mandolin" and "Irish bouzouki" are often used interchangeably to refer to the same instrument.
thumb|right|135px|A waldzither
The modern cittern may also be loosely included in an "extended" mandolin family. It resembles the flatback mandolins, but predates them, dating back to the Renaissance. It is typically a five course (ten string) instrument having a scale length between . The instrument is most often tuned to either D2–G2–D3–A3–D4 or G2–D3–A3–D4–A4, and is essentially an octave mandola with a fifth course at either the top or the bottom of its range. Some luthiers, such as Stefan Sobell, also refer to the octave mandola or a shorter-scaled Irish bouzouki as a cittern, irrespective of whether it has four or five courses.
Relatives of the cittern, which might also be loosely linked to the mandolins (and are sometimes tuned and played as such), include the 6-course/12-string Portuguese guitar and the 5-course/9-string waldzither.
Baritone/Bass
thumb|110px|right|A mandolone in the hands of Giuseppe Branzoli.
The mandolone was a Baroque member of the mandolin family in the bass range that was surpassed by the mandocello. It was built as part of the Neapolitan mandolin family.
thumb|left|160px|Neapolitan styled mandocello built to scale.
The mandocello, which is classically tuned to an octave plus a fifth below the mandolin, in the same relationship as that of the cello to the violin: C2–G2–D3–A3. Its scale length is typically about . A typical violoncello scale is .
thumb|right|130px|19th- and 20th-century laouta
The Greek laouto or laghouto (long-necked lute) is similar to a mandocello, ordinarily tuned C3/C2–G3/G2–D3/D3–A3/A3 with half of each pair of the lower two courses being tuned an octave high on a lighter gauge string. The body is a staved bowl, the saddle-less bridge glued to the flat face like most ouds and lutes, with mechanical tuners, steel strings, and tied gut frets. Modern laoutos, as played on Crete, have the entire lower course tuned to C3, a reentrant octave above the expected low C. Its scale length is typically about .
Contrabass
thumb|180px|left|Gibson mando-bass from 1922 advertisement.
The mando-bass most frequently has 4 single strings, rather than double courses, and is tuned in fourths like a double bass or an acoustic bass guitar: E1–A1–D2–G2. These were made by the Gibson company in the early 20th century, but appear to have never been very common. A smaller scale four-string mandobass, usually tuned in fifths: G1–D2–A2–E3 (two octaves below the mandolin), though not as resonant as the larger instrument, was often preferred by players as easier to handle and more portable. Ruppa, Paul; American Mando-Bass History 101 Reportedly, however, most mandolin orchestras preferred to use the ordinary double bass, rather than a specialised mandolin family instrument. Calace and other Italian makers predating Gibson also made mandolin-basses.
The relatively rare eight-string mandobass, or tremolo-bass also exists, with double courses like the rest of the mandolin family, and is tuned either G1–D2–A2–E3, two octaves lower than the mandolin, or C1–G1–D2–A2, two octaves below the mandola.Marcuse, Sibyl; Musical Instruments: A Comprehensive Dictionary; W. W. Norton & Company (1975). (see entries for mandolin, and for individual mandolin family members.)Johnson, J. R.; The Mandolin Orchestra in America, Part 3: Other Instruments, American Lutherie, No. 21 (Spring) 1990, pp.45–46.
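Because the fifths-tuned members of the family all keep the mandolin's string layout, each standard tuning described above can be derived by transposing G3–D4–A4–E5 by a fixed interval (the large four-string mando-bass, tuned in fourths, is the exception). The sketch below illustrates that relationship; the note-name helpers and the offset table are assumptions made for this example, not part of any established mandolin software.

```python
# Minimal sketch: deriving the mandolin family's standard tunings by transposing
# the mandolin's G3-D4-A4-E5 by a fixed number of semitones, per the text above.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def to_number(note: str) -> int:
    """Convert a name like 'G3' to a MIDI-style note number (C4 = 60)."""
    return NOTE_NAMES.index(note[:-1]) + 12 * (int(note[-1]) + 1)

def to_name(number: int) -> str:
    """Convert a MIDI-style note number back to a name like 'G3'."""
    return NOTE_NAMES[number % 12] + str(number // 12 - 1)

MANDOLIN = ["G3", "D4", "A4", "E5"]

# Offset of each family member from the mandolin, in semitones.
FAMILY_OFFSETS = {
    "piccolo mandolin": +5,                  # a fourth above
    "mandolin": 0,
    "mandola": -7,                           # a fifth below
    "octave mandolin": -12,                  # an octave below
    "mandocello": -19,                       # an octave plus a fifth below
    "mando-bass (small, fifths-tuned)": -24  # two octaves below
}

if __name__ == "__main__":
    for member, offset in FAMILY_OFFSETS.items():
        tuning = "-".join(to_name(to_number(n) + offset) for n in MANDOLIN)
        print(f"{member:34s} {tuning}")  # e.g. mandocello -> C2-G2-D3-A3
```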
Variations
Bowlback
thumb|right|Modern Neapolitan bowlback mandolin manufactured by the Calace family workshop.
Bowlback mandolins (also known as roundbacks), are used worldwide. They are most commonly manufactured in Europe, where the long history of mandolin development has created local styles. However, Japanese luthiers also make them.
Neapolitan and Roman styles
The Neapolitan style has an almond-shaped body resembling a bowl, constructed from curved strips of wood. It usually has a bent sound table, canted in two planes, a design intended to take the tension of the eight metal strings arranged in four courses. A hardwood fingerboard sits on top of or is flush with the sound table. Very old instruments may use wooden tuning pegs, while newer instruments tend to use geared metal tuners. The bridge is a movable length of hardwood. A pickguard is glued below the sound hole under the strings. European roundbacks commonly use a slightly shorter scale than the one common on archtop mandolins.
Intertwined with the Neapolitan style is the Roman style mandolin, which has influenced it. The Roman mandolin had a fingerboard that was more curved and narrow. The fingerboard was lengthened over the sound hole for the E strings, the high pitched strings. The shape of the back of the neck was different, less rounded with an edge, the bridge was curved making the G strings higher. The Roman mandolin had mechanical tuning gears before the Neapolitan.
Prominent Italian manufacturers include Vinaccia (Naples), Embergher (Rome) and Calace (Naples). Other modern manufacturers include Lorenzo Lippi (Milan), Hendrik van den Broek (Netherlands), Brian Dean (Canada), Salvatore Masiello and Michele Caiazza (La Bottega del Mandolino) and Ferrara, Gabriele Pandini.
Lombardic styles, Milanese and Brescian
thumb|right|Mandolin with six courses of strings (2 strings per course), labelled Milanese at the museum.
Another family of bowlback mandolins came from Milan and Lombardy. These mandolins are closer to the mandolino or mandore than other modern mandolins. They are shorter and wider than the standard Neapolitan mandolin, with a shallow back. The instruments have 6 strings, 3 wire treble-strings and 3 gut or wire-wrapped-silk bass-strings. The strings ran between the tuning pegs and a bridge that was glued to the soundboard, as on a guitar. The Lombardic mandolins were tuned g–b–e′–a′–d″–g″ (shown in Helmholtz pitch notation). A developer of the Milanese style was Antonio Monzino (Milan), whose family made them for six generations.
Samuel Adelstein described the Lombardi mandolin in 1893 as wider and shorter than the Neapolitan mandolin, with a shallower back and a shorter and wider neck, with six single strings to the regular mandolin's set of 4. The Lombardi was tuned C–D–A–E–B–G. The strings were fastened to the bridge like a guitar's. There were 20 frets, covering three octaves, with an additional 5 notes. When Adelstein wrote, there were no nylon strings, and the gut and single strings "do not vibrate so clearly and sweetly as the double steel string of the Neapolitan."
Brescian mandolin
Brescian mandolins that have survived in museums have four gut strings instead of six. The mandolin was tuned in fifths, like the Neapolitan mandolin.
Cremonese mandolin
In his 1805 mandolin method, Anweisung die Mandoline von selbst zu erlernen nebst einigen Uebungsstucken von Bortolazzi, Bartolomeo Bortolazzi popularised the Cremonese mandolin, which had four single-strings and a fixed bridge, to which the strings were attached. Bortolazzi said in this book that the new wire strung mandolins were uncomfortable to play, when compared with the gut-string instruments. Also, he felt they had a "less pleasing...hard, zither-like tone" as compared to the gut string's "softer, full-singing tone."
He favored the four single strings of the Cremonese instrument, which were tuned the same as the Neapolitan.
Manufacturers outside of Italy
In the United States, when the bowlback was being made in numbers, Lyon and Healy was a major manufacturer, especially under the "Washburn" brand. Other American manufacturers include Martin, Vega, and Larson Brothers.
In Canada, Brian Dean has manufactured instruments in Neapolitan, Roman, German and American stylesLabraid Mandolins but is also known for his original 'Grand Concert' design created for American virtuoso Joseph Brent.Labraid Mandolins. Grand Concert - Brian Dean.
German manufacturers include Albert & Mueller, Dietrich, Klaus Knorr, Reinhold Seiffert and Alfred Woll. The German bowlbacks use a style developed by Seiffert, with a larger and rounder body.
Japanese brands include Kunishima and Suzuki. Other Japanese manufacturers include Oona, Kawada, Noguchi, Toichiro Ishikawa, Rokutaro Nakade, Otiai Tadao, Yoshihiko Takusari, Nokuti Makoto, Watanabe, Kanou Kadama and Ochiai.
Archtop
At the very end of the 19th century, a new style, with a carved top and back construction inspired by violin family instruments began to supplant the European-style bowl-back instruments in the United States. This new style is credited to mandolins designed and built by Orville Gibson, a Kalamazoo, Michigan luthier who founded the "Gibson Mandolin-Guitar Manufacturing Co., Limited" in 1902. Gibson mandolins evolved into two basic styles: the Florentine or F-style, which has a decorative scroll near the neck, two points on the lower body and usually a scroll carved into the headstock; and the A-style, which is pear shaped, has no points and usually has a simpler headstock.
These styles generally have either two f-shaped soundholes like a violin (F-5 and A-5), or an oval sound hole (F-4 and A-4 and lower models) directly under the strings. Much variation exists between makers working from these archetypes, and other variants have become increasingly common. Generally, in the United States, Gibson F-hole F-5 mandolins and mandolins influenced by that design are strongly associated with bluegrass, while the A-style is associated with other types of music, although it too is most often used for and associated with bluegrass. The F-5's more complicated woodwork also translates into a more expensive instrument.
Internal bracing to support the top in the F-style mandolins is usually achieved with parallel tone bars, similar to the bass bar on a violin. Some makers instead employ "X-bracing," which is two tone bars mortised together to form an X. Some luthiers now use a "modified X-bracing" that incorporates both a tone bar and X-bracing.
Numerous modern mandolin makers build instruments that largely replicate the Gibson F-5 Artist models built in the early 1920s under the supervision of Gibson acoustician Lloyd Loar. Original Loar-signed instruments are sought after and extremely valuable. Other makers from the Loar period and earlier include Lyon and Healy, Vega and Larson Brothers.
Flatback
Flatback mandolins use a thin sheet of wood with bracing for the back, as a guitar uses, rather than the bowl of the bowlback or the arched back of the carved mandolins.
Like the bowlback, the flatback has a round sound hole. This has been sometimes modified to an elongated hole, called a D-hole. The body has a rounded almond shape with flat or sometimes canted soundboard.
The type was developed in Europe in the 1850s. The French and Germans called it a Portuguese mandolin, although they also developed it locally. The Germans used it in Wandervogel.
The bandolim is commonly used wherever the Spanish and Portuguese took it: in South America, in Brazil (Choro) and in the Philippines.
In the early 1970s English luthier Stefan Sobell developed a large-bodied, flat-backed mandolin with a carved soundboard, based on his own cittern design; this is often called a 'Celtic' mandolin.Stefan Sobell Guitars
American forms include the Army-Navy mandolin, the flatiron and the pancake mandolins.
Double top
A variation of the flatback has a double top that encloses a resonating chamber, sound holes on the side, and a convex back. It is made by one manufacturer in Israel, luthier Arik Kerman. Players include Avi Avital,Artist To Artist: 10 Minutes With Avi Avital. The Bluegrass Special, January 2011 by Joe Brent. Alon Sariel,Instrumentarium - Alon Sariel. Jacob Reuven, and Tom Cohen.
Tone
The tone of the flatback is described as warm or mellow, suitable for folk music and smaller audiences. The instrument sound does not punch through the other players' sound like a carved top does.
Others
Mandolinetto
Other American-made variants include the mandolinetto or Howe-Orme guitar-shaped mandolin (manufactured by the Elias Howe Company between 1897 and roughly 1920), which featured a cylindrical bulge along the top from fingerboard end to tailpiece and the Vega mando-lute (more commonly called a cylinder-back mandolin manufactured by the Vega Company between 1913 and roughly 1927), which had a similar longitudinal bulge but on the back rather than the front of the instrument.
Banjolin or mandolin-banjo
The mandolin was given a banjo body in an 1882 patent by Benjamin Bradbury of Brooklyn and given the name banjolin by John Farris in 1885.The Irish Tenor Banjo by Don Meade Today banjolin describes an instrument with four strings, while the version with the four courses of double strings is called a mandolin-banjo.
Resonator mandolin
A resonator mandolin or "resophonic mandolin" is a mandolin whose sound is produced by one or more metal cones (resonators) instead of the customary wooden soundboard (mandolin top/face). Historic brands include Dobro and National.
Electric mandolin
As with almost every other contemporary string instrument, another modern variant is the electric mandolin. These mandolins can have four or five individual or double courses of strings.
They have been around since the late 1920s or early 1930s depending on the brand. They come in solid body and acoustic electric forms.
Instruments have been designed that overcome the mandolin's lack of sustain with its plucked notes. Fender released a model in 1992 with an additional string (a high a, above the e string), a tremolo bridge and extra humbucker pickup (total of two). The result was an instrument capable of playing heavy metal style guitar riffs or violin-like passages with sustained notes that can be adjusted as with an electric guitar.
Playing traditions in Italy and worldwide
150px|right|thumb|Italian mandolin virtuoso and child prodigy Giuseppe Pettine (here pictured in 1898) brought the Italian playing style to America where he settled in Providence, Rhode Island, as a mandolin teacher and composer. Pettine is credited with promoting a style where "one player plays both the rhythmic chords and the lyric melodic line at once, combining single strokes and tremolo."
The international repertoire of music for mandolin is almost unlimited, and musicians use it to play various types of music. This is especially true of violin music, since the mandolin has the same tuning as the violin. Following its invention and early development in Italy the mandolin spread throughout the European continent. The instrument was primarily used in a classical tradition with Mandolin orchestras, so called Estudiantinas or in Germany Zupforchestern appearing in many cities. Following this continental popularity of the mandolin family local traditions appeared outside of Europe in the Americas and in Japan. Travelling mandolin virtuosi like Giuseppe Pettine, Raffaele Calace and Silvio Ranieri contributed to the mandolin becoming a "fad" instrument in the early 20th century. This "mandolin craze" was fading by the 1930s, but just as this practice was falling into disuse, the mandolin found a new niche in American country, old-time music, bluegrass and folk music. More recently, the Baroque and Classical mandolin repertory and styles have benefited from the raised awareness of and interest in Early music, with media attention to classical players such as Israeli Avi Avital, Italian Carlo Aonzo and American Joseph Brent.
thumb|150px|Musician playing a mandolin.
Australia
See Australian mandolinists
The earliest references to the mandolin in Australia come from Phil Skinner (1903–1991). In his article "Recollections" he mentions a Walter Stent, who was “active in the early part of the century and organised possibly the first Mandolin Orchestra in Sydney.”
Phil Skinner played a key role in 20th century development of the mandolin movement in Australia, and was awarded an MBE in 1979 for services to music and the community. He was born Harry Skinner in Sydney in 1903 and started learning music at age 10 when his uncle tutored him on the banjo. Skinner began teaching part-time at age 18, until the Great Depression forced him to begin teaching full-time and learn a broader range of instruments. Skinner founded the Sydney Mandolin Orchestra, the oldest surviving mandolin orchestra in Australia.
The Sydney Mandolins (Artistic Director: Adrian Hooper) have contributed greatly to the repertoire by commissioning over 200 works by Australian and international composers. Most of these works have been released on compact disc and can regularly be heard on radio stations on the ABC and MBS networks. One of their members, mandolin virtuoso Paul Hooper, has had a number of concertos written for him by composers such as Eric Gross. He has performed and recorded these works with the Sydney Symphony Orchestra and the Tasmanian Symphony Orchestra. Many solo works have also been dedicated to him by Australian composers such as Caroline Szeto, Ian Shanahan, Larry Sitsky and Michael Smetanin.
In January 1979, the Federation of Australian Mandolin Ensembles (FAME) Inc. was formed, with Bruce Morey from Melbourne as its first president. An Australian Mandolin Orchestra toured Germany in May 1980.
Australian popular groups such as My Friend The Chocolate Cake use the mandolin extensively. The McClymonts also use the mandolin, as do Mic Conway's National Junk Band and the Blue Tongue Lizards. Nevertheless, in folk and traditional styles, the mandolin remains more popular in Irish Music and other traditional repertoires.
Belgium
In the early 20th century several mandolin orchestras (Estudiantinas) were active in Belgium. Today only a few groups remain: the Royal Estudiantina la Napolitaine (founded in 1904) in Antwerp, the Brasschaats mandoline orkest in Brasschaat and an orchestra in Mons (Bergen). Gerda Abts is a well-known mandolin virtuoso in Belgium. She teaches mandolin in the music academies of Lier, Wijnegem and Brasschaat, and is also professor of mandolin at the "Koninklijk Conservatorium Artesis Hogeschool Antwerpen". She gives various concerts each year in different ensembles and is in close contact with the Brasschaat mandolin orchestra. Her site is www.gevoeligesnaar.be
Brazil
See Brazilian mandolinists
See also the Portuguese guitar and Portuguese music.
thumb|right|150px|Brazilian Mandolin virtuoso Hamilton de Holanda playing a ten-string bandolim
The mandolin has a particular shape in Brazilian music and is known as the bandolim in the Portuguese language, which is spoken there. The Portuguese have a long tradition of mandolins and mandolin-like instruments and brought their music to their colonies.
In modern Brazilian music, the bandolim is almost exclusively a melody instrument, often accompanied by the chordal accompaniment of the cavaquinho, a steel-stringed instrument similar to a ukulele. The bandolim's popularity has risen and fallen with instrumental folk music styles, especially choro. The later part of the 20th century saw a renaissance of choro in Brazil, and with it, a revival of the country's mandolin tradition. Composer and mandolin virtuoso Jacob do Bandolim did much to popularize the instrument through many recordings, and his influence continues to the present day. Some contemporary mandolin players in Brazil include Jacob's disciple Déo Rian and Hamilton de Holanda (the former a traditional choro-style player, the latter an eclectic innovator). Another is Joel Nascimento.
Croatia
The mandolin is a staple of folk and traditional music on the Croatian coast.
Czech and Slovak republics
See Czech mandolinists, Czech bluegrass
From Italy mandolin music extended in popularity throughout Europe in the early 20th century, with mandolin orchestras appearing throughout the continent.
In the 21st century an increased interest in bluegrass music, especially in Central European countries such as the Czech Republic and Slovak Republic, has inspired many new mandolin players and builders. These players often mix traditional folk elements with bluegrass. Radim Zenkl came from this tradition, emigrating to the United States, where he has played with U.S. stars, including David Grisman and Béla Fleck.
Finland
thumb|right|Finnish soldiers from 1943 with 12-string mandolins. One of the soldiers, Private Törrönen, was credited as the luthier; the mandolin at bottom left is strung.http://www.mandolincafe.com/forum/showthread.php?101928-Finnish-1943-mandolins-from-a-public-war-history-photo-collection Mandolin Cafe forum
Finland has mandolin players rooted in the folk music scene. Prominent names include Petri Hakala, Seppo Sillanpää and Heikki Lahti, who have taught and recorded albums.
France
Prior to the Golden Age of Mandolins, France had a history with the mandolin, with mandolinists playing in Paris until the Napoleonic Wars. The players, teachers and composers included Giovanni Fouchetti, Eduardo Mezzacapo, Gabriele Leon, and Gervasio. During the Golden age itself (1880s-1920s), the mandolin had a strong presence in France. Prominent mandolin players or composers included Jules Cottin and his sister Madeleine Cottin, Jean Pietrapertosa, and Edgar Bara. Paris had dozens of "estudiantina" mandolin orchestras in the early 1900s. Mandolin magazines included L'Estudiantina, Le Plectre, École de la mandolie.
Today, French mandolinists include Patrick Vaillant, a prominent modern player, composer and recording artist for the mandolin, who also organises courses for aspiring players.
Germany
thumb|right|150px|A woman playing a bowlback mandolin in Germany in 1952.
The mandore was known in Germany, prior to the invention of the Neapolitan mandolin. The mandolin spread to Germany with the visits of Italian mandolin virtuosi, including Achille Coronati. The mandolin gained popularity as a folk instrument, especially with the groups of youth participating in the German youth movement, which began in 1897. In the early 20th century the mandolin was popular in the Wandervogel movement (groups of young men and women, hiking and camping, singing and playing instruments), due to the instrument's small size.
The mandolin became increasingly popular in contemporary music during the 20th century. An important German composer for mandolin and mandolin orchestra in the 20th century was Konrad Wölki, who was instrumental in gaining musicological recognition for the mandolin and the mandolin orchestra. The mandolin is still played in Germany in mandolin orchestras, in various chamber music ensembles and as a solo instrument. The instrument is popular enough today that there is an increasing number of professional mandolin players and composers writing new works for the mandolin.
At the college level the mandolin has a presence in the professorial chair for mandolin held by Caterina Lichtenberg, who succeeded Marga Wilden-Hüsgen at the Hochschule für Musik und Tanz Köln, in Wuppertal. Other programs offering specialized training and a diploma in the instrument are taught by Gertrud Weyhofen at the Music Academy Kassel and, more recently, at the University of Music Saar, and by Steffen Trekel at the Hamburg Conservatory.
The instrument was present in the folk revival of the 1970s. One mandolin player was Erich Schmeckenbecher in the duo Zupfgeigenhansel. Comedian Hans Süper played a modified mandolin, which he called his "Bovec," in the duo Colonia.
Greece
The mandolin has a long tradition in the Ionian islands (the Heptanese) and Crete. It has long been played in the Aegean islands outside of the control of the Ottoman Empire. It is common to see choirs accompanied by mandolin players (the mandolinátes) in the Ionian islands and especially in the cities of Corfu, Zakynthos, Lefkada and Kefalonia. The evolution of the repertoire for choir and mandolins (kantádes) occurred during Venetian rule over the islands.
On the island of Crete, along with the lyra and the laouto (lute), the mandolin is one of the main instruments used in Cretan Music. It appeared on Crete around the time of the Venetian rule of the island. Different variants of the mandolin, such as the "mantola," were used to accompany the lyra, the violin, and the laouto. Stelios Foustalierakis reported that the mandolin and the mpoulgari were used to accompany the lyra in the beginning of the 20th century in the city of Rethimno. There are also reports that the mandolin was mostly a woman's musical instrument. Nowadays it is played mainly as a solo instrument in personal and family events on the Ionian islands and Crete.
India
See Indian mandolinists
Mandolin music was used in Indian films as far back as the 1940s by the Raj Kapoor Studios in movies such as Barsaat. The movie Dilwale Dulhania Le Jayenge (1995) used the mandolin in several places.
Adoption of the mandolin in Carnatic music is recent and involves an electric instrument. U. Srinivas has, over the last couple of decades, made his version of the mandolin very popular in India and abroad.
thumb|right|U. Srinivas
Many adaptations of the instrument have been done to cater to the special needs of Indian Carnatic music.
In Indian classical music and Indian light music, the mandolin, which bears little resemblance to the European mandolin, is usually tuned E–B–E–B. As there is no concept of absolute pitch in Indian classical music, any convenient tuning maintaining these relative pitch intervals between the strings can be used. Another prevalent tuning with these intervals is C–G–C–G, which corresponds to sa–pa–sa–pa in the Indian carnatic classical music style. This tuning corresponds to the way violins are tuned for carnatic classical music. This type of mandolin is also used in Bhangra, dance music popular in Punjabi culture.
Use of the mandolin also spread into Afghanistan and the mandolin is often used in Afghan popular music.
Ireland
See Irish mandolinists
The mandolin has become a more common instrument amongst Irish traditional musicians. Fiddle tunes are readily accessible to the mandolin player because of the equivalent tuning and range of the two instruments, and the practically identical (allowing for the lack of frets on the fiddle) left hand fingerings.
Though almost any variety of acoustic mandolin might be adequate for Irish traditional music, virtually all Irish players prefer flat-backed instruments with oval sound holes to the Italian-style bowl-back mandolins or the carved-top mandolins with f-holes favoured by bluegrass mandolinists. The bowl-backs are often too soft-toned to hold their own in a session (as well as having a tendency not to stay in place on the player's lap), whilst the f-hole carved tops tend to sound harsh and overbearing to the traditional ear. The f-hole mandolin, however, does come into its own in a traditional session, where its brighter tone cuts through the sonic clutter of a pub. Greatly preferred for formal performance and recording are flat-topped "Irish-style" mandolins (reminiscent of the WWI-era Martin Army-Navy mandolin) and carved (arch) top mandolins with oval soundholes, such as the Gibson A-style of the 1920s.
Noteworthy Irish mandolinists include Andy Irvine (who, like Johnny Moynihan, almost always tunes the top E down to D, to achieve an open tuning of G–D–A–D), Paul Brady, Mick Moloney, Paul Kelly and Claudine Langille. John Sheahan and the late Barney McKenna, respectively fiddle player and tenor banjo player with the Dubliners, are also accomplished Irish mandolin players. The instruments used are either flat-backed, oval hole examples as described above (made by UK luthier Roger Bucknall of Fylde Guitars), or carved-top, oval hole instruments with arched back (made by Stefan Sobell in Northumberland). The Irish guitarist Rory Gallagher often played the mandolin on stage, and he most famously used it in the song "Going To My Hometown."
Israel
Israel has four especially prominent mandolinists: Avi Avital, Alon Sariel, Jacob Reuven, and Tom Cohen, as well as the composer and mandolinist Shaul Bustan, who also writes new music for the instrument.
Italy
See Italian mandolinists
Important performers in the Italian tradition include Raffaele Calace (luthier, virtuoso and composer of 180 works for many instruments including mandolin), Pietro Denis (who also composed Sonata for mandolin & continuo No. 1 in D major and Sonata No. 3), Giovanni Fouchetti, Gabriele Leone, Carlo Munier (1859–1911), Giuseppe Branzoli (1835–1909), Giovanni Gioviale (1885–1949) and Silvio Ranieri (1882–1956).Ian Pommerenke, The Mandolin in the early 18th Century, Lanarkshire Guitar and Mandolin Association Newsletter, November 2006.
Antonio Vivaldi composed a mandolin concerto (Concerto in C major, RV 425) and two concertos for two mandolins and orchestra. Wolfgang Amadeus Mozart included the instrument in his 1787 opera Don Giovanni, and Beethoven composed four small pieces for it. Antonio Maria Bononcini composed La conquista delle Spagne di Scipione Africano il giovane in 1707 and George Frideric Handel composed Alexander Balus in 1748. Others include Giovanni Battista Gervasio (Sonata in D major for Mandolin and Basso Continuo), Giuseppe Giuliano (Sonata in D major for Mandolin and Basso Continuo), Emanuele Barbella (Sonata in D major for Mandolin and Basso Continuo), Domenico Scarlatti (Sonata no.54 (K.89) in D minor for Mandolin and Basso Continuo), and Addiego Guerra (Sonata in G major for Mandolin and Basso Continuo).Frances Taylor, mandolinist, Italian Mandolin Sonatas, Claudio Records.
More contemporary composers for the mandolin include Giuseppe Anedda (a virtuoso performer and teacher of the first chair of the Conservatory of Italian mandolin), Carlo Aonzo and Dorina Frati.
Japan
Instruments of the mandolin family are popular in Japan, particularly Neapolitan (round-back) style instruments, and Roman-Embergher style mandolins are still being made there. Japan became seriously interested in mandolins at the beginning of the 20th century as part of its process of westernization.
Where interest in the mandolin declined in the United States and parts of Europe after World War I, in Japan there was a boom, with orchestras being formed all over the country.
Connections to the West, including cultural connections with World War II ally Italy, were forming. One musical connection that encouraged mandolin music growth was a visit by mandolin virtuoso Raffaele Calace, who toured extensively at the end of 1924, into 1925, and who gave a performance for the Japanese emperor. Another visiting mandolin virtuoso, Samuel Adelstein, toured from his home in the United States.
The expansion of mandolin use continued after World War II through the late 1960s, and Japan still maintains a strong classical music tradition using mandolins, with active orchestras and university music programs. New orchestras were founded and new orchestral compositions composed. Japanese mandolin orchestras today may consist of up to 40 or 50 members, and can include woodwind, percussion, and brass sections. Japan also maintains an extensive collection of 20th-century mandolin music from Europe and one of the most complete collections of mandolin magazines from mandolin's golden age, purchased by Morishige Takei.
thumb|Morishige Takei in 1913.
Morishige Takei (1890–1949), who studied Italian at Tokyo College of Language and was a member of the court of Emperor Hirohito, established the mandolin orchestra in the Italian style before World War II. He was also a major composer, with 114 compositions for mandolin.
Owing to Takei's authority, the military government did not persecute Japanese mandolinists, so Japanese mandolin orchestras continued to perform the old Italian works after World War II, and they remain prosperous today.
Another composer, Jiro Nakano (1902–2000), arranged many pre-war Italian works originally written for regular orchestras or wind ensembles as new repertoire for Japanese mandolin orchestras.
Original compositions for mandolin orchestras were composed increasingly after World War II. Seiichi Suzuki (1901–1980) composed music for early Kurosawa films. Others include Tadashi Hattori (1908–2008), and Hiroshi Ohguri (1918–1982). Ohguri was influenced by Béla Bartók and composed many symphonic works for Japanese mandolin orchestras. Yasuo Kuwahara (1946–2003) used German techniques. Many of his works were published in Germany.
Latvia
An online article describes Elfa Heifecs, a Jewish musician from Riga, Latvia, who organized a mandolin orchestra there around 1930.
Macedonia
Macedonia has a mandolin tradition that dates back to before World War II; the United States Holocaust Memorial Museum has a 1929 photo of a Jewish mandolin orchestra on display online. In modern times, the Skopje Mandolin Orchestra was formed in 2001 and has performed in international competitions. Mandolinists associated with the Skopje Mandolin Orchestra include Ramadan Shukri, Suzana Turundzieva, Mustafa Imeri, Serafina Fantauzo, Gligor Popovski and Lazar Sandev.
New Zealand
The Auckland Mandolinata mandolin orchestra was formed in 1969 by Doris Flameling (1932–2004). Soon after arriving from the Netherlands with her family, Doris started teaching guitar and mandolin in West Auckland. In 1969, she formed a small ensemble for her pupils, which eventually developed into a full-size mandolin orchestra that survives today. Doris was the musical director and conductor of this orchestra for many years; it is currently led by Bryan Holden (conductor).
The early history of the mandolin in New Zealand is currently being researched by members of the Auckland Mandolinata.
Poland
The mandolin was used as a folk instrument throughout the eastern part of Europe, including Poland, Ukraine and Slovakia, in the early part of the 20th century. Mandolin orchestras were present as well. One example was the Mandolin Orchestra of Ger (Gora Kalwaria, Poland); a photo exists of its Jewish members in the 1930s, before the town's Jewish population was sent to the Treblinka extermination camp. A similar photo exists of a mandolin orchestra in the community of Grodno.
Portugal
The bandolim (Portuguese for "mandolin") was a favourite instrument within the Portuguese bourgeoisie of the 19th century, but it quickly spread to other settings, where it joined other instruments. Today the mandolin is part of the traditional and folk culture of Portuguese singing groups, and most of the mandolin scene in Portugal is on the island of Madeira, which has over 17 active mandolin orchestras and tunas. The mandolin virtuoso Fabio Machado is one of Portugal's most accomplished players, and Norberto Cruz is a highly regarded mandolinist and maestro. Diogo Gomes is known as the first mandolin player in the country to study the instrument in a professional course, at the Conservatoire School of Arts of Madeira. The Portuguese influence brought the mandolin to Brazil.
Romania
See Romanian mandolinists
Russia
The mandolin was popular in Russia in the pre-Soviet era and after. The instrument was used in movies, including The Adventures of Pinocchio (Приключения Буратино). The cheap, flat "Portuguese"-style mandolin was widespread in the Soviet Union. The mandolin was actively used as a folk instrument as well. Dave Apollon was a well-known (U.S.-based) player born in Kiev, then part of Russia.
South Africa
Mandolin has been a prominent instrument in the recordings of Johnny Clegg and his bands Juluka and Savuka. Since 1992, Andy Innes has been the mandolinist for Johnny Clegg and Savuka.
Sri Lanka
The mandolin was brought to Sri Lanka by the Portuguese, who colonized Sri Lanka from 1505 to 1658. The instrument has been heavily used in baila, a genre of Sri Lankan music formed from a mixture of Portuguese, African and Sinhala music. For example, the mandolin features prominently in M.S. Fernando's baila song, "Bola Bola Meti". Modern mandolinists include Antony Surendra and V. Hemapala Perera.
Turkey
Turkey has been the home of a mandolin-banjo manufacturer, Cümbüş, since the early 20th century. The country had what was claimed to be its first mandolin festival in June 2015, and has at least one mandolin orchestra.
One professional musician who uses the mandolin is Sumru Ağıryürüyen, who is known for singing and playing many styles of music, including world music, klezmer, Turkish folk, Balkan folk, blues, jazz, krautrock, protest rock and maqam.
Ukraine
The mandolin has been played in Ukraine, and pictures of it being played today can be found online. One famous former resident of Kiev was Dave Apollon.http://www.blatata.com/biografii/bio01/9824-djejjv-apollon-dave-apollon.html Other photos dating back to the early 20th century show Ukrainian mandolin orchestras in Canada and the United States.
One of the offshoots of Ukrainian immigration to North America is the Toronto Mandolin Orchestra (possibly the oldest mandolin orchestra in North America), which claims Ukrainians among its founders. According to the orchestra's website, mandolins were popular in Ukraine in the early 20th century but never reached folk-instrument status there. Ukrainian immigrants of the period took the instruments with them to their new countries.
The instrument has had to compete in Ukraine with native instruments that have been revived, such as the kobza. The orchestral variant of the kobza is similar to the mandolin, having four strings and being tuned in fifths.
United Kingdom
See British mandolinists
The mandolin has been used extensively in the traditional music of England and Scotland for generations. Simon Mayor is a prominent British player who has produced six solo albums, instructional books and DVDs, as well as recordings with his mandolin quartet the Mandolinquents.
The instrument has also found its way into British rock music. The mandolin was played by Mike Oldfield (and introduced by Vivian Stanshall) on Oldfield's album Tubular Bells, as well as on a number of his subsequent albums, particularly prominently on Hergest Ridge (1974) and Ommadawn (1975). It was used extensively by the British folk-rock band Lindisfarne, who featured two members on the instrument, Ray Jackson and Simon Cowe, and whose "Fog on the Tyne" was the biggest-selling UK album of 1971–72. The instrument was also used extensively in the UK folk revival of the 1960s and 1970s, with bands such as Fairport Convention and Steeleye Span taking it on as the lead instrument in many of their songs. "Maggie May" by Rod Stewart, which hit No. 1 on both the British charts and the Billboard Hot 100, also featured Jackson's playing.
It has also been used by other British rock musicians. Led Zeppelin's bassist John Paul Jones is an accomplished mandolin player and has recorded numerous songs on mandolin, including "Going to California" and "That's the Way"; the mandolin part on "The Battle of Evermore" is played by Jimmy Page, who composed the song. Other Led Zeppelin songs featuring mandolin are "Hey Hey What Can I Do" and "Black Country Woman". Pete Townshend of the Who played mandolin on the track "Mike Post Theme", along with many other tracks on Endless Wire. McGuinness Flint, for whom Graham Lyle played the mandolin on their most successful single, "When I'm Dead and Gone", is another example. Lyle was also briefly a member of Ronnie Lane's Slim Chance, and played mandolin on their hit "How Come". One of the more prominent early mandolin players in popular music was Robin Williamson of the Incredible String Band. Ian Anderson of Jethro Tull is a highly accomplished mandolin player, as heard on the track "Pussy Willow", as is his guitarist Martin Barre. The popular song "Please Please Please Let Me Get What I Want" by the Smiths featured a mandolin solo played by Johnny Marr. More recently, the Glasgow-based band Sons and Daughters featured the mandolin, played by Ailidh Lennon, on tracks such as "Fight", "Start to End", and "Medicine". British folk-punk icons the Levellers also regularly use the mandolin in their songs, and newer bands such as South London's Indigo Moss use it throughout their recordings and live gigs. The mandolin has also featured in the playing of Matthew Bellamy in the rock band Muse. It also forms the basis of Paul McCartney's 2007 hit "Dance Tonight." That was not the first time a Beatle played a mandolin, however; that distinction goes to George Harrison on "Gone Troppo", the title cut from the 1982 album of the same name. The mandolin is taught in Lanarkshire by the Lanarkshire Guitar and Mandolin Association to over 100 people. More recently, the hard rock supergroup Them Crooked Vultures, featuring former Led Zeppelin bassist John Paul Jones, have played a song built primarily around the mandolin; it was left off their debut album.
In the Classical style, performers such as Hugo D'Alton, Alison Stephens and Michael Hooper have continued to play music by British composers such as Michael Finnissy, James Humberstone and Elspeth Brooke.
United States
See American mandolinists, American bluegrass mandolinists, Canadian mandolinists, and Canadian bluegrass mandolinists
Mandolin orchestras and classical-music virtuosos
See also: mandolin orchestras
thumb|right|250px|The Spanish Students, who were first brought to the United States by Henry Eugene Abbey's firm, performing with his "Humpty Dumpty Combination." This poster was for a Manhattan performance on February 3, 1880, at Booth's Theatre on the corner of 6th Avenue and 23rd Street.
The mandolin's popularity in the United States was spurred by the success of a group of touring young European musicians known as the Estudiantina Figaro, or in the United States, simply the "Spanish Students." The group landed in the U.S. on January 2, 1880, in New York City, and played in Boston and New York to wildly enthusiastic crowds. Ironically, this ensemble did not play mandolins but rather bandurrias, which are also small, double-strung instruments that resemble the mandolin. The success of the Figaro Spanish Students spawned other groups who imitated their musical style and costumes. An Italian musician, Carlo Curti, hastily started a musical ensemble after seeing the Figaro Spanish Students perform; his group of Italian-born Americans called themselves the "Original Spanish Students," counting on the American public not to know the difference between the Spanish bandurrias and Italian mandolins. The imitators' use of mandolins helped to generate enormous public interest in an instrument previously relatively unknown in the United States. Zerega's Spanish Troubadours was a quintet of three mandolins and two guitars. They were not Spanish; Zerega was the stage name of Indiana-born Edgar E. Hill, who performed with his wife May. The couple, both Americans, had eloped to London but toured America in 1887."The Troubadoars", Los Angeles Herald, Volume 27, Number 154, 6 September 1887, page 9"Identifies the New York Suicide", The Chicago Daily Tribune, Vol. LV., No. 140, Tuesday May 19, 1896, front page
left|thumb|Valentine Abt posing with a Gibson mandolin in a 1912 endorsement advertisement for the instrument. Abt called the Gibson Company "the pioneer of plectrum instrument making in America" and mentioned its carrying power.The Crescendo, Jan 1912, page 1, advertisement for Gibson by Valentine Abt.
Mandolin awareness in the United States blossomed in the 1880s, as the instrument became part of a fad that continued into the mid-1920s. According to Clarence L. Partee, the first mandolin made in the United States was made in 1883 or 1884 by Joseph Bohmann, who was an established maker of violins in Chicago. Partee characterized the early instrument as being larger than the European instruments he was used to, with a "peculiar shape" and "crude construction," and said that the quality improved, until American instruments were "superior" to imported instruments. At the time, Partee was using an imported French-made mandolin.
Instruments were marketed by teacher-dealers, much like the title character in the popular musical The Music Man. Often, these teacher-dealers conducted mandolin orchestras: groups of four to fifty musicians who played various mandolin family instruments. However, alongside the teacher-dealers were serious musicians, working to create a place for the instrument in classical music, ragtime and jazz. Like the teacher-dealers, they traveled the U.S., making recordings, giving performances and teaching individuals and mandolin orchestras. Samuel Siegel played mandolin in vaudeville and became one of America's preeminent mandolinists. Seth Weeks was an African American who not only taught and performed in the United States, but also in Europe, where he made recordings. Another pioneering African American musician and director who made his start with a mandolin orchestra was composer James Reese Europe. W. Eugene Page toured the country with a group, and was well known for his mandolin and mandola performances.University of Iowa Digital Collections, concert announcement for W. Eugene Page and Florence Phelps McCuneSemi Weekly Iowa State Reporter, Waterloo, Iowa, July 3, 1900, page 5. Other names include Valentine Abt, Samuel Adelstein, William Place, Jr., and Aubrey Stauffer.
The instrument was primarily used in an ensemble setting well into the 1930s, and although the fad died out at the beginning of the 1930s, the instruments that were developed for the orchestra found a new home in bluegrass. The famous Lloyd Loar Master Model from Gibson (1923) was designed to boost the flagging interest in mandolin ensembles, with little success. However, the "Loar" became the defining instrument of bluegrass music when Bill Monroe purchased F-5 S/N 73987 in a Florida barbershop in 1943 and popularized it as his main instrument.
The mandolin orchestras never completely went away, however. In fact, along with all the other musical forms the mandolin is involved with, the mandolin ensemble (groups usually arranged like the string section of a modern symphony orchestra, with first mandolins, second mandolins, mandolas, mandocellos, mando-basses, and guitars, and sometimes supplemented by other instruments) continues to grow in popularity. Since the mid-nineties, several public-school mandolin-based guitar programs have blossomed around the country, including Fretworks Mandolin and Guitar Orchestra, the first of its kind. The national organization, Classical Mandolin Society of America, founded by Norman Levine, represents these groups. Prominent modern mandolinists and composers for mandolin in the classical music tradition include Samuel Firstman, Howard Fry, Rudy Cipolla, Dave Apollon, Neil Gladd, Evan Marshall, Marilynn Mair and Mark Davis (the Mair-Davis Duo), Brian Israel, David Evans, Emanuil Shynkman, Radim Zenkl, David Del Tredici and Ernst Krenek.
Bluegrass, Blues, and the jug band
When Cowan Powers and his family recorded their old-time music from 1924 to 1926, his daughter Orpha Powers was one of the earliest known southern-music artists to record with the mandolin. By the 1930s, single mandolins were becoming more commonly used in southern string band music, most notably by brother duets such as the sedate Blue Sky Boys (Bill Bolick and Earl Bolick) and the more hard-driving Monroe Brothers (Bill Monroe and Charlie Monroe). However, the mandolin's modern popularity in country music can be directly traced to one man: Bill Monroe, the father of bluegrass music. After the Monroe Brothers broke up in 1939, Bill Monroe formed his own group, which after a brief time was named the Blue Grass Boys, and completed the transition of mandolin styles from the "parlor" sound typical of brother duets to the modern "bluegrass" style. He joined the Grand Ole Opry in 1939, and its powerful clear-channel broadcast signal on WSM-AM spread his style throughout the South, directly inspiring many musicians to take up the mandolin. Monroe famously played a Gibson F-5 mandolin, signed and dated July 9, 1923, by Lloyd Loar, chief acoustic engineer at Gibson. The F-5 has since become the mandolin most imitated, tonally and aesthetically, by modern builders.
Monroe's style involved playing lead melodies in the style of a fiddler, and also a percussive chording sound referred to as "the chop" for the sound made by the quickly struck and muted strings. He also perfected a sparse, percussive blues style, especially up the neck in keys that had not been used much in country music, notably B and E. He emphasized a powerful, syncopated right hand at the expense of left-hand virtuosity. Monroe's most influential follower of the second generation is Frank Wakefield; more recent exponents include Mike Compton of the Nashville Bluegrass Band and David Long, who often tour as a duo. Tiny Moore of the Texas Playboys developed an electric five-string mandolin and helped popularize the instrument in Western swing music.
Other major bluegrass mandolinists who emerged in the early 1950s and are still active include Jesse McReynolds (of Jim and Jesse), who invented a syncopated, banjo-roll-like style called crosspicking, and Bobby Osborne of the Osborne Brothers, who is a master of clarity and sparkling single-note runs. Highly respected and influential modern bluegrass players include Herschel Sizemore, Doyle Lawson, and the multi-genre Sam Bush, who is equally at home with old-time fiddle tunes, rock, reggae, and jazz. Ronnie McCoury of the Del McCoury Band has won numerous awards for his Monroe-influenced playing. John Duffey of the original Country Gentlemen and later the Seldom Scene did much to popularize the bluegrass mandolin among folk and urban audiences, especially on the east coast and in the Washington, D.C. area.
Jethro Burns, best known as half of the comedy duo Homer and Jethro, was also the first important jazz mandolinist. Tiny Moore popularized the mandolin in Western swing music; he initially played an 8-string Gibson but switched after 1952 to a 5-string solidbody electric instrument built by Paul Bigsby. Modern players David Grisman, Sam Bush, and Mike Marshall, among others, have worked since the early 1970s to demonstrate the mandolin's versatility for all styles of music. Chris Thile of California is a well-known player who has accomplished many feats of traditional bluegrass, classical, contemporary pop and rock; the band Nickel Creek featured his playing in its blend of traditional and pop styles, and he now plays in his band Punch Brothers. Though most commonly associated with bluegrass, the mandolin has also been used extensively in country music over the years; well-known players include Marty Stuart, Vince Gill, and Ricky Skaggs.
The mandolin has also been used in blues music, most notably by Ry Cooder, who performed covers on his earliest recordings, as well as Yank Rachell, Johnny "Man" Young, Carl Martin, and Gerry Hundt. Howard Armstrong, who is famous for blues violin, got his start with his father's mandolin and played in string bands similar to the other Tennessee string bands he came into contact with, with band makeup including "mandolins and fiddles and guitars and banjos. And once in a while they would ease a little ukulele in there and a bass fiddle." Other blues players from the era's string bands include Willie Black (Whistler And His Jug Band), Dink Brister, Jim Hill, Charles Johnson, Coley Jones (Dallas String Band), Bobby Leecan (Need More Band), Alfred Martin, Charlie McCoy (1909–1950), Al Miller, Matthew Prater, and Herb Quinn.
It saw some use in jug band music, since that craze began as the mandolin fad was waning, and there were plenty of instruments available at relatively low cost.
Rock and Celtic
thumb|upright|Levon Helm playing mandolin in 1971
The mandolin has been used occasionally in rock music, first appearing in the psychedelic era of the late 1960s. Levon Helm of the Band occasionally moved from his drum kit to play mandolin, most notably on "Rag Mama Rag", "Rockin' Chair", and "Evangeline". Ian Anderson of Jethro Tull played mandolin on "Fat Man", from their second album, Stand Up, and also occasionally on later releases. Rod Stewart's 1971 No. 1 hit "Maggie May" features a significant mandolin riff. David Grisman played mandolin on two Grateful Dead songs on the American Beauty album, "Friend of the Devil" and "Ripple", which became instant favorites among amateur pickers at jam sessions and campground gatherings. John Paul Jones and Jimmy Page both played mandolin on Led Zeppelin songs. Dash Crofts of the soft rock duo Seals and Crofts extensively used mandolin in their repertoire during the 1970s. Styx released the song "Boat on the River" in 1980, which featured Tommy Shaw on vocals and mandolin. The song did not chart in the United States but was popular in much of Europe and the Philippines.
Some rock musicians today use mandolins, often single-stringed electric models rather than double-stringed acoustic mandolins. One example is Tim Brennan of the Irish-American punk rock band Dropkick Murphys. In addition to electric guitar, bass, and drums, the band uses several instruments associated with traditional Celtic music, including mandolin, tin whistle, and Great Highland bagpipes. The band explains that these instruments accentuate the growling sound they favor. The 1991 R.E.M. hit "Losing My Religion" was driven by a few simple mandolin licks played by guitarist Peter Buck, who also played the mandolin in nearly a dozen other songs; the single peaked at No. 4 on the Billboard Hot 100 chart (No. 1 on the rock and alternative charts).[ Billboard Hot 100] Luther Dickinson of North Mississippi Allstars and the Black Crowes has made frequent use of the mandolin, most notably on the Black Crowes song "Locust Street." Armenian-American rock group System of a Down makes extensive use of the mandolin on their 2005 double album Mezmerize/Hypnotize. Pop punk band Green Day has used a mandolin on several occasions, especially on their 2000 album Warning. Boyd Tinsley, violin player of the Dave Matthews Band, has been using an electric mandolin since 2005. Frontman Colin Meloy and guitarist Chris Funk of the Decemberists regularly employ the mandolin in the band's music. Nancy Wilson, rhythm guitarist of Heart, uses a mandolin in Heart's song "Dream of the Archer" from the album Little Queen, as well as in Heart's cover of Led Zeppelin's song "The Battle of Evermore." "Show Me Heaven" by Maria McKee, the theme song to the film Days of Thunder, prominently features a mandolin. The popular alt-rock group Imagine Dragons feature the mandolin on a few of their songs, most prominently "It's Time". Folk rock band the Lumineers use a mandolin in the background of their 2012 hit "Ho Hey".
Many folk punk bands also feature the mandolin. One such band is Days N' Daze, who make use of the mandolin, banjo, ukulele, as well as several other acoustic plucked string instruments. Other folk punk acts include Blackbird Raum, and Johnny Hobo and the Freight Trains.
Venezuela
As in Brazil, the mandolin has played an important role in the music of Venezuela. It has enjoyed a privileged position as the main melodic instrument in several different regions of the country. Specifically, the eastern states of Sucre, Nueva Esparta, Anzoategui and Monagas have made the mandolin the main instrument in their versions of Joropo as well as Puntos, Jotas, Polos, Fulias, Merengues and Malagueñas. Also, in the west of the country the sound of the mandolin is intrinsically associated with the regional genres of the Venezuelan Andes: Bambucos, Pasillos, Pasodobles, and Waltzes. In the western city of Maracaibo the mandolin has been played in Decimas, Danzas and Contradanzas Zulianas; in the capital, Caracas, the Merengue Rucaneao, Pasodobles and Waltzes have also been played with mandolin for almost a century. Today, Venezuelan mandolinists include an important group of virtuoso players and ensembles such as Alberto Valderrama, Jesus Rengel, Ricardo Sandoval, Saul Vera, and Cristobal Soto.
Notable literature
Art or "classical" music
The tradition of so-called "classical music" for the mandolin has been somewhat spotty, due to its being widely perceived as a "folk" instrument. Significant composers did write music specifically for the mandolin, but few large works were composed for it by the most widely regarded composers. The total number of such works is rather small in comparison to, say, those composed for violin. One result of this dearth is that there have been few positions for mandolinists in regular orchestras. To fill this gap in the literature, mandolin orchestras have traditionally played many arrangements of music written for regular orchestras or other ensembles. Some players have sought out contemporary composers to solicit new works.
Furthermore, of the works that have been written for mandolin from the 18th century onward, many have been lost or forgotten. Some of these await discovery in museums, libraries, and archives. One example of rediscovered 18th-century music for mandolin and ensembles with mandolins is the Gimo collection, assembled in the first half of 1762 by Jean Lefebure. Lefebure collected the music in Italy, and it was forgotten until the manuscripts were rediscovered.
Vivaldi created several concertos for mandolinos and orchestra: one for 4-course mandolino, strings and continuo in C major (RV 425); one for two 5-course mandolinos, strings and continuo in G major (RV 532); and a concerto in C major for two mandolins, two violins "in tromba", two recorders (flûtes à bec), two salmoè, two theorbos, cello, strings and continuo (P. 16).
Beethoven composed mandolin music and enjoyed playing the mandolin. His 4 small pieces date from 1796: Sonatine WoO 43a; Adagio ma non troppo WoO 43b; Sonatine WoO 44a and Andante con Variazioni WoO 44b.
The opera Don Giovanni by Mozart (1787) includes mandolin parts, including the accompaniment to the famous aria Deh vieni alla finestra, and Verdi's opera Otello calls for guzla accompaniment in the aria Dove guardi splendono raggi, but the part is commonly performed on mandolin.
Gustav Mahler used the mandolin in his Symphony No. 7, Symphony No. 8 and Das Lied von der Erde.
Parts for mandolin are included in works by 20th-century composers such as Schoenberg (Variations, Op. 31), Stravinsky (Agon), Prokofiev (Romeo and Juliet) and Webern (Five Pieces for Orchestra, Op. 10).
Among the most important European mandolin composers of the 20th century are Raffaele Calace (composer, performer and luthier) and Giuseppe Anedda (virtuoso performer and professor of the first chair of the Conservatory of Italian Mandolin, Padua, 1975). Today representatives of Italian classical music and Italian classical-contemporary music include Ugo Orlandi, Carlo Aonzo, Dorina Frati, Mauro Squillante and Duilio Galfetti.
Japanese composers also produced orchestral music for mandolin in the 20th century, but these works are not well known outside Japan.
Traditional mandolin orchestras remain especially popular in Japan and Germany, but also exist throughout the United States, Europe and the rest of the world. They perform works composed for mandolin family instruments, or re-orchestrations of traditional pieces. The structure of a contemporary traditional mandolin orchestra consists of: first and second mandolins, mandolas (either octave mandolas, tuned an octave below the mandolin, or tenor mandolas, tuned like the viola), mandocellos (tuned like the cello), and bass instruments (conventional string bass or, rarely, mandobasses). Smaller ensembles, such as quartets composed of two mandolins, mandola, and mandocello, may also be found.
Unaccompanied solo
Accompaniment with solo
Duo
Concerto
Mandolin in the orchestra
See also
References
Further reading
External links
Accademia Mandolinistica Pugliese (Puglia-Italy)
The Mandolin, The Serenade of Italy, podcast and slideshow
The Mandolin Tools, Freeware Windows application with chords and scales
Mandolin Cafe, a popular and eclectic website focusing on mandolin culture and community
theMandolinTuner, a mandolin site focusing on mandolin tuning, chords and tabs
List of mandolin method books from 1629 to present
List of composers for the mandolin with more than 1900 names. Includes mandolin solos, ensembles, concertos, chamber music, and bluegrass. Japanese website, but needed parts are in English
Works for orchestras that contain small parts for mandolin. Japanese website, but needed parts are in English.
19 works from Italian composers, dating from the mandolin's first rise, copied from manuscript into modern notation.
Category:Greek musical instruments
Category:Italian musical instruments
Category:Necked bowl lutes
Category:Ukrainian musical instruments
Category:Venezuelan musical instruments
Category:Necked box lutes
Category:Baroque instruments | 18,888 | 2017-01 |
Great power | thumb|upright 1.6||Great powers are recognized in an international structure such as the United Nations Security Council. Shown here is the Security Council Chamber.
A great power is a sovereign state that is recognized as having the ability and expertise to exert its influence on a global scale. Great powers characteristically possess military and economic strength, as well as diplomatic and soft power influence, which may cause middle or small powers to consider the great powers' opinions before taking actions of their own. International relations theorists have posited that great power status can be characterized into power capabilities, spatial aspects, and status dimensions.
While some nations are widely considered to be great powers, there is no definitive list of them. Sometimes the status of great powers is formally recognized in conferences such as the Congress of ViennaDanilovic, Vesna. "When the Stakes Are High—Deterrence and Conflict among Major Powers", University of Michigan Press (2002), pp 27, 225–228 (PDF chapter downloads) (PDF copy). or the United Nations Security Council (China, France, Russia, the United Kingdom and the United States serve as the body's five permanent members). Accordingly, the great powers after the Cold War are Britain, China, France, Germany, Japan, Russia, and the United States p.59 The status of great powers has also been formally and informally recognised in forums such as the G7 and the now defunct G8. (see section on The G6/G7: great power governance)Contemporary Concert Diplomacy: The Seven-Power Summit as an International Concert, Professor John Kirton (The G8 as a Concert of Great Powers)Tables of Sciences Po and Documentation Francaise: Russia y las grandes potencias and G8 et Chine (2004)
The term "great power" was first used to represent the most important powers in Europe during the post-Napoleonic era. The "Great Powers" constituted the "Concert of Europe" and claimed the right to joint enforcement of the postwar treaties.Charles Webster, (ed), British Diplomacy 1813–1815: Selected Documents Dealing with the Reconciliation of Europe, (1931), p307. The formalization of the division between small powersToje, A. (2010). The European Union as a small power: After the post-Cold War. New York: Palgrave Macmillan. and great powers came about with the signing of the Treaty of Chaumont in 1814. Since then, the international balance of power has shifted numerous times, most dramatically during World War I and World War II. In literature, alternative terms for great power are often world powerDictionary - World power or major power,Dictionary - Major power but these terms can also be interchangeable with superpower.Thesaurus - World Power
Characteristics
There are no set or defined characteristics of a great power. These characteristics have often been treated as empirical, self-evident to the assessor. However, this approach has the disadvantage of subjectivity. As a result, there have been attempts to derive some common criteria and to treat these as essential elements of great power status.
Early writings on the subject tended to judge states by the realist criterion, as expressed by the historian A. J. P. Taylor when he noted that "The test of a great power is the test of strength for war." Later writers have expanded this test, attempting to define power in terms of overall military, economic, and political capacity.Organski, AFK – World Politics, Knopf (1958) Kenneth Waltz, the founder of the neorealist theory of international relations, uses a set of five criteria to determine great power: population and territory; resource endowment; economic capability; political stability and competence; and military strength. These expanded criteria can be divided into three heads: power capabilities, spatial aspects, and status.Danilovic, Vesna. "When the Stakes Are High—Deterrence and Conflict among Major Powers", University of Michigan Press (2002), pp 27, 225–228 (PDF chapter downloads) (PDF copy).
Power dimensions
thumb|Leopold von Ranke was one of the first to attempt to scientifically document the great powers.
As noted above, for many, power capabilities were the sole criterion. However, even under the more expansive tests, power retains a vital place.
This aspect has received mixed treatment, with some confusion as to the degree of power required. Writers have approached the concept of great power with differing conceptualizations of the world situation, from multi-polarity to overwhelming hegemony. In his essay, 'French Diplomacy in the Postwar Period', the French historian Jean-Baptiste Duroselle spoke of the concept of multi-polarity: "A Great power is one which is capable of preserving its own independence against any other single power."contained on page 204 in: Kertesz and Fitsomons (eds) – Diplomacy in a Changing World, University of Notre Dame Press (1960)
This differed from earlier writers, notably from Leopold von Ranke, who clearly had a different idea of the world situation. In his essay 'The Great Powers', written in 1833, von Ranke wrote: "If one could establish as a definition of a Great power that it must be able to maintain itself against all others, even when they are united, then Frederick has raised Prussia to that position."Iggers and von Moltke "In the Theory and Practice of History", Bobbs-Merril (1973) These positions have been the subject of criticism.
Spatial dimension
All states have a geographic scope of interests, actions, or projected power. This is a crucial factor in distinguishing a great power from a regional power; by definition the scope of a regional power is restricted to its region. It has been suggested that a great power should be possessed of actual influence throughout the scope of the prevailing international system. Arnold J. Toynbee, for example, observes that "Great power may be defined as a political force exerting an effect co-extensive with the widest range of the society in which it operates. The Great powers of 1914 were 'world-powers' because Western society had recently become 'world-wide'."
Other suggestions have been made that a great power should have the capacity to engage in extra-regional affairs and that a great power ought to be possessed of extra-regional interests, two propositions which are often closely connected.Stoll, Richard J – State Power, World Views, and the Major Powers, Contained in: Stoll and Ward (eds) – Power in World Politics, Lynne Rienner (1989)
Status dimension
Formal or informal acknowledgment of a nation's great-power status has also been a criterion for being a great power. As political scientist George Modelski notes, "The status of Great power is sometimes confused with the condition of being powerful, The office, as it is known, did in fact evolve from the role played by the great military states in earlier periods ... But the Great power system institutionalizes the position of the powerful state in a web of rights and obligations."
This approach restricts analysis to the post-Congress of Vienna epoch; it being there that great powers were first formally recognized. In the absence of such a formal act of recognition it has been suggested that great power status can arise by implication, by judging the nature of a state's relations with other great powers.Domke, William K – Power, Political Capacity, and Security in the Global System, Contained in: Stoll and Ward (eds) – Power in World Politics, Lynn Rienner (1989)
A further option is to examine a state's willingness to act as a great power. As a nation will seldom declare that it is acting as such, this usually entails a retrospective examination of state conduct. As a result, this is of limited use in establishing the nature of contemporary powers, at least not without the exercise of subjective observation.
Other important criteria throughout history are that great powers should have enough influence to be included in discussions of political and diplomatic questions of the day, and have influence on the final outcome and resolution. Historically, when major political questions were addressed, several great powers met to discuss them. Before the era of groups like the United Nations, participants of such meetings were not officially named, but were decided based on their great power status. These were conferences which settled important questions based on major historical events. This might mean deciding the political resolution of various geographical and nationalist claims following a major conflict, or other contexts.
There are several historical conferences and treaties which display this pattern, such as the Congress of Vienna, the Congress of Berlin, the discussions of the Treaty of Versailles which redrew the map of Europe, and the Treaty of Westphalia.
History
thumb|The Congress of Vienna by Jean-Baptiste Isabey, 1819.
Different sets of great, or significant, powers have existed throughout history; however, the term "great power" has only been used in scholarly or diplomatic discourse since the Congress of Vienna in 1815. The Congress established the Concert of Europe as an attempt to preserve peace after the years of Napoleonic Wars.
Lord Castlereagh, the British Foreign Secretary, first used the term in its diplomatic context, in a letter sent on February 13, 1814: "It affords me great satisfaction to acquaint you that there is every prospect of the Congress terminating with a general accord and Guarantee between the Great powers of Europe, with a determination to support the arrangement agreed upon, and to turn the general influence and if necessary the general arms against the Power that shall first attempt to disturb the Continental peace."
The Congress of Vienna consisted of five main powers: the Austrian Empire, France, Prussia, Russia, and the United Kingdom. These five primary participants constituted the original great powers as we know the term today. Other powers, such as Spain, Portugal, and Sweden, which was a great power during the 17th century, were consulted on certain specific issues, but they were not full participants. Hanover, Bavaria, and Württemberg were also consulted on issues relating to Germany.
Of the five original great powers recognised at the Congress of Vienna, only France and the United Kingdom have maintained that status continuously to the present day, although France was defeated in the Franco-Prussian War and occupied during World War II. After the Congress of Vienna, the British Empire emerged as the pre-eminent power, due to its navy and the extent of its territories, which signalled the beginning of the Pax Britannica and of the Great Game between the UK and Russia. The balance of power between the Great Powers became a major influence in European politics, prompting Otto von Bismarck to say "All politics reduces itself to this formula: try to be one of three, as long as the world is governed by the unstable equilibrium of five great powers."Britain And Germany: from Ally to Enemy
Over time, the relative power of these five nations fluctuated, which by the dawn of the 20th century had served to create an entirely different balance of power. Some, such as the United Kingdom and Prussia (as the founder of the newly formed German state), experienced continued economic growth and political power. Others, such as Russia and Austria-Hungary, stagnated. At the same time, other states were emerging and expanding in power, largely through the process of industrialization. Among the countries seeking to attain great power status were Italy after the Risorgimento, Japan after the Meiji Restoration, and the United States after its Civil War. By the dawn of the 20th century, the balance of world power had changed substantially since the Congress of Vienna. The Eight-Nation Alliance was a belligerent alliance of eight nations against the Boxer Rebellion in China. It formed in 1900 and consisted of the five Congress powers plus Italy, Spain, Japan, and the United States, representing the great powers at the beginning of the 20th century.
Great powers at war
thumb|The "Big Four" at the Paris Peace Conference of 1919: David Lloyd George, Vittorio Emanuele Orlando, Georges Clemenceau and Woodrow Wilson.
Shifts of international power have most notably occurred through major conflicts.Power Transitions as the cause of war. The conclusion of the Great War and the resulting treaties of Versailles, St-Germain, Neuilly, Trianon and Sèvres witnessed the United Kingdom, France, Italy, Japan and the United States as the chief arbiters of the new world order.Globalization and Autonomy by Julie Sunday, McMaster University. In the aftermath of World War I the German Empire was defeated, the Austro-Hungarian Empire was divided into new, less powerful states and the Russian Empire fell to a revolution. During the Paris Peace Conference, the "Big Four"—France, Italy, the United Kingdom and the United States—held noticeably more power and influence on the proceedings and outcome of the treaties than Japan. The Big Four were leading architects of the Treaty of Versailles which was signed by Germany; the Treaty of St. Germain, with Austria; the Treaty of Neuilly, with Bulgaria; the Treaty of Trianon, with Hungary; and the Treaty of Sèvres, with the Ottoman Empire. During the decision-making of the Treaty of Versailles, Italy pulled out of the conference because some of its demands were not met, temporarily leaving the other three countries as the sole major architects of that treaty, referred to as the "Big Three".See the part: The attitude towards Germany of the "Big Three": "The "Big Three" were David Lloyd George of Britain, Clemenceau of France and Woodrow Wilson of America."Italian delegates return to Paris Peace conferenceParis 1919 by Margaret MacMillan has the Council of Six (Britain, France, Italy, Japan and the United States) as the main victors and remaining Great Powers.
The victorious great powers also gained an acknowledgement of their status through permanent seats at the League of Nations Council, where they acted as a type of executive body directing the Assembly of the League. However, the Council began with only four permanent members (the United Kingdom, France, Italy, and Japan) because the United States, meant to be the fifth permanent member, never joined: the US Senate voted on 19 March 1920 against the ratification of the Treaty of Versailles, thus preventing American participation in the League.
When World War II started in 1939, it divided the world into two alliances—the Allies (at first the United Kingdom and France in Europe and, since 1937, China in Asia, joined in 1941 by the Soviet Union and the United States) and the Axis powers consisting of Germany, Italy and Japan.Harrison, M (2000) The Economics of World War II: Six Great Powers in International Comparison, Cambridge University Press.Even though the book The Economics of World War II lists seven great powers at the start of 1939 (the British Empire, the Empire of Japan, France, the Kingdom of Italy, Nazi Germany, the Soviet Union and the United States), it focuses only on six of them, because France surrendered shortly after the war began. During World War II, the United States, the United Kingdom, and the Soviet Union controlled Allied policy and emerged as the "Big Three". The Republic of China and the Big Three were referred to as a "trusteeship of the powerful" and were recognized as the Allied "Big Four" in the Declaration by United Nations in 1942.Hoopes, Townsend, and Douglas Brinkley. FDR and the Creation of the U.N. New Haven: Yale University Press, 1997. ISBN 978-0-300-06930-3. These four countries were referred to as the "Four Policemen" of the Allies and considered the primary victors of World War II. The importance of France was acknowledged by its inclusion, along with the other four, in the group of countries allotted permanent seats in the United Nations Security Council.
thumb|The "Big Three" of Europe at the Yalta Conference: Winston Churchill, Franklin D. Roosevelt and Joseph Stalin.right|thumb|alt=Three men, Chiang Kai-shek, Franklin D. Roosevelt and Winston Churchill, sitting together elbow to elbow|The Allied leaders of the Asian and Pacific Theater: Generalissimo Chiang Kai-shek, Franklin D. Roosevelt, and Winston Churchill meeting at the Cairo Conference in 1943Since the end of the World Wars, the term "great power" has been joined by a number of other power classifications. Foremost among these is the concept of the superpower, used to describe those nations with overwhelming power and influence in the rest of the world. It was first coined in 1944 by William T.R. FoxThe Superpowers: The United States, Britain and the Soviet Union – Their Responsibility for Peace (1944), written by William T.R. Fox and according to him, there were three superpowers: the British Empire, the United States, and the Soviet Union. But after World War II the British Empire lost its superpower status, leaving the United States and the Soviet Union as the world's superpowers.The 1956 Suez Crisis suggested that the United Kingdom, financially weakened by two world wars, could not then pursue its foreign policy objectives on an equal footing with the new superpowers without sacrificing convertibility of its reserve currency as a central goal of policy. – from superpower cited by The term middle power has emerged for those nations which exercise a degree of global influence, but are insufficient to be decisive on international affairs. Regional powers are those whose influence is generally confined to their region of the world.
During the Cold War, the Asian power of Japan and the European powers of the United Kingdom, France, and West Germany rebuilt their economies. France and the United Kingdom maintained technologically advanced armed forces with power projection capabilities and maintain large defence budgets to this day. Yet, as the Cold War continued, authorities began to question whether France and the United Kingdom could retain their long-held statuses as great powers. China, with the world's largest population, has slowly risen to great power status, with large growth in economic and military power in the post-war period. After 1949, the Republic of China began to lose its recognition as the sole legitimate government of China by the other great powers, in favour of the People's Republic of China. Subsequently, in 1971, it lost its permanent seat at the UN Security Council to the People's Republic of China.
Great powers at peace
According to Joshua Baron – a "researcher, lecturer, and consultant on international conflict" – since the early 1960s direct military conflicts and major confrontations have "receded into the background" with regard to relations among the great powers. Baron offers several reasons why this is the case, citing the unprecedented rise of the United States and its predominant position as the key reason. Baron highlights that since World War II no other great power has been able to achieve parity or near parity with the United States, with the exception of the Soviet Union for a brief time. This position is unique among the great powers since the start of the modern era (the 16th century), in which there has traditionally been "tremendous parity among the great powers". This unique period of American primacy has been an important factor in maintaining a condition of peace between the great powers.
Another important factor is the apparent consensus among Western great powers that military force is no longer an effective tool for resolving disputes among their peers. This "subset" of great powers – France, Germany, Japan, the United Kingdom and the United States – considers maintaining a "state of peace" desirable. As evidence, Baron outlines that since the Cuban missile crisis (1962) during the Cold War, these influential Western nations have resolved all disputes among the great powers peacefully at the United Nations and in other forums of international discussion.
Referring to great power relations pre-1960, Joshua Baron highlights that, starting from around the 16th century and the rise of several European great powers, military conflicts and confrontations were the defining characteristic of diplomacy and relations between such powers. "Between 1500 and 1953, there were 64 wars in which at least one great power was opposed to another, and they averaged little more than five years in length. In approximately a 450-year time frame, on average at least two great powers were fighting one another in each and every year." Even during the period of Pax Britannica (or "the British Peace") between 1815 and 1914, war and military confrontations among the great powers were still a frequent occurrence. In fact, Joshua Baron points out that, in terms of militarized conflicts or confrontations, the UK led the way in this period with nineteen such instances: against Russia (8), France (5), Germany/Prussia (5) and Italy (1).
Aftermath of the Cold War
China, France, Russia, the United Kingdom and the United States are often referred to as great powers by academics due to "their political and economic dominance of the global arena".Yasmi Adriansyah, 'Questioning Indonesia's place in the world', Asia Times (20 September 2011): 'Though there are still debates on which countries belong to which category, there is a common understanding that the GP [great power] countries are the United States, China, United Kingdom, France and Russia. Besides their political and economic dominance of the global arena, these countries have special status in the United Nations Security Council with their permanent seats and veto rights.' These five nations are the only states to have permanent seats with veto power on the UN Security Council. They are also the only recognized "Nuclear Weapons States" under the Nuclear Non-Proliferation Treaty, and maintain military expenditures which are among the largest in the world. However, there is no unanimous agreement among authorities as to the current status of these powers or what precisely defines a great power. For example, sources have at times referred to China,Gerald Segal, Does China Matter?, Foreign Affairs (September/October 1999). France, Russia and the United Kingdomaccording to P. Shearman, M. Sussex, European Security After 9/11, Ashgate, 2004, both UK and France were global powers now reduced to middle-power status. as middle powers.
Following the dissolution of the Soviet Union, its UN Security Council permanent seat was transferred to the Russian Federation in 1991, as its successor state. The newly formed Russian Federation emerged on the level of a great power, leaving the United States as the only remaining global superpowerThe fall of the Berlin Wall and the breakup of the Soviet Union left the United States as the only remaining superpower in the 1990s. (although some support a multipolar world view).
Japan and Germany are great powers too, though due to their large, advanced economies (having the third and fourth largest economies respectively) rather than their strategic and hard power capabilities (i.e., the lack of permanent seats and veto power on the UN Security Council or strategic military reach).Great PowersAsia’s Overlooked Great Power, APR 18, 2007 Germany has been a member together with the five permanent Security Council members in the P5+1 grouping of world powers. Like China, France, Russia and the United Kingdom, Germany and Japan have also been referred to as middle powers.Er LP (2006) Japan's Human Security Role in Southeast Asia"Merkel as a world star - Germany's place in the world", The Economist (November 18, 2006), p. 27: "Germany, says Volker Perthes, director of the German Institute for International and Security Affairs, is now pretty much where it belongs: squarely at the centre. Whether it wants to be or not, the country is a Mittelmacht, or middle power."Susanna Vogt, "Germany and the G20", in Wilhelm Hofmeister, Susanna Vogt, G20: Perceptions and Perspectives for Global Governance (Singapore: Oct. 19, 2011), p. 76, citing Thomas Fues and Julia Leininger (2008): "Germany and the Heiligendamm Process", in Andrew Cooper and Agata Antkiewicz (eds.): Emerging Powers in Global Governance: Lessons from the Heiligendamm Process, Waterloo: Wilfrid Laurier University Press, p. 246: "Germany’s motivation for the initiative had been '... driven by a combination of leadership qualities and national interests of a middle power with civilian characteristics'.""Change of Great Powers", in Global Encyclopaedia of Political Geography, by M.A. Chaudhary and Guatam Chaudhary (New Delhi, 2009.), p. 101: "Germany is considered by experts to be an economic power. It is considered as a middle power in Europe by Chancellor Angela Merkel, former President Johannes Rau and leading media of the country."Susanne Gratius, Is Germany still a EU-ropean power?, FRIDE Policy Brief, No. 115 (February 2012), pp. 1, 2: "Being the world's fourth largest economic power and the second largest in terms of exports has not led to any greater effort to correct Germany's low profile in foreign policy ... For historic reasons and because of its size, Germany has played a middle-power role in Europe for over 50 years."
In his 2014 publication Great Power Peace and American Primacy, Joshua Baron considers China, France, Russia, Germany, Japan, the United Kingdom and the United States as the current great powers.
Italy has been referred to as a great power by a number of academics and commentators throughout the post-WWII era. ("The United States is the sole world's superpower. France, Italy, Germany and the United Kingdom are great powers"; during the Kosovo War (1998), the "...Contact Group consisting of six great powers (the United States, Russia, France, Britain, Germany and Italy)".) The American expert in international law Professor Milena Sterio writes: "The great powers are super-sovereign states: an exclusive club of the most powerful states economically, militarily, politically and strategically. These states include veto-wielding members of the United Nations Security Council (United States, United Kingdom, France, China, and Russia), as well as economic powerhouses such as Germany, Italy and Japan." Sterio also cites Italy's status in the G7 and the nation's influence in regional and international organisations in support of its status as a great power. Some analysts, however, assert that Italy is an "intermittent" or "the least" of the great powers (Italy: 150 years of a small great power, eurasia-rivista.org, 21 December 2010), while others believe Italy is a middle or regional power. "Operation Alba may be considered one of the most important instances in which Italy has acted as a regional power, taking the lead in executing a technically and politically coherent and determined strategy." See Federiga Bindi, Italy and the European Union (Washington, D.C.: Brookings Institution Press, 2011), p. 171. "Italy plays a prominent role in European and global military, cultural and diplomatic affairs. The country's European political, social and economic influence make it a major regional power." See Italy: Justice System and National Police Handbook, Vol. 1 (Washington, D.C.: International Business Publications, 2009), p. 9.
In addition to the contemporary great powers mentioned above, Zbigniew Brzezinski (Strategic Vision: America & the Crisis of Global Power, 2012, pp. 43–45) and Malik Mohan consider India to be a great power as well. Unlike the contemporary great powers, which have long been regarded as such, India's recognition among authorities as a great power is comparatively recent. There is, however, no collective agreement among observers as to India's status; for example, a number of academics believe that India is emerging as a great power, while some believe that India remains a middle power.Charalampos Efstathopoulosa, 'Reinterpreting India's Rise through the Middle Power Prism', Asian Journal of Political Science, Vol. 19, Issue 1 (2011), p. 75: 'India's role in the contemporary world order can be optimally asserted by the middle power concept. The concept allows for distinguishing both strengths and weakness of India's globalist agency, shifting the analytical focus beyond material-statistical calculations to theorise behavioural, normative and ideational parameters.'; Robert W. Bradnock, India's Foreign Policy since 1971 (The Royal Institute for International Affairs, London: Pinter Publishers, 1990), quoted in Leonard Stone, 'India and the Central Eurasian Space', Journal of Third World Studies, Vol. 24, No. 2, 2007, p. 183: 'The U.S. is a superpower whereas India is a middle power. A superpower could accommodate another superpower because the alternative would be equally devastating to both. But the relationship between a superpower and a middle power is of a different kind. The former does not need to accommodate the latter while the latter cannot allow itself to be a satellite of the former.'; Jan Cartwright, 'India's Regional and International Support for Democracy: Rhetoric or Reality?', Asian Survey, Vol. 49, No. 3 (May/June 2009), p. 424: 'India's democratic rhetoric has also helped it further establish its claim as being a rising "middle power." (A "middle power" is a term that is used in the field of international relations to describe a state that is not a superpower but still wields substantial influence globally. In addition to India, other "middle powers" include, for example, Australia and Canada.)'
The United Nations Security Council, the G7 (and now defunct G8), the BRICs, NATO Quint and the Contact Group have all been described as great power concerts.
Emerging powers
With continuing European integration, the European Union is increasingly being seen as a great power in its own right, with representation at the WTO and at G8 and G-20 summits. This is most notable in areas where the European Union has exclusive competence (i.e. economic affairs). It also reflects a non-traditional conception of Europe's world role as a global "civilian power", exercising collective influence in the functional spheres of trade and diplomacy, as an alternative to military dominance.Veit Bachmann and James D Sidaway, "Zivilmacht Europa: A Critical Geopolitics of the European Union as a Global Power", Transactions of the Institute of British Geographers, New Series, Vol. 34, No. 1 (Jan., 2009), pp. 94–109. The European Union is a supranational union and not a sovereign state, and has limited scope in the areas of foreign affairs and defence policy. These remain largely with the member states of the European Union, which include the three great powers of France, Germany and the United Kingdom (referred to as the "EU three").
Brazil and India are widely regarded as emerging powers with the potential to be great powers. Political scientist Stephen P. Cohen asserts that India is an emerging power, but highlights that some strategists consider India to be already a great power."India: Emerging Power", by Stephen P. Cohen, p. 60 Some academics such as Zbigniew Brzezinski and Dr David A. Robinson already regard India as a major or great power. Others suggest India may even have the potential to emerge as a superpower.
Permanent membership of the UN Security Council is widely regarded as being a central tenet of great power status in the modern world; Brazil, Germany, India and Japan form the G4 nations which support one another (and have varying degrees of support from the existing permanent members) in becoming permanent members. The G4 is opposed by the Italian-led Uniting for Consensus group. There are however few signs that reform of the Security Council will happen in the near future.
Hierarchy of great powers
Acclaimed political scientist, geo-strategist, and former United States National Security Advisor Zbigniew Brzezinski appraised the current standing of the great powers in his 2012 publication Strategic Vision: America and the Crisis of Global Power. In relation to great powers, he makes the following points:
According to the 2014 report of the Hague Centre for Strategic Studies:
Great powers by date
Timelines of the great powers since the end of the Napoleonic Wars in the early 19th century:
See also
Big Four (Western Europe)
Notes
References
Further reading
Category:States by power status
Category:International relations
Beer
thumb|Schlenkerla Rauchbier being poured from a cask
thumb|François Jaques: Peasants Enjoying Beer at Pub in Fribourg (Switz.) – (1923)
Beer is the world's oldest and most widely consumed alcoholic drink; it is the third most popular drink overall, after water and tea. The production of beer is called brewing, which involves the fermentation of starches, mainly derived from cereal grains—most commonly malted barley, although wheat, maize (corn), and rice are widely used.Barth, Roger. The Chemistry of Beer: The Science in the Suds, Wiley 2013: ISBN 978-1-118-67497-0. Most beer is flavoured with hops, which add bitterness and act as a natural preservative, though other flavourings such as herbs or fruit may occasionally be included. The fermentation process produces natural carbonation, although this is often removed during processing and replaced with forced carbonation. Some of humanity's earliest known writings refer to the production and distribution of beer: the Code of Hammurabi included laws regulating beer and beer parlours, and "The Hymn to Ninkasi", a prayer to the Mesopotamian goddess of beer, served as both a prayer and a method of remembering the recipe for beer in a culture with few literate people.
Beer is sold in bottles and cans; it may also be available on draught, particularly in pubs and bars. The brewing industry is a global business, consisting of several dominant multinational companies and many thousands of smaller producers ranging from brewpubs to regional breweries. The strength of beer is usually around 4% to 6% alcohol by volume (abv), although it may vary between 0.5% and 20%, with some breweries creating examples of 40% abv and above. Beer forms part of the culture of beer-drinking nations and is associated with social traditions such as beer festivals, as well as a rich pub culture involving activities like pub crawling, and pub games such as bar billiards.
History
thumb|Egyptian wooden model of beer making in ancient Egypt, Rosicrucian Egyptian Museum, San Jose, California
Beer is one of the world's oldest prepared beverages, possibly dating back to the early Neolithic or 9500 BC, when cereal was first farmed, and is recorded in the written history of ancient Iraq and ancient Egypt.Michael M. Homan, Beer and Its Drinkers: An Ancient Near Eastern Love Story, Near Eastern Archaeology, Vol. 67, No. 2 (Jun. 2004), pp. 84–95. Archaeologists speculate that beer was instrumental in the formation of civilisations. Approximately 5000 years ago, workers in the city of Uruk (modern-day Iraq) were paid by their employers in beer.George, Alison (June 22, 2016). "The world's oldest paycheck was cashed in beer". New Scientist. During the building of the Great Pyramids in Giza, Egypt, each worker got a daily ration of four to five litres of beer, which served as both nutrition and refreshment and was crucial to the pyramids' construction.Tucker, Abigail (August 2011). "The Beer Archaeologist". Smithsonian.com.
The earliest known chemical evidence of barley beer dates to circa 3500–3100 BC, from the site of Godin Tepe in the Zagros Mountains of western Iran.McGovern, Patrick, Uncorking the Past, 2009, ISBN 978-0-520-25379-7. pp. 66–71. Some of the earliest Sumerian writings contain references to beer; examples include a prayer to the goddess Ninkasi, known as "The Hymn to Ninkasi", which served as both a prayer and a method of remembering the recipe for beer in a culture with few literate people, and the ancient advice to Gilgamesh from the ale-wife Siduri, recorded in the Epic of Gilgamesh ("Fill your belly. Day and night make merry"), which may, at least in part, have referred to the consumption of beer.Hartman, L. F. and Oppenheim, A. L., (1950) On Beer and Brewing Techniques in Ancient Mesopotamia. Supplement to the Journal of the American Oriental Society, 10. Retrieved 20 September 2013. The Ebla tablets, discovered in 1974 in Ebla, Syria, show that beer was produced in the city in 2500 BC.Dumper, Stanley. 2007, p.141. A fermented beverage using rice and fruit was made in China around 7000 BC. Unlike sake, mould was not used to saccharify the rice (amylolytic fermentation); the rice was probably prepared for fermentation by mastication or malting.
Almost any substance containing sugar can naturally undergo alcoholic fermentation. It is likely that many cultures, on observing that a sweet liquid could be obtained from a source of starch, independently invented beer. Bread and beer increased prosperity to a level that allowed time for development of other technologies and contributed to the building of civilisations.
Beer was spread through Europe by Germanic and Celtic tribes as far back as 3000 BC, and it was mainly brewed on a domestic scale. The product that the early Europeans drank might not be recognised as beer by most people today. Alongside the basic starch source, the early European beers might contain fruits, honey, numerous types of plants, spices and other substances such as narcotic herbs.Max Nelson, The Barbarian's Beverage: A History of Beer in Ancient Europe pp2, Routledge (2005), ISBN 0-415-31121-7. What they did not contain was hops, as that was a later addition, first mentioned in Europe around 822 by a Carolingian abbot (Google Books: Richard W. Unger, Beer in the Middle Ages and the Renaissance pp57, University of Pennsylvania Press (2004), ISBN 0-8122-3795-1) and again in 1067 by Abbess Hildegard of Bingen.Max Nelson, The Barbarian's Beverage: A History of Beer in Ancient Europe pp110, Routledge (2005), ISBN 0-415-31121-7.
In 1516, William IV, Duke of Bavaria, adopted the Reinheitsgebot (purity law), perhaps the oldest food-quality regulation still in use in the 21st century, according to which the only allowed ingredients of beer are water, hops and barley-malt."492 Years of Good Beer: Germans Toast the Anniversary of Their Beer Purity Law". Der Spiegel 23 April 2008. Beer produced before the Industrial Revolution continued to be made and sold on a domestic scale, although by the 7th century AD, beer was also being produced and sold by European monasteries. During the Industrial Revolution, the production of beer moved from artisanal manufacture to industrial manufacture, and domestic manufacture ceased to be significant by the end of the 19th century. The development of hydrometers and thermometers changed brewing by allowing the brewer more control of the process and greater knowledge of the results.
Today, the brewing industry is a global business, consisting of several dominant multinational companies and many thousands of smaller producers ranging from brewpubs to regional breweries. As of 2006, more than 133 billion litres (35 billion gallons) of beer, equivalent to a cube 510 metres on a side, are sold per year, producing total global revenues of $294.5 billion (£147.7 billion).
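As an illustrative aside (not drawn from the cited sources), the volume comparison above can be checked with a few lines of arithmetic; the Python sketch below uses only the 133-billion-litre figure quoted in this section.

# Sanity check: is 133 billion litres roughly a cube 510 metres on a side?
# 1 cubic metre = 1000 litres.
annual_volume_litres = 133e9             # figure quoted above
volume_m3 = annual_volume_litres / 1000  # convert litres to cubic metres
cube_side_m = volume_m3 ** (1 / 3)       # side length of an equivalent cube
print(f"cube side ~ {cube_side_m:.0f} m")  # prints roughly 510 m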
In 2010, China's beer consumption hit 450 million hectolitres (45 billion litres), or nearly twice that of the United States, but only 5 per cent of the beer sold was premium draught beer, compared with 50 per cent in France and Germany.
Brewing
The process of making beer is known as brewing. A dedicated building for the making of beer is called a brewery, though beer can be made in the home and has been for much of its history. A company that makes beer is called either a brewery or a brewing company. Beer made on a domestic scale for non-commercial reasons is classified as homebrewing regardless of where it is made, though most homebrewed beer is made in the home. Brewing beer is subject to legislation and taxation in developed countries, which from the late 19th century largely restricted brewing to a commercial operation only. However, the UK government relaxed legislation in 1963, followed by Australia in 1972 and the US in 1978, allowing homebrewing to become a popular hobby.
The purpose of brewing is to convert the starch source into a sugary liquid called wort and to convert the wort into the alcoholic beverage known as beer in a fermentation process effected by yeast.
The first step, where the wort is prepared by mixing the starch source (normally malted barley) with hot water, is known as "mashing". Hot water (known as "liquor" in brewing terms) is mixed with crushed malt or malts (known as "grist") in a mash tun. The mashing process takes around 1 to 2 hours,ABGbrew.com Steve Parkes, British Brewing, American Brewers Guild. during which the starches are converted to sugars, and then the sweet wort is drained off the grains. The grains are now washed in a process known as "sparging". This washing allows the brewer to gather as much of the fermentable liquid from the grains as possible. The process of filtering the spent grain from the wort and sparge water is called wort separation. The traditional process for wort separation is lautering, in which the grain bed itself serves as the filter medium. Some modern breweries prefer the use of filter frames which allow a more finely ground grist.Goldhammer, Ted (2008), The Brewer's Handbook, 2nd ed., Apex, ISBN 978-0-9675212-3-7 pp. 181 ff.
Most modern breweries use a continuous sparge, collecting the original wort and the sparge water together. However, it is possible to collect a second or even third wash with the not quite spent grains as separate batches. Each run would produce a weaker wort and thus a weaker beer. This process is known as second (and third) runnings. Brewing with several runnings is called parti gyle brewing.Brewingtechniques.com, Randy Mosher, "Parti-Gyle Brewing", Brewing Techniques, March/April 1994
The sweet wort collected from sparging is put into a kettle, or "copper" (so called because these vessels were traditionally made from copper), and boiled, usually for about one hour. During boiling, water in the wort evaporates, but the sugars and other components of the wort remain; this allows more efficient use of the starch sources in the beer. Boiling also destroys any remaining enzymes left over from the mashing stage. Hops are added during boiling as a source of bitterness, flavour and aroma. Hops may be added at more than one point during the boil. The longer the hops are boiled, the more bitterness they contribute, but the less hop flavour and aroma remains in the beer.Books.google.co.uk, Michael Lewis, Tom W. Young, Brewing, page 275, Springer (2002), ISBN 0-306-47274-0
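The trade-off between boil time and bitterness described above can be made concrete with a rough calculation. The sketch below uses the Tinseth utilisation approximation as commonly published in homebrewing references (it is not one of the sources cited here), and the hop weight, alpha-acid percentage, batch size and wort gravity are hypothetical values chosen only for illustration.

import math

def tinseth_ibu(hop_grams, alpha_acid_pct, boil_minutes, batch_litres, boil_gravity=1.050):
    # Utilisation rises with boil time and falls with denser (stronger) wort.
    bigness = 1.65 * 0.000125 ** (boil_gravity - 1.0)
    boil_time_factor = (1.0 - math.exp(-0.04 * boil_minutes)) / 4.15
    utilisation = bigness * boil_time_factor
    # Milligrams of alpha acids added per litre of wort.
    mg_per_litre = (alpha_acid_pct / 100.0) * hop_grams * 1000.0 / batch_litres
    return utilisation * mg_per_litre

# Hypothetical addition: 30 g of 5% alpha-acid hops in a 20-litre batch.
for minutes in (5, 15, 60):
    print(minutes, "min boil ->", round(tinseth_ibu(30, 5.0, minutes, 20), 1), "IBU")
# Longer boils give markedly higher bitterness, as the text describes.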
After boiling, the hopped wort is cooled, ready for the yeast. In some breweries, the hopped wort may pass through a hopback, which is a small vat filled with hops, to add aromatic hop flavouring and to act as a filter; but usually the hopped wort is simply cooled for the fermenter, where the yeast is added. During fermentation, the wort becomes beer in a process which takes from a week to months depending on the type of yeast and the strength of the beer. As well as producing ethanol, fermentation allows the fine particulate matter suspended in the wort to settle. Once fermentation is complete, the yeast also settles, leaving the beer clear.
Fermentation is sometimes carried out in two stages, primary and secondary. Once most of the alcohol has been produced during primary fermentation, the beer is transferred to a new vessel and allowed a period of secondary fermentation. Secondary fermentation is used when the beer requires long storage before packaging or greater clarity.Google Books Michael Lewis, Tom W. Young, Brewing pp306, Springer (2002), ISBN 0-306-47274-0. Retrieved 29 September 2008. When the beer has fermented, it is packaged either into casks for cask ale or kegs, aluminium cans, or bottles for other sorts of beer.Harold M. Broderick, Alvin Babb, Beer Packaging: A Manual for the Brewing and Beverage Industries, Master Brewers Association of the Americas (1982)
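For readers who want the sequence at a glance, the stages described in this section can be condensed into a simple ordered structure; the summaries below are paraphrased from the text above and the representation itself is only an illustrative sketch.

# Condensed summary of the brewing stages described above.
BREWING_STAGES = [
    ("mashing", "mix crushed malt (grist) with hot water (liquor) so starches convert to sugars"),
    ("lautering/sparging", "separate the sweet wort from the spent grain and rinse the grain bed"),
    ("boiling", "boil the wort in a kettle ('copper'), adding hops for bitterness, flavour and aroma"),
    ("cooling", "cool the hopped wort, optionally via a hopback, ready for the yeast"),
    ("fermentation", "yeast converts sugars to ethanol and carbon dioxide; primary and sometimes secondary stages"),
    ("packaging", "fill casks, kegs, cans or bottles"),
]

for name, summary in BREWING_STAGES:
    print(f"{name:18s} - {summary}")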
Ingredients
thumb|Malted barley before roasting
The basic ingredients of beer are water; a starch source, such as malted barley, able to be saccharified (converted to sugars) then fermented (converted into ethanol and carbon dioxide); a brewer's yeast to produce the fermentation; and a flavouring such as hops.Alabev.com The Ingredients of Beer. Retrieved 29 September 2008. A mixture of starch sources may be used, with a secondary starch source, such as maize (corn), rice or sugar, often being termed an adjunct, especially when used as a lower-cost substitute for malted barley.beer-brewing.com Beer-brewing.com Ted Goldammer, The Brewers Handbook, Chapter 6 – Beer Adjuncts, Apex Pub (1 January 2000), ISBN 0-9675212-0-3. Retrieved 29 September 2008 Less widely used starch sources include millet, sorghum and cassava root in Africa, and potato in Brazil, and agave in Mexico, among others.BeerHunter.com Michael Jackson, A good beer is a thorny problem down Mexico way, What's Brewing, 1 October 1997. Retrieved 29 September 2008. The amount of each starch source in a beer recipe is collectively called the grain bill.
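As an illustration of the term "grain bill" (not a recipe from any cited source), a beer's starch sources can be written as shares of the total grist; the malts and percentages below are hypothetical.

# Hypothetical grain bill: each starch source's share of the total grist.
grain_bill = {
    "pale barley malt": 0.80,
    "wheat malt": 0.15,
    "maize (adjunct)": 0.05,
}
assert abs(sum(grain_bill.values()) - 1.0) < 1e-9  # shares should total 100%

total_grist_kg = 5.0  # hypothetical batch size
for grain, share in grain_bill.items():
    print(f"{grain:18s} {share:4.0%} -> {share * total_grist_kg:.2f} kg")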
Water
Beer is composed mostly of water. Regions have water with different mineral components; as a result, different regions were originally better suited to making certain types of beer, thus giving them a regional character. For example, Dublin has hard water well-suited to making stout, such as Guinness; while the Plzeň Region has soft water well-suited to making Pilsner (pale lager), such as Pilsner Urquell. The waters of Burton in England contain gypsum, which benefits making pale ale to such a degree that brewers of pale ales will add gypsum to the local water in a process known as Burtonisation. Michael Jackson, BeerHunter, 19 October 1991, "Brewing a good glass of water". Retrieved 13 September 2008.
Starch source
thumb|Malted barley – a primary mash ingredient
The starch source in a beer provides the fermentable material and is a key determinant of the strength and flavour of the beer. The most common starch source used in beer is malted grain. Grain is malted by soaking it in water, allowing it to begin germination, and then drying the partially germinated grain in a kiln. Malting grain produces enzymes that convert starches in the grain into fermentable sugars.Wikisource 1911 Encyclopædia Britannica/Brewing/Chemistry. Retrieved 29 September 2008. Different roasting times and temperatures are used to produce different colours of malt from the same grain. Darker malts will produce darker beers.Farm-direct.co.uk Oz, Barley Malt, 6 February 2002. Retrieved 29 September 2008.
Nearly all beer includes barley malt as the majority of the starch. This is because its fibrous hull remains attached to the grain during threshing. After malting, barley is milled, which finally removes the hull, breaking it into large pieces. These pieces remain with the grain during the mash, and act as a filter bed during lautering, when sweet wort is separated from insoluble grain material. Other malted and unmalted grains (including wheat, rice, oats, and rye, and less frequently, corn and sorghum) may be used. Some brewers have produced gluten-free beer, made with sorghum with no barley malt, for those who cannot consume gluten-containing grains like wheat, barley, and rye.
Hops
thumb|right|Hop cone in a Hallertau, Germany, hop yard
Flavouring beer is the sole major commercial use of hops.A. H. Burgess, Hops: Botany, Cultivation and Utilization, Leonard Hill (1964), ISBN 0-471-12350-1 The flower of the hop vine is used as a flavouring and preservative agent in nearly all beer made today. The flowers themselves are often called "hops".
The first historical mention of the use of hops in beer was from 822 AD in monastery rules written by Adalhard the Elder, also known as Adalard of Corbie, though the date normally given for widespread cultivation of hops for use in beer is the thirteenth century. Before the thirteenth century, and until the sixteenth century, during which hops took over as the dominant flavouring, beer was flavoured with other plants; for instance, grains of paradise or alehoof. Combinations of various aromatic herbs, berries, and even ingredients like wormwood would be combined into a mixture known as gruit and used as hops are now used.Books.google.co.uk Richard W. Unger, Beer in the Middle Ages and the Renaissance, University of Pennsylvania Press (2004), ISBN 0-8122-3795-1. Retrieved 14 September 2008. Some beers today, such as Fraoch' by the Scottish Heather Ales company and Cervoise Lancelot by the French Brasserie-Lancelot company, use plants other than hops for flavouring.
Hops contribute several characteristics that brewers desire in beer. They provide a bitterness that balances the sweetness of the malt; the bitterness of beers is measured on the International Bitterness Units scale. They also contribute floral, citrus, and herbal aromas and flavours to beer. Hops have an antibiotic effect that favours the activity of brewer's yeast over less desirable microorganisms, and they aid in "head retention", the length of time that a foamy head created by carbonation will last. The acidity of hops is a preservative.PDQ Guides, Hops: Clever Use For a Useless Plan
Yeast
Yeast is the microorganism that is responsible for fermentation in beer. Yeast metabolises the sugars extracted from grains, which produces alcohol and carbon dioxide, and thereby turns wort into beer. In addition to fermenting the beer, yeast influences the character and flavour.Ostergaard, S., Olsson, L., Nielsen, J., Metabolic Engineering of Saccharomyces cerevisiae, Microbiol. Mol. Biol. Rev. 2000 64: 34–50
The dominant types of yeast used to make beer are the top-fermenting Saccharomyces cerevisiae and bottom-fermenting Saccharomyces pastorianus.Google Books Paul R. Dittmer, J. Desmond, Principles of Food, Beverage, and Labor Cost Controls, John Wiley and Sons (2005), ISBN 0-471-42992-9. Brettanomyces ferments lambics,Google Books Ian Spencer Hornsey, Brewing pp 221–222, Royal Society of Chemistry (1999), ISBN 0-85404-568-6 and Torulaspora delbrueckii ferments Bavarian weissbier.Web.mst.edu David Horwitz, Torulaspora delbrueckii. Retrieved 30 September 2008.
Before the role of yeast in fermentation was understood, fermentation involved wild or airborne yeasts. A few styles such as lambics rely on this method today, but most modern fermentation adds pure yeast cultures.Google Books Y. H. Hui, George G. Khachatourians, Food Biotechnology pp 847–848, Wiley-IEEE (1994), ISBN 0-471-18570-1
Clarifying agent
Some brewers add one or more clarifying agents to beer, which typically precipitate (collect as a solid) out of the beer along with protein solids and are found only in trace amounts in the finished product. This process makes the beer appear bright and clean, rather than the cloudy appearance of ethnic and older styles of beer such as wheat beers.
Examples of clarifying agents include isinglass, obtained from swimbladders of fish; Irish moss, a seaweed; kappa carrageenan, from the seaweed Kappaphycus cottonii; Polyclar (artificial); and gelatin.EFSA.europa.eu Opinion of the Scientific Panel on Dietetic Products, Nutrition and Allergies, 23 August 2007. Retrieved 29 September 2008. If a beer is marked "suitable for Vegans", it was clarified either with seaweed or with artificial agents.Food.gov.uk Draft Guidance on the Use of the Terms 'Vegetarian' and 'Vegan' in Food Labelling: Consultation Responses pp71, 5 October 2005. Retrieved 29 September 2008.
Brewing industry
thumb|350px|Annual beer consumption per capita by country
thumb|300px|Beer Exports by Country (2014) from http://atlas.cid.harvard.edu/explore/tree_map/export/show/all/2203/2014/ Harvard Atlas of Economic Complexity
The brewing industry is a global business, consisting of several dominant multinational companies and many thousands of smaller producers ranging from brewpubs to regional breweries. More than 133 billion litres (35 billion gallons) are sold per year—producing total global revenues of $294.5 billion (£147.7 billion) in 2006.
The history of breweries in the 21st century has been one of larger breweries absorbing smaller breweries in order to ensure economies of scale. In 2002, South African Breweries bought the North American Miller Brewing Company to found SABMiller, becoming the second largest brewery after the North American Anheuser-Busch. In 2004, the Belgian Interbrew was the third largest brewery by volume and the Brazilian AmBev was the fifth largest; they merged into InBev, becoming the largest brewery. In 2007, SABMiller surpassed InBev and Anheuser-Busch when it acquired Royal Grolsch, brewer of the Dutch premium beer brand Grolsch. In 2008, when InBev (the second largest) bought Anheuser-Busch (the third largest), the new Anheuser-Busch InBev company again became the largest brewer in the world. AB InBev remains the largest brewery, with SABMiller second and Heineken International third.
A microbrewery, or craft brewery, produces a limited amount of beer. The maximum amount of beer a brewery can produce and still be classed as a microbrewery varies by region and by authority, though is usually around 15,000 barrels (1.8 megalitres, 396 thousand imperial gallons or 475 thousand US gallons) a year. A brewpub is a type of microbrewery that incorporates a pub or other eating establishment. The highest density of breweries in the world, most of them microbreweries, exists in the German Region of Franconia, especially in the district of Upper Franconia, which has about 200 breweries.: Bier und Franken at Bierfranken.de (german)Bierland-Oberfranken (German) The Benedictine Weihenstephan Brewery in Bavaria, Germany, can trace its roots to the year 768, as a document from that year refers to a hop garden in the area paying a tithe to the monastery. The brewery was licensed by the City of Freising in 1040, and therefore is the oldest working brewery in the world.Giebel, Wieland, ed (1992). The New Germany. Singapore: Höfer Press Pte. Ltd.
Brewing at home is subject to regulation and prohibition in many countries. Restrictions on homebrewing were lifted in the UK in 1963, Australia followed suit in 1972, and the US in 1978, though individual states were allowed to pass their own laws limiting production.Papazian The Complete Joy of Homebrewing (3rd Edition), ISBN 0-06-053105-3
Varieties
thumb|Cask ale hand pumps with pump clips detailing the beers and their breweries
While there are many types of beer brewed, the basics of brewing beer are shared across national and cultural boundaries.News.bbc.co.uk, Will Smale, BBC, 20 April 2006, Is today's beer all image over reality?. Retrieved 12 September 2008. The traditional European brewing regions—Germany, Belgium, England and the Czech Republic—have local varieties of beer.Sixpack, Joe (pseudonym for Don Russell), What the Hell am I Drinking, 2011. ISBN 978-1-4637-8981-7.
English writer Michael Jackson, in his 1977 book The World Guide To Beer, categorised beers from around the world in local style groups suggested by local customs and names. Fred Eckhardt furthered Jackson's work in The Essentials of Beer Style in 1989.
Top-fermented beers are most commonly produced with Saccharomyces cerevisiae, a top-fermenting yeast which clumps and rises to the surface, typically between 15 and 24 °C (60 and 75 °F). At these temperatures, yeast produces significant amounts of esters and other secondary flavour and aroma products, and the result is often a beer with slightly "fruity" compounds resembling apple, pear, pineapple, banana, plum, or prune, among others.Google Books Lalli Nykänen, Heikki Suomalainen, Aroma of Beer, Wine and Distilled Alcoholic Beverages p. 13, Springer (1983), ISBN 90-277-1553-X.
After the introduction of hops into England from Flanders in the 15th century, "ale" referred to an unhopped fermented beverage, "beer" being used to describe a brew with an infusion of hops.Google books F. G. Priest, Graham G. Stewart, Handbook of Brewing p. 2, CRC Press (2006), ISBN 0-8247-2657-X.
The word ale comes from Old English ealu (plural ealoþ), in turn from Proto-Germanic *alu (plural *aluþ), ultimately from the Proto-Indo-European base *h₂elut-, which holds connotations of "sorcery, magic, possession, intoxication". The word beer comes from Old English bēor, from Proto-Germanic *beuzą, probably from Proto-Indo-European *bʰeusóm, originally "brewer's yeast, beer dregs", although other theories have been provided connecting the word with Old English bēow, "barley", or Latin bibere, "to drink". On the currency of two words for the same thing in the Germanic languages, the 12th-century Old Icelandic poem Alvíssmál says, "Ale it is called among men, but among the gods, beer."Öl heitir með mönnum, en með Ásum bjór ("bēor" main entry and supplement, Bosworth & Toller).
Real ale is the term coined by the Campaign for Real Ale (CAMRA) in 1973 for "beer brewed from traditional ingredients, matured by secondary fermentation in the container from which it is dispensed, and served without the use of extraneous carbon dioxide". It is applied to bottle conditioned and cask conditioned beers.
Pale ale
Pale ale is a beer which uses a top-fermenting yeast and predominantly pale malt. It is one of the world's major beer styles.
Stout
Stout and porter are dark beers made using roasted malts or roast barley, and typically brewed with slow fermenting yeast. There are a number of variations including Baltic porter, dry stout, and Imperial stout. The name "porter" was first used in 1721 to describe a dark brown beer popular with the street and river porters of London. This same beer later also became known as stout, though the word stout had been used as early as 1677.Amazon Online Reader : Stout (Classic Beer Style Series, 10). The history and development of stout and porter are intertwined.
Mild
Mild ale has a predominantly malty palate. It is usually dark coloured with an abv of 3% to 3.6%, although there are lighter hued milds as well as stronger examples reaching 6% abv and higher.
Wheat
Wheat beer is brewed with a large proportion of wheat although it often also contains a significant proportion of malted barley. Wheat beers are usually top-fermented (in Germany they have to be by law).Eric Warner, German Wheat Beer. Boulder, CO: Brewers Publications, 1992. ISBN 978-0-937381-34-2. The flavour of wheat beers varies considerably, depending upon the specific style.
Lambic
thumb|upright|Kriek, a variety of beer brewed with cherries
Lambic, a beer of Belgium, is naturally fermented using wild yeasts, rather than cultivated. Many of these are not strains of brewer's yeast (Saccharomyces cerevisiae) and may have significant differences in aroma and sourness. Yeast varieties such as Brettanomyces bruxellensis and Brettanomyces lambicus are common in lambics. In addition, other organisms such as Lactobacillus bacteria produce acids which contribute to the sourness.Webb, Tim; Pollard, Chris; and Pattyn, Joris; Lambicland: Lambikland, Rev Ed. (Cogan and Mater Ltd, 2004), ISBN 0-9547789-0-1.
Lager
Lager is cool fermented beer. Pale lagers are the most commonly consumed beers in the world. The name "lager" comes from the German "lagern" for "to store", as brewers around Bavaria stored beer in cool cellars and caves during the warm summer months. These brewers noticed that the beers continued to ferment, and to also clear of sediment, when stored in cool conditions.Beerhunter.com Michael Jackson, BeerHunter, "The birth of lager", 1 March 1996. Retrieved 16 September 2008.
Lager yeast is a cool bottom-fermenting yeast (Saccharomyces pastorianus); it typically undergoes primary fermentation at cool temperatures (the fermentation phase) and is then given a long secondary fermentation at colder temperatures (the lagering phase). During the secondary stage, the lager clears and mellows. The cooler conditions also inhibit the natural production of esters and other byproducts, resulting in a "cleaner"-tasting beer.Eurekalert.org Gavin Sherlock, PhD, EurekAlert, Brewing better beer: Scientists determine the genomic origins of lager yeasts, 10 September 2008. Retrieved 16 September 2008.
Modern methods of producing lager were pioneered by Gabriel Sedlmayr the Younger, who perfected dark brown lagers at the Spaten Brewery in Bavaria, and Anton Dreher, who began brewing a lager (now known as Vienna lager), probably of amber-red colour, in Vienna in 1840–1841. With improved modern yeast strains, most lager breweries use only short periods of cold storage, typically 1–3 weeks.
Measurement
Beer is measured and assessed by bitterness, by strength and by colour. The perceived bitterness is measured by the International Bitterness Units scale (IBU), defined in co-operation between the American Society of Brewing Chemists and the European Brewery Convention. The international scale was a development of the European Bitterness Units scale, often abbreviated as EBU, and the bitterness values should be identical.
Colour
thumb|upright|Paulaner dunkel – a dark lager
Beer colour is determined by the malt.Google Books Fritz Ullmann, Ullmann's Encyclopedia of Industrial Chemistry Vol A-11 pp455, VCH (1985), ISBN 3-527-20103-3 The most common colour is a pale amber produced from using pale malts. Pale lager and pale ale are terms used for beers made from malt dried with the fuel coke. Coke was first used for roasting malt in 1642, but it was not until around 1703 that the term pale ale was used.British Bitter "A beer style or a way of life?", RateBeer (January 2006). Retrieved 30 September 2008.Martyn Cornell, Beer: The Story of the Pint, Headline (2004), ISBN 0-7553-1165-5
In terms of sales volume, most of today's beer is based on the pale lager brewed in 1842 in the town of Pilsen in the present-day Czech Republic.BeerHunter Michael Jackson, "A Czech-style classic from Belgium", Beer Hunter Online (7 September 1999). Retrieved 20 September 2008. The modern pale lager is light in colour with a noticeable carbonation (fizzy bubbles) and a typical alcohol by volume content of around 5%. The Pilsner Urquell, Bitburger, and Heineken brands of beer are typical examples of pale lager, as are the American brands Budweiser, Coors, and Miller.
Dark beers are usually brewed from a pale malt or lager malt base with a small proportion of darker malt added to achieve the desired shade. Other colourants—such as caramel—are also widely used to darken beers. Very dark beers, such as stout, use dark or patent malts that have been roasted longer. Some have roasted unmalted barley.Google Books Costas Katsigris, Chris Thomas, The Bar and Beverage Book pp320, John Wiley and Sons (2006), ISBN 0-471-64799-3Google Books J. Scott Smith, Y. H. Hui, Food Processing: Principles and Applications pp228, Blackwell Publishing (2004), ISBN 0-8138-1942-3
Strength
Beer ranges from less than 3% alcohol by volume (abv) to around 14% abv, though this strength can be increased to around 20% by re-pitching with champagne yeast, and to 55% abv by the freeze-distilling process. The alcohol content of beer varies by local practice or beer style. The pale lagers that most consumers are familiar with fall in the range of 4–6%, with a typical abv of 5%. The customary strength of British ales is quite low, with many session beers being around 4% abv. Some beers, such as table beer are of such low alcohol content (1%–4%) that they are served instead of soft drinks in some schools.
The alcohol in beer comes primarily from the metabolism of sugars that are produced during fermentation. The quantity of fermentable sugars in the wort and the variety of yeast used to ferment the wort are the primary factors that determine the amount of alcohol in the final beer. Additional fermentable sugars are sometimes added to increase alcohol content, and enzymes are often added to the wort for certain styles of beer (primarily "light" beers) to convert more complex carbohydrates (starches) to fermentable sugars. Alcohol is a by-product of yeast metabolism and is toxic to the yeast; typical brewing yeast cannot survive at alcohol concentrations above 12% by volume. Low temperatures and too little fermentation time decrease the effectiveness of yeasts and consequently decrease the alcohol content.
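In homebrewing practice, the link between fermentable sugars and final strength is often estimated by comparing the wort's specific gravity before and after fermentation. The sketch below uses the common rule of thumb ABV ≈ (OG - FG) × 131.25, which is not taken from the sources cited in this article, and the gravity readings are hypothetical.

def estimate_abv(original_gravity, final_gravity):
    # Common homebrewing approximation: the drop in specific gravity during
    # fermentation is roughly proportional to the alcohol produced.
    return (original_gravity - final_gravity) * 131.25

og, fg = 1.048, 1.010  # hypothetical readings for a typical pale lager
print(f"OG {og} -> FG {fg}: ~{estimate_abv(og, fg):.1f}% abv")
# ~5.0% abv, within the 4-6% range described above for familiar pale lagers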
Weakest beer
The weakest beers are dealcoholized beers (also called "near beer"), which typically have less than 0.05% alcohol, and light beers, which usually have 4% alcohol.
Strongest beer
The strength of beers has climbed during the later years of the 20th century. Vetter 33, a 10.5% abv (33 degrees Plato, hence Vetter "33") doppelbock, was listed in the 1994 Guinness Book of World Records as the strongest beer at that time, though Samichlaus, by the Swiss brewer Hürlimann, had also been listed by the Guinness Book of World Records as the strongest at 14% abv. Since then, some brewers have used champagne yeasts to increase the alcohol content of their beers. Samuel Adams reached 20% abv with Millennium, and then surpassed that amount to 25.6% abv with Utopias. The strongest beer brewed in Britain was Baz's Super Brew by Parish Brewery, a 23% abv beer. In September 2011, the Scottish brewery BrewDog produced Ghost Deer, which, at 28%, they claim to be the world's strongest beer produced by fermentation alone.
The product claimed to be the strongest beer made is Schorschbräu's 2011 Schorschbock 57, at 57.5% abv. It was preceded by The End of History, a 55% Belgian ale made by BrewDog in 2010. The same company had previously made Sink The Bismarck!, a 41% abv IPA, and Tactical Nuclear Penguin, a 32% abv Imperial stout. Each of these beers is made using the eisbock method of fractional freezing, in which a strong ale is partially frozen and the ice is repeatedly removed until the desired strength is reached, a process that may class the product as a spirit rather than beer. The German brewery Schorschbräu's Schorschbock, a 31% abv eisbock, and Hair of the Dog's Dave, a 29% abv barley wine made in 1994, used the same fractional freezing method. A 60% abv blend of beer with whiskey was jokingly claimed as the strongest beer by a Dutch brewery in July 2010.
Serving
Draught
thumb|A selection of cask beers
Draught beer from a pressurised keg using a lever-style dispenser and a spout is the most common method of dispensing in bars around the world. A metal keg is pressurised with carbon dioxide (CO2) gas which drives the beer to the dispensing tap or faucet. Some beers may be served with a nitrogen/carbon dioxide mixture. Nitrogen produces fine bubbles, resulting in a dense head and a creamy mouthfeel. Some types of beer can also be found in smaller, disposable kegs called beer balls. In traditional pubs, the pull levers for major beer brands may include the beer's logo and trademark.
In the 1980s, Guinness introduced the beer widget, a nitrogen-pressurised ball inside a can which creates a dense, tight head, similar to beer served from a nitrogen system. The words draft and draught can be used as marketing terms to describe canned or bottled beers containing a beer widget, or which are cold-filtered rather than pasteurised.
Cask-conditioned ales (or cask ales) are unfiltered and unpasteurised beers. These beers are termed "real ale" by the CAMRA organisation. Typically, when a cask arrives in a pub, it is placed horizontally on a frame called a "stillage", which is designed to hold it steady and at the right angle, and then allowed to cool to cellar temperature before being tapped and vented—a tap is driven through a (usually rubber) bung at the bottom of one end, and a hard spile or other implement is used to open a hole in the side of the cask, which is now uppermost. The act of stillaging and then venting a beer in this manner typically disturbs all the sediment, so it must be left for a suitable period to "drop" (clear) again, as well as to fully condition — this period can take anywhere from several hours to several days. At this point the beer is ready to sell, either being pulled through a beer line with a hand pump, or simply being "gravity-fed" directly into the glass.
Draught beer's environmental impact can be 68% lower than bottled beer due to packaging differences. A life cycle study of one beer brand, including grain production, brewing, bottling, distribution and waste management, shows that the CO2 emissions from a 6-pack of micro-brew beer is about 3 kilograms (6.6 pounds). The loss of natural habitat potential from the 6-pack of micro-brew beer is estimated to be 2.5 square metres (26 square feet). Downstream emissions from distribution, retail, storage and disposal of waste can be over 45% of a bottled micro-brew beer's CO2 emissions. Where legal, the use of a refillable jug, reusable bottle or other reusable containers to transport draught beer from a store or a bar, rather than buying pre-bottled beer, can reduce the environmental impact of beer consumption.
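Working through the figures quoted above gives a rough per-bottle picture; this is simple arithmetic on the numbers in this section, with the six-bottle pack size as the only added assumption.

# Figures quoted above for one brand's life-cycle study.
six_pack_co2_kg = 3.0      # CO2 emissions per 6-pack of micro-brew beer
bottles_per_pack = 6       # assumed pack size
downstream_share = 0.45    # "over 45%" from distribution, retail, storage and waste

print(f"~{six_pack_co2_kg / bottles_per_pack:.2f} kg CO2 per bottle")
print(f"over ~{six_pack_co2_kg * downstream_share:.2f} kg CO2 per 6-pack from downstream stages")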
Packaging
right|thumb|Assortment of beer bottles
Most beers are cleared of yeast by filtering when packaged in bottles and cans.Google books Charles W. Bamforth, Beer: Tap Into the Art and Science of Brewing pp. 58–59, Oxford University Press US (2003), ISBN 0-19-515479-7. Retrieved 29 September 2008. However, bottle conditioned beers retain some yeast—either by being unfiltered, or by being filtered and then reseeded with fresh yeast.Google Books T. Boekhout, Vincent Robert, Yeasts in Food: Beneficial and Detrimental Aspects pp. 370–371, Behr's Verlag DE (2003), ISBN 3-86022-961-3. Retrieved 29 September 2008. It is usually recommended that the beer be poured slowly, leaving any yeast sediment at the bottom of the bottle. However, some drinkers prefer to pour in the yeast; this practice is customary with wheat beers. Typically, when serving a hefeweizen wheat beer, 90% of the contents are poured, and the remainder is swirled to suspend the sediment before pouring it into the glass. Alternatively, the bottle may be inverted prior to opening. Glass bottles are always used for bottle conditioned beers.
Many beers are sold in cans, though there is considerable variation in the proportion between different countries. In Sweden in 2001, 63.9% of beer was sold in cans. People either drink from the can or pour the beer into a glass. A technology developed by Crown Holdings for the 2010 FIFA World Cup is the 'full aperture' can, so named because the entire lid is removed during the opening process, turning the can into a drinking cup. Cans protect the beer from light (thereby preventing "skunked" beer) and have a seal less prone to leaking over time than bottles. Cans were initially viewed as a technological breakthrough for maintaining the quality of a beer, then became commonly associated with less expensive, mass-produced beers, even though the quality of storage in cans is much like bottles. Plastic (PET) bottles are used by some breweries.
Temperature
The temperature of a beer has an influence on a drinker's experience; warmer temperatures reveal the range of flavours in a beer, while cooler temperatures are more refreshing. Most drinkers prefer pale lager to be served chilled, a low- or medium-strength pale ale to be served cool, and a strong barley wine or imperial stout to be served at room temperature.RealBeer Beyond the coldest beer in town, 21 September 2000. Retrieved 11 October 2008.
Beer writer Michael Jackson proposed a five-level scale for serving temperatures: well chilled for "light" beers (pale lagers); chilled for Berliner Weisse and other wheat beers; lightly chilled for all dark lagers, altbier and German wheat beers; cellar temperature for regular British ale, stout and most Belgian specialities; and room temperature for strong dark ales (especially trappist beer) and barley wine.Michael Jackson, Michael Jackson's Beer Companion, Courage Books; 2 edition (27 February 2000), ISBN 0-7624-0772-7
Drinking chilled beer began with the development of artificial refrigeration and, by the 1870s, had spread to those countries that concentrated on brewing pale lager.Google Books Jack S. Blocker, David M. Fahey, Ian R. Tyrrell, Alcohol and Temperance in Modern History pp95, ABC-CLIO (2003), ISBN 978-1-57607-833-4 Chilling beer makes it more refreshing, though below 15.5 °C the chilling starts to reduce taste awarenessGoogle Books Howard Hillman, The New Kitchen Science pp178, Houghton Mifflin Books (2003), ISBN 0-618-24963-X and reduces it significantly at still lower temperatures.Google Books Robert J. Harrington, Food and Wine Pairing: A Sensory Experience pp. 27–28, John Wiley and Sons (2007), ISBN 0-471-79407-4 Beer served unchilled—either cool or at room temperature—reveals more of its flavours. Cask Marque, a non-profit UK beer organisation, has set a temperature standard range of 12°–14 °C (53°–57 °F) for cask ales to be served.Cask Marque Standards & Charters. Retrieved 11 October 2008.
Vessels
thumb|Pilsner glass from Brauerei Schloss Eggenberg
Beer is consumed out of a variety of vessels, such as a glass, a beer stein, a mug, a pewter tankard, a beer bottle or a can; or at music festivals and some bars and nightclubs, from a plastic cup. The shape of the glass from which beer is consumed can influence the perception of the beer and can define and accent the character of the style.F. G. Priest, Graham G. Stewart, Handbook of Brewing (2006), 48 Breweries offer branded glassware intended only for their own beers as a marketing promotion, as this increases sales of their product.
The pouring process has an influence on a beer's presentation. The rate of flow from the tap or other serving vessel, tilt of the glass, and position of the pour (in the centre or down the side) into the glass all influence the end result, such as the size and longevity of the head, lacing (the pattern left by the head as it moves down the glass as the beer is drunk), and the release of carbonation.Google Books Ray Foley, Heather Dismore, Running a Bar For Dummies pp. 211–212, For Dummies (2007), ISBN 0-470-04919-7.
A beer tower is a beer dispensing device, usually found in bars and pubs, that consists of a cylinder attached to a beer cooling device at the bottom. Beer is dispensed from the beer tower into a drinking vessel.
Health effects
Short-term effects
Beer contains ethyl alcohol, the same chemical that is present in wine and distilled spirits and as such, beer consumption has short-term psychological and physiological effects on the user. Different concentrations of alcohol in the human body have different effects on a person. The effects of alcohol depend on the amount an individual has drunk, the percentage of alcohol in the beer and the timespan over which the consumption took place, the amount of food eaten and whether an individual has taken other prescription, over-the-counter or street drugs, among other factors. Drinking enough to cause a blood alcohol concentration (BAC) of 0.03%-0.12% typically causes an overall improvement in mood and possible euphoria, increased self-confidence and sociability, decreased anxiety, a flushed, red appearance in the face and impaired judgment and fine muscle coordination. A BAC of 0.09% to 0.25% causes lethargy, sedation, balance problems and blurred vision. A BAC from 0.18% to 0.30% causes profound confusion, impaired speech (e.g., slurred speech), staggering, dizziness and vomiting. A BAC from 0.25% to 0.40% causes stupor, unconsciousness, anterograde amnesia, vomiting (death may occur due to inhalation of vomit (pulmonary aspiration) while unconscious) and respiratory depression (potentially life-threatening). A BAC from 0.35% to 0.80% causes a coma (unconsciousness), life-threatening respiratory depression and possibly fatal alcohol poisoning. As with all alcoholic drinks, drinking while driving, operating an aircraft or heavy machinery increases the risk of an accident; many countries have severe criminal penalties against drunk driving.
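Because the blood alcohol ranges listed above overlap, a simple lookup has to return every band that a given concentration falls into; the sketch below merely encodes the ranges and abbreviated effects from the preceding paragraph.

# BAC bands and abbreviated effects, as listed in the paragraph above.
BAC_EFFECTS = [
    ((0.03, 0.12), "improved mood, increased confidence, impaired judgment and fine coordination"),
    ((0.09, 0.25), "lethargy, sedation, balance problems, blurred vision"),
    ((0.18, 0.30), "profound confusion, slurred speech, staggering, dizziness, vomiting"),
    ((0.25, 0.40), "stupor, unconsciousness, anterograde amnesia, respiratory depression"),
    ((0.35, 0.80), "coma, life-threatening respiratory depression, possibly fatal poisoning"),
]

def effects_at(bac_percent):
    # Return the summary for every band covering the given BAC (ranges overlap).
    return [desc for (low, high), desc in BAC_EFFECTS if low <= bac_percent <= high]

print(effects_at(0.10))  # falls in both the first and second bands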
Alcohol acts as a magnesium diuretic, causing a prompt, vigorous increase in the urinary excretion of magnesium and other electrolytes.
Long-term effects
The main active ingredient of beer is alcohol, and therefore, the health effects of alcohol apply to beer. Consumption of small quantities of alcohol (less than one drink in women and two in men) is associated with a decreased risk of cardiac disease, stroke and diabetes mellitus. The long term health effects of continuous, moderate or heavy alcohol consumption include the risk of developing alcoholism and alcoholic liver disease.
Alcoholism, also known as "alcohol use disorder", is a broad term for any drinking of alcohol that results in problems. It was previously divided into two types: alcohol abuse and alcohol dependence. In a medical context, alcoholism is said to exist when two or more of the following conditions are present: a person drinks large amounts over a long time period, has difficulty cutting down, acquiring and drinking alcohol takes up a great deal of time, alcohol is strongly desired, usage results in not fulfilling responsibilities, usage results in social problems, usage results in health problems, usage results in risky situations, withdrawal occurs when stopping, and alcohol tolerance has occurred with use. Alcoholism reduces a person's life expectancy by around ten years, and alcohol use is the third leading cause of early death in the United States. No professional medical association recommends that people who are nondrinkers should start drinking.Alcohol and Heart Health American Heart Association A total of 3.3 million deaths (5.9% of all deaths) are believed to be due to alcohol.
Beers vary in their nutritional content. Brewer's yeast is known to be a rich source of nutrients; therefore, as expected, beer can contain significant amounts of nutrients, including magnesium, selenium, potassium, phosphorus, biotin, chromium and B vitamins. Beer is sometimes referred to as "liquid bread".
Overeating and lack of muscle tone, rather than beer consumption, are considered the main causes of a "beer belly". A 2004 study, however, found a link between binge drinking and a beer belly; even so, in most cases of overconsumption the problem lies more with lack of exercise and excess carbohydrate intake than with the product itself. Several diet books quote beer as having an undesirably high glycemic index of 110, the same as maltose; however, the maltose in beer is metabolised by yeast during fermentation, so that beer consists mostly of water, hop oils and only trace amounts of sugars, including maltose.
Society and culture
thumb|right|A tent at Munich's Oktoberfest—the world's largest beer festival
In many societies, beer is the most popular alcoholic drink. Various social traditions and activities are associated with beer drinking, such as playing cards, darts, or other pub games; attending beer festivals; engaging in zythology (the study of beer); visiting a series of pubs in one evening; visiting breweries; beer-oriented tourism; or rating beer.Leslie Dunkling & Michael Jackson, The Guinness Drinking Companion, Lyons Press (2003), ISBN 1-58574-617-7 Drinking games, such as beer pong, are also popular.Best Drinking Game Book Ever, Carlton Books (28 October 2002), ISBN 1-85868-560-5 A relatively new profession is that of the beer sommelier, who informs restaurant patrons about beers and food pairings.
Beer is considered to be a social lubricant in many societies and is consumed in countries all over the world. There are breweries in Middle Eastern countries such as Syria, and in some African countries. Sales of beer are four times those of wine, which is the second most popular alcoholic drink.
A study published in the journal Neuropsychopharmacology in 2013 found that the flavour of beer alone could provoke dopamine activity in the brains of male participants, who wanted to drink more as a result. The 49 men in the study underwent positron emission tomography scans while a computer-controlled device sprayed minute amounts of beer, water and a sports drink onto their tongues. Compared with the taste of the sports drink, the taste of beer significantly increased the participants' desire to drink. Test results indicated that the flavour of the beer triggered a dopamine release, even though the alcohol content of the spray was insufficient to cause intoxication.
Some breweries have developed beers to pair with food. Wine writer Malcolm Gluck disputed the need to pair beer with food, while beer writers Roger Protz and Melissa Cole contested that claim.Protz, Roger, The Guardian: Word of Mouth (15 January 2009). Let's hear it for beerCole, Melissa, The Guardian: Word of Mouth (27 January 2009). The eye of the ale stormThe Guardian: Word of Mouth (6 February 2009). Beer-drinking sadsacks strike back
Related drinks
Around the world, there are many traditional and ancient starch-based drinks classed as beer. In Africa, there are various ethnic beers made from sorghum or millet, such as Oshikundu in Namibia and Tella in Ethiopia. Kyrgyzstan also has a beer made from millet; it is a low alcohol, somewhat porridge-like drink called "Bozo". Bhutan, Nepal, Tibet and Sikkim also use millet in Chhaang, a popular semi-fermented rice/millet drink in the eastern Himalayas. Further east in China are found Huangjiu and Choujiu—traditional rice-based beverages related to beer.
The Andes in South America has Chicha, made from germinated maize (corn); while the indigenous peoples in Brazil have Cauim, a traditional beverage made since pre-Columbian times by chewing manioc so that an enzyme (amylase) present in human saliva can break down the starch into fermentable sugars;Books.google.co.uk, Lewin Louis and Louis Levin, Phantastica: A Classic Survey on the Use and Abuse of Mind-Altering Plants, Inner Traditions / Bear & Company (1998), ISBN 0-89281-783-6 this is similar to Masato in Peru.
Some beers are made from bread, a practice linked to the earliest forms of beer; examples include Sahti in Finland, Kvass in Russia and Ukraine, and Bouza in Sudan.
Chemistry
Beer contains the phenolic acids 4-hydroxyphenylacetic acid, vanillic acid, caffeic acid, syringic acid, p-coumaric acid, ferulic acid and sinapic acid. Alkaline hydrolysis experiments show that most of the phenolic acids are present in bound forms and only a small portion can be detected as free compounds. Hops, and beer made with them, contain 8-prenylnaringenin, which is a potent phytoestrogen. Hops also contain myrcene, humulene, xanthohumol, isoxanthohumol, myrcenol, linalool, tannins and resin. The alcohol 2M2B is a component of the hops used in brewing.
Barley, in the form of malt, brings the condensed tannins prodelphinidins B3, B9 and C2 into beer. Tryptophol, tyrosol and phenylethanol are aromatic higher alcohols (also known as congeners) found in beer as secondary products of alcoholic fermentation by Saccharomyces cerevisiae.
See also
List of countries by beer consumption per capita
Beer and breweries by region
Beer jam
Gluten-free beer
List of barley-based beverages
List of beverages
Kegger
Pub
References
Bibliography
Alexander, Jeffrey W. Brewed in Japan: The Evolution of the Japanese Beer Industry (University of British Columbia Press; 2013) 316 pages
Archeological Parameters For the Origins of Beer. Thomas W. Kavanagh.
The Barbarian's Beverage: A History of Beer in Ancient Europe, Max Nelson. ISBN 0-415-31121-7.
The World Guide to Beer, Michael Jackson. ISBN 1-85076-000-4
The New World Guide to Beer, Michael Jackson. ISBN 0-89471-884-3
Beer: The Story of the Pint, Martyn Cornell. ISBN 0-7553-1165-5
Beer and Britannia: An Inebriated History of Britain, Peter Haydon. ISBN 0-7509-2748-8
The Book of Beer Knowledge: Essential Wisdom for the Discerning Drinker, a Useful Miscellany, Jeff Evans. ISBN 1-85249-198-1
Country House Brewing in England, 1500–1900, Pamela Sambrook. ISBN 1-85285-127-9
Ale, Beer and Brewsters in England: Women's Work in a Changing World, 1300–1600, Judith M. Bennett. ISBN 0-19-512650-5
A History of Beer and Brewing, I. Hornsey. ISBN 0-85404-630-5
Beer: an Illustrated History, Brian Glover. ISBN 1-84038-597-9
Beer in America: The Early Years 1587–1840—Beer's Role in the Settling of America and the Birth of a Nation, Gregg Smith. ISBN 0-937381-65-9
Big Book of Beer, Adrian Tierney-Jones. ISBN 1-85249-212-0
Gone for a Burton: Memories from a Great British Heritage, Bob Ricketts. ISBN 1-905203-69-1
Farmhouse Ales: Culture and Craftsmanship in the Belgian Tradition, Phil Marowski. ISBN 0-937381-84-5
The World Encyclopedia of Beer, Brian Glover. ISBN 0-7548-0933-1
The Complete Joy of Homebrewing, Charlie Papazian ISBN 0-380-77287-6
The Brewmaster's Table, Garrett Oliver. ISBN 0-06-000571-8
Bacchus and Civic Order: The Culture of Drink in Early Modern Germany, Ann Tlusty. ISBN 0-8139-2045-0
Category:Brewing
Category:Fermented drinks
Category:Alcoholic drinks
Category:Cold drinks
Spectre (2015 film)
Spectre is the twenty-fourth James Bond film produced by Eon Productions and the twenty-sixth overall. It features Daniel Craig in his fourth performance as James Bond, and Christoph Waltz as Ernst Stavro Blofeld, with the film marking the character's re-introduction into the series. It was directed by Sam Mendes as his second James Bond film following Skyfall, with a screenplay written by John Logan, Neal Purvis, Robert Wade and Jez Butterworth. It is distributed by Metro-Goldwyn-Mayer and Columbia Pictures. With a budget around $245 million, it is the most expensive Bond film and one of the most expensive films ever made.
The story sees Bond pitted against the global criminal organisation Spectre and its leader, Ernst Stavro Blofeld, who is revealed to be Bond's foster brother. Bond attempts to thwart Blofeld's plan to launch a global surveillance network, and discovers that Spectre and Blofeld were behind the events of the previous three films. The film marks Spectre's first appearance in an Eon Productions film since 1971's Diamonds Are Forever. Several recurring James Bond characters, including M, Q and Eve Moneypenny, return, with the new additions of Léa Seydoux as Dr. Madeleine Swann, Dave Bautista as Mr. Hinx, Andrew Scott as Max Denbigh and Monica Bellucci as Lucia Sciarra. Spectre was filmed from December 2014 to July 2015, with locations in Austria, the United Kingdom, Italy, Morocco and Mexico.
The film was released on 26 October 2015 in the United Kingdom on the same night as the world premiere at the Royal Albert Hall in London, followed by a worldwide release which included IMAX screenings. It was released in the United States one week later, on 6 November. Upon its release, the film received mixed to positive reviews: critics praised the action sequences, style, suspenseful atmosphere and acting, with both Waltz and Bautista widely praised for their performances as Blofeld and Hinx respectively, but found the script and theme song lacking. The theme song, "Writing's on the Wall", performed by British singer Sam Smith, won the Academy Award for Best Original Song and the corresponding Golden Globe. Spectre grossed over $880 million worldwide, the second-largest unadjusted gross in the series after its predecessor Skyfall.
Plot
Following Gareth Mallory's promotion to M, James Bond takes leave from MI6. Receiving a posthumous message from the previous M, Bond carries out an unauthorised mission in Mexico City, killing three men plotting a terrorist bombing of a stadium during Day of the Dead celebrations, before giving chase to their leader, Marco Sciarra. In the ensuing struggle, Bond steals Sciarra's ring, which is emblazoned with a stylised octopus, and kills him. Upon returning to London, Bond is indefinitely suspended from field duty by M. Parallel to this, M is in the midst of a power struggle with Max Denbigh (whom Bond dubs "C"), the head of a privately backed agency, the Joint Intelligence Service. C campaigns for Britain to form "Nine Eyes", a global surveillance and intelligence co-operation initiative, and uses his influence to close down the '00' section, which he believes to be outdated.
Bond disobeys M's order and travels to Rome to attend Sciarra's funeral. That evening he seduces Sciarra's widow Lucia, who tells him about Spectre, an organisation of businessmen with criminal and terrorist connections to which her husband belonged. Bond uses Sciarra's ring to infiltrate a Spectre meeting, where he identifies the leader, Franz Oberhauser. When Oberhauser addresses Bond by name, he is pursued across the city by Spectre's assassin, Mr. Hinx. Moneypenny informs Bond that the information he collected leads to Mr. White, a former member of Quantum—a subsidiary of Spectre—who has fallen afoul of Oberhauser and has been marked for assassination. Bond asks her to investigate Oberhauser, who was presumed dead years earlier.
Bond locates White in Austria, where he learns that White is dying of thallium poisoning. White admits to having grown disenchanted with Quantum and tells Bond to find and protect his daughter, Dr. Madeleine Swann, who will take him to L'Américain; this will in turn lead him to Spectre. White then commits suicide. Bond approaches Swann, and after rescuing her from Hinx, the two meet Q. Through Sciarra's ring, Q forensically links Oberhauser to Bond's previous missions, identifying Le Chiffre, Dominic Greene and Raoul Silva as Spectre agents. Swann reveals that L'Américain is a hotel in Tangier.
The two travel to the hotel and discover that White left evidence directing them to Oberhauser's operations base in the desert. After an encounter with Hinx that sees the assassin killed, Bond and Swann are escorted to Oberhauser's base. There, Oberhauser reveals that Spectre has been funding the Joint Intelligence Service while staging terrorist attacks around the world, creating a need for the Nine Eyes programme. In return, C will give Spectre unlimited access to intelligence gathered by Nine Eyes, allowing them to anticipate and counteract investigations into their operations. Bond is tortured as Oberhauser discusses their shared history: after the younger Bond was orphaned, Oberhauser's father, Hannes, became his temporary guardian. Believing that Bond supplanted his role as son, Oberhauser killed his father and staged his own death, subsequently adopting the name Ernst Stavro Blofeld and going on to form Spectre. Bond and Swann overpower him and escape, destroying the base in an explosion and leaving Blofeld to die.
As the Moroccan facility was one node in a wider network, Bond and Swann return to London, where they meet M, Bill Tanner, Q, and Moneypenny with the intention of arresting C and stopping Nine Eyes from being activated. Swann and Bond are abducted separately, while the rest of the group proceed with the plan. After Q succeeds in preventing Nine Eyes from going online, a brief struggle between M and C ends with C falling to his death. Meanwhile, Bond is taken to the old MI6 building, which is scheduled for demolition. Moving through the ruined labyrinth, he encounters a disfigured Blofeld, who tells him that he can either escape the building before the explosives are detonated or die trying to save Swann. Bond finds Swann and the two escape by boat as the building collapses.
Later, Bond shoots down Blofeld's helicopter, which crashes onto Westminster Bridge. As Blofeld crawls away from the wreckage, Bond confronts him but chooses not to kill him, leaving him to be arrested by M before departing the bridge with Swann.
Cast
thumb|upright|Christoph Waltz as Ernst Stavro Blofeld
Daniel Craig as James Bond, agent 007. The director Sam Mendes has described Bond as being extremely focused in Spectre, likening his new-found dedication to that of a hunter.
Christoph Waltz as Ernst Stavro Blofeld
Léa Seydoux as Dr. Madeleine Swann
Ben Whishaw as Q
Naomie Harris as Eve Moneypenny
Dave Bautista as Mr. Hinx. He is loosely based on the assassin Donald Grant from the 1963 Bond film From Russia with Love.
Andrew Scott as Max Denbigh
Monica Bellucci as Lucia Sciarra
Ralph Fiennes as M
Rory Kinnear as Bill Tanner, the MI6 Chief of Staff
Jesper Christensen as Mr. White
Alessandro Cremona as Marco Sciarra
Judi Dench as Mallory's predecessor as M.
Production
Copyright status
The ownership of the Spectre organisation—originally stylised "SPECTRE" as an acronym of SPecial Executive for Counter-intelligence, Terrorism, Revenge and Extortion—and its characters had been at the centre of long-standing litigation starting in 1961 between Ian Fleming and Kevin McClory over the film rights to the novel Thunderball. The dispute began after Fleming incorporated elements of an undeveloped film script written by McClory and screenwriter Jack Whittingham—including characters and plot points—into Thunderball, which McClory contested in court, claiming ownership over elements of the novel. In 1963, Fleming settled out of court with McClory, in an agreement which awarded McClory the film rights. This enabled him to become a producer for the 1965 film Thunderball—with Albert R. Broccoli and Harry Saltzman as executive producers—and the non-Eon film Never Say Never Again, an updated remake of Thunderball, in 1983. A second remake, entitled Warhead 2000 A.D., was planned for production and release in the 1990s before being abandoned. Under the terms of the 1963 settlement, the literary rights stayed with Fleming, allowing the Spectre organisation and associated characters to continue appearing in print.
In November 2013 MGM and the McClory estate formally settled the issue with Danjaq, LLC—sister company of Eon Productions—with MGM acquiring the full copyright film rights to the concept of Spectre and all of the characters associated with it. With the acquisition of the film rights and the organisation's re-introduction to the series' continuity, the SPECTRE acronym was discarded and the organisation reimagined as "Spectre".
Pre-production
thumb|upright|right|Sam Mendes returned as director.
In March 2013 Mendes said he would not return to direct the next film in the series, then known as Bond 24; he later recanted and announced that he would return, as he found the script and the plans for the long-term future of the franchise appealing. In directing Skyfall and Spectre, Mendes became the first director to oversee two consecutive Bond films since John Glen directed The Living Daylights and Licence to Kill in 1987 and 1989. Dennis Gassner returned as the film's production designer, while cinematographer Hoyte van Hoytema took over from Roger Deakins. In July 2015 Mendes noted that the combined crew of Spectre numbered over one thousand, making it a larger production than Skyfall. Craig is listed as co-producer. He considered the credit a high point of his career, saying "I'm just so proud of the fact that my name comes up somewhere else on the titles."
In November 2014, Sony Pictures Entertainment was targeted by hackers who released details of confidential e-mails between Sony executives regarding several high-profile film projects. Included within these were several memos relating to the production of Spectre, claiming that the film was over budget, detailing early drafts of the script written by John Logan, and expressing Sony's frustration with the project. Eon Productions later issued a statement confirming the leak of what they called "an early version of the screenplay".
In July 2016, Nicolas Winding Refn revealed that he had turned down an offer to direct the film.
Writing
Spectre marked the return of many scriptwriters from the previous Bond films, such as Skyfall writer John Logan; Neal Purvis and Robert Wade, who had worked on five previous Bond films; and British playwright Jez Butterworth, who had previously made uncredited contributions to Skyfall. Butterworth was brought in to polish the script, helped by Mendes and Craig. He said his changes involved adding what he would have liked to see as a teenager and limiting the scenes of Bond talking to other men, as "Bond shoots other men—he doesn't sit around chatting to them. So you put a line through that." With the acquisition of the rights to Spectre and its associated characters, Purvis and Wade revealed that the film would provide a minor retcon to the continuity of the previous films, with the Quantum organisation alluded to in Casino Royale and introduced in Quantum of Solace reimagined as a division within Spectre rather than an independent organisation.
Despite being an original story, Spectre draws on Ian Fleming's source material, most notably in the character of Franz Oberhauser, played by Christoph Waltz, and his father Hannes. Hannes Oberhauser is a background character in the short story "Octopussy" from the Octopussy and The Living Daylights collection, and is named in the film as having been a temporary legal guardian of a young Bond in 1983. As Sam Mendes searched for events in young Bond's life to follow the childhood discussed in Skyfall, he came across Hannes Oberhauser, who becomes a father figure to Bond. From there Mendes conceived the idea of "a natural child who had been pushed out, cuckoo in the nest" by Bond, which became Franz. Similarly, Charmian Bond is shown to have been his full-time guardian, observing the back story established by Fleming.
Casting
thumb|upright|right|At the age of 50, Monica Bellucci became the oldest actress to be cast as a Bond girl.
The main cast was revealed in December 2014 at the 007 Stage at Pinewood Studios. Daniel Craig returned for his fourth appearance as James Bond, while Ralph Fiennes, Naomie Harris and Ben Whishaw reprised their roles as M, Eve Moneypenny and Q respectively, having been established in Skyfall. Rory Kinnear also reprised his role as Bill Tanner in his third appearance in the series.
Christoph Waltz was cast in the role of Franz Oberhauser, though he refused to comment on the nature of the part. It was later revealed with the film's release that he is Ernst Stavro Blofeld. Dave Bautista was cast as Mr. Hinx after producers sought an actor with a background in contact sports. After casting Bérénice Lim Marlohe, a relative newcomer, as Sévérine in Skyfall, Mendes consciously sought out a more experienced actor for the role of Madeleine Swann, ultimately casting Léa Seydoux in the role. Monica Bellucci joined the cast as Lucia Sciarra, becoming, at the age of fifty, the oldest actress to be cast as a Bond girl. In a separate interview with Danish website Euroman, Jesper Christensen revealed he would be reprising his role as Mr. White from Casino Royale and Quantum of Solace. Christensen's character was reportedly killed off in a scene intended to be used as an epilogue to Quantum of Solace, before it was removed from the final cut of the film, enabling his return in Spectre.
In addition to the principal cast, Alessandro Cremona was cast as Marco Sciarra, Stephanie Sigman was cast as Estrella, and Detlef Bothe was cast as a villain for scenes shot in Austria. In February 2015 over fifteen hundred extras were hired for the pre-title sequence set in Mexico, though they were duplicated in the film, giving the effect of around ten thousand extras.
Filming
Mendes revealed that production would begin on 8 December 2014 at Pinewood Studios, with filming taking seven months. Mendes also confirmed several filming locations, including London, Mexico City and Rome. Van Hoytema shot the film on Kodak 35 mm film stock. Early filming took place at Pinewood Studios, and around London, with scenes variously featuring Craig and Harris at Bond's flat, and Craig and Kinnear travelling down the River Thames.
Filming started in Austria in December 2014, with production taking in the area around Sölden—including the Ötztal Glacier Road, Rettenbach glacier and the adjacent ski resort and cable car station—and Obertilliach and Lake Altaussee, before concluding in February 2015. Scenes filmed in Austria centred on the Ice Q Restaurant, standing in for the fictional Hoffler Klinik, a private medical clinic in the Austrian Alps. Filming included an action scene featuring a Land Rover Defender Bigfoot and a Range Rover Sport. Production was temporarily halted, first by an injury to Craig, who sprained his knee whilst shooting a fight scene, and later by an accident involving a filming vehicle that left three crew members injured, at least one of them seriously.
Filming temporarily returned to England to shoot scenes at Blenheim Palace in Oxfordshire, which stood in for a location in Rome, before moving on to the city itself for a five-week shoot across the city, with locations including the Ponte Sisto bridge and the Roman Forum. The production faced opposition from a variety of special interest groups and city authorities, who were concerned about the potential for damage to historical sites around the city, and problems with graffiti and rubbish appearing in the film. A car chase scene set along the banks of the Tiber River and through the streets of Rome featured an Aston Martin DB10 and a Jaguar C-X75. The C-X75 was originally developed as a hybrid electric vehicle with four independent electric engines powered by two jet turbines, before the project was cancelled. The version used for filming was converted to use a conventional internal combustion engine, to minimise the potential for disruption from mechanical problems with the complex hybrid system. The C-X75s used for filming were developed by the engineering division of Formula One racing team Williams, who built the original C-X75 prototype for Jaguar.
With filming completed in Rome, production moved to Mexico City in late March to shoot the film's opening sequence, with scenes to include the Day of the Dead festival filmed in and around the Zócalo and the Centro Histórico district. The planned scenes required the city square to be closed for filming a sequence involving a fight aboard a Messerschmitt-Bölkow-Blohm Bo 105 helicopter flown by stunt pilot Chuck Aaron, which called for modifications to be made to several buildings to prevent damage. This particular scene in Mexico required 1,500 extras, 10 giant skeletons and 250,000 paper flowers. Reports in the Mexican media added that the film's second unit would move to Palenque in the state of Chiapas, to film aerial manoeuvres considered too dangerous to shoot in an urban area.
Following filming in Mexico, and during a scheduled break, Craig was flown to New York to undergo minor surgery to fix his knee injury. It was reported that filming was not affected and he had returned to filming at Pinewood Studios as planned on 22 April.
A brief shoot at London's City Hall was filmed on 18 April 2015, while Mendes was on location. On 17 May 2015 filming took place on the Thames in London. Stunt scenes involving Craig and Seydoux on a speedboat as well as a low flying helicopter near Westminster Bridge were shot at night, with filming temporarily closing both Westminster and Lambeth Bridges. Scenes were also shot on the river near MI6's headquarters at Vauxhall Cross. The crew returned to the river less than a week later to film scenes solely set on Westminster Bridge. The London Fire Brigade was on set to simulate rain as well as monitor smoke used for filming. Craig, Seydoux, and Waltz, as well as Harris and Fiennes, were seen being filmed. Prior to this, scenes involving Fiennes were shot at a restaurant in Covent Garden. Filming then took place in Trafalgar Square. In early June, the crew, as well as Craig, Seydoux, and Waltz, returned to the Thames for a final time to continue filming scenes previously shot on the river.
After wrapping up in England, production travelled to Morocco in June, with filming taking place in Oujda, Tangier and Erfoud, after preliminary work was completed by the production's second unit. Spectre's headquarters in Morocco was located at Gara Medouar, a 'crater' caused by erosion rather than by volcanic activity or an impact.http://www.majorforms.com/article_view.php?id=184828 An explosion filmed in Morocco holds a Guinness World Record for the "Largest film stunt explosion" in cinematic history, with the record credited to production designer Chris Corbould. Principal photography concluded on 5 July 2015, after 128 days of filming, and a wrap party was held before the film entered post-production.
While the production was filming in Mexico City, speculation in the media claimed that the script had been altered to accommodate the demands of Mexican authorities—reportedly influencing details of the scene and characters, casting choices, and modifying the script to portray the country in a "positive light"—in order to secure tax concessions and financial support worth up to $20 million for the film. This was denied by producer Michael G. Wilson, who stated that the scene had always been intended to be shot in Mexico, as production had been attracted to the imagery of the Day of the Dead, and that the script had been developed from there. Production of Skyfall had previously faced similar problems while attempting to secure permits to shoot the film's pre-title sequence in India before moving to Istanbul.
Music
Thomas Newman returned as Spectre's composer. Rather than composing the score once the film had moved into post-production, Newman worked during filming. The theatrical trailer released in July 2015 contained a rendition of John Barry's On Her Majesty's Secret Service theme. Mendes revealed that the final film would have more than one hundred minutes of music. The soundtrack album was released on 23 October 2015 in the UK and 6 November 2015 in the US on the Decca Records label.
In September 2015 it was announced that Sam Smith and regular collaborator Jimmy Napes had written the film's title theme, "Writing's on the Wall", with Smith performing it for the film. Smith said the song came together in one session and that he and Napes wrote it in under half an hour before recording a demo. Satisfied with the quality, the filmmakers used the demo in the final release. The English band Radiohead also composed a song for the film, but it was rejected, according to guitarist Jonny Greenwood, for being "too dark".
"Writing's on the Wall" was released as a download on 25 September 2015. It received mixed reviews from critics and fans, particularly in comparison to Adele's "Skyfall", leading to Shirley Bassey trending on Twitter on the day it was released. It became the first Bond theme to reach number one in the UK Singles Chart. The song was nominated for and won the Academy Award for Best Original Song. It was the second time a Bond song had won, and only the fifth time one had been nominated. It also won the Golden Globe Award for Best Original Song at the 73rd Golden Globe Awards.
Marketing
thumb|right|The Williams FW37 of Felipe Massa (front) carrying the 007 logo on its wing mirrors at the 2015 Mexican Grand Prix.
During the December 2014 press conference announcing the start of filming, Aston Martin and Eon unveiled the new DB10 as the official car for the film. The DB10 was designed in collaboration between Aston Martin and the filmmakers, with only 10 being produced especially for Spectre as a celebration of the 50th anniversary of the company's association with the franchise. Only eight of those 10 were used for the film, however; the remaining two were used for promotional work. After modifying the Jaguar C-X75 for the film, Williams F1 carried the 007 logo on their cars at the 2015 Mexican Grand Prix, with the team playing host to the cast and crew ahead of the Mexican premiere of the film.
To promote the film, the marketing team continued the trend established during Skyfall's production of releasing still images of clapperboards and video blogs on Eon's official social media accounts.
On 13 March 2015, several members of the cast and crew, including Craig, Whishaw, Wilson and Mendes, as well as previous James Bond actor, Sir Roger Moore, appeared in a sketch written by David Walliams and the Dawson Brothers for Comic Relief's Red Nose Day on BBC One. In the sketch, they film a behind-the-scenes mockumentary on the filming of Spectre. The first teaser trailer for Spectre was released worldwide in March 2015, followed by the theatrical trailer in July and the final trailer in October.
Release
Spectre had its world premiere in London on 26 October 2015 at the Royal Albert Hall, the same day as its general release in the United Kingdom and Republic of Ireland. Following the announcement of the start of filming, Paramount Pictures brought forward the release of Mission: Impossible – Rogue Nation to avoid competing with Spectre. In March 2015 IMAX corporation announced that Spectre would be screened in its cinemas, following Skyfall's success with the company. In the UK it received a wider release than Skyfall, with a minimum of 647 cinemas including 40 IMAX screens, compared to Skyfall's 587 locations and 21 IMAX screens.
Reception
Box office
Spectre grossed $880.7 million worldwide; $135.5 million of the takings were generated from the UK market and $200.1 million from North America. Worldwide, this made it the second highest-grossing James Bond film after Skyfall, and the sixth highest-grossing film of 2015. Deadline.com calculated the net profit of the film to be $98.4 million when factoring together all expenses and revenues for the film.
In the United Kingdom, the film grossed £4.1 million ($6.4 million) from its Monday preview screenings. It grossed £6.3 million ($9.2 million) on its opening day and £5.7 million ($8.8 million) on the Wednesday, setting UK records for both days. In its first seven days it grossed £41.7 million ($63.8 million), breaking the UK record for the highest first-week opening, set by Harry Potter and the Prisoner of Azkaban's £23.88 million ($36.9 million) in 2004.Spectre breaks UK Box Office Records: The biggest UK opening of all time, 007.com (2 November 2015) Its Friday–Saturday gross was £20.4 million ($31.2 million), compared with Skyfall's £20.1 million ($31 million). The film also broke the record for the best per-screen opening average with $110,000, previously held by The Dark Knight with $100,200. It grossed a total of $136.3 million in the UK, where it also surpassed Avatar to become the country's highest-grossing IMAX release with $10.09 million. Spectre opened in Germany with $22.45 million (including previews), which included a new record for the biggest Saturday of all time; in Australia with $8.7 million (including previews); and in South Korea with $8.2 million (including previews). Despite the 13 November Paris attacks, which led to numerous theatres being closed, the film opened with $14.6 million (including $2 million in previews) in France. In Mexico, where part of the film was shot, it debuted with $4.5 million, more than double the opening of Skyfall. It also bested its predecessor's opening in Nordic markets where MGM distributed the film, such as Finland ($2.66 million) and Norway ($2.91 million), and in other markets including Denmark ($4.2 million), the Netherlands ($3.38 million) and Sweden ($3.1 million). In India, it opened at No. 1 with $4.8 million, 4% above the opening of Skyfall. It topped the German-speaking Switzerland box office for four weeks, and in the Netherlands it held the No. 1 spot for seven straight weeks, overtaking Minions to become the top film of the year there. The top-earning markets were Germany ($70.3 million) and France ($38.8 million). In Paris, it recorded the second-highest ticket sales of all time with 4.1 million tickets sold, behind only Spider-Man 3, which sold over 6.32 million tickets in 2007.
In the United States and Canada the film opened on 6 November 2015, and in its opening weekend was originally projected to gross $70–75 million from 3,927 screens, the widest release for a Bond film. However, after it grossed $5.25 million from its early Thursday night showings and $28 million on its opening day, weekend projections were increased to $75–80 million. The film ended up grossing $70.4 million in its opening weekend (about $20 million less than Skyfall's $90.6 million debut, including IMAX previews), but nevertheless finished first at the box office. IMAX screens generated $9.1 million for Spectre at 374 locations, and premium large-format screens made $8 million from 429 cinemas, accounting for 11% of the film's opening; in total, Spectre earned $17.1 million (23%) of its opening-weekend gross in large-format venues. Cinemark XD generated $1.85 million in 112 XD locations.
In China, the film opened on 12 November and earned $15 million on its opening day, the second-biggest 2D single-day gross for a Hollywood film behind the $18.5 million opening day of Mission: Impossible – Rogue Nation, occupying 43% of all available screens and including $790,000 in advance night screenings. Through its opening weekend, it earned $48.1 million from 14,700 screens, 198% ahead of Skyfall and a new record for a Hollywood 2D opening. IMAX contributed $4.6 million on 246 screens, also a new record for a three-day opening of a November release (breaking Interstellar's record). In its second weekend, it added $12.1 million, falling precipitously by 75%, the second-worst second-weekend drop for any major Hollywood release in China in 2015. It grossed a total of $84.7 million there after four weekends (foreign films in China play for 30 days only, unless granted special extensions). Despite a strong opening, it failed to reach the projected $100 million mark there, owing to a mixed response from critics and audiences as well as competition from local films.
Critical response
Spectre received generally polarised reviews, with critics praising the action sequences, cinematography, score, and acting, but criticising the screenwriting as uneven and formulaic. Rotten Tomatoes sampled 307 reviews and judged 65% of them to be positive, saying that the film "nudges Daniel Craig's rebooted Bond closer to the glorious, action-driven spectacle of earlier entries, although it's admittedly reliant on established 007 formula." On Metacritic, the film has a score of 60 out of 100, based on 48 critics, indicating "mixed or average reviews". Audiences polled by CinemaScore gave the film an average grade of "A−" on an A+ to F scale.
Prior to its UK release, Spectre mostly received positive reviews. Mark Kermode, writing in The Guardian, gave the film four out of five stars, observing that it did not live up to the standard set by Skyfall but was able to tap into audience expectations. Writing in the same publication, Peter Bradshaw gave the film a full five stars, calling it "inventive, intelligent and complex" and singling out Craig's performance as the film's highlight. In another five-star review, The Daily Telegraph's Robbie Collin described Spectre as "a swaggering show of confidence", lauding it as "a feat of pure cinematic necromancy." Positive yet critical assessments included Kim Newman of Sight and Sound, who wrote that "for all its wayward plotting (including an unhelpful tie-in with Bond's childhood that makes very little sense) and off-the-peg elements, Spectre works", though he felt "the audience's patience gets tested by two and a half hours of set-pieces strung on one of the series' thinner plots"; and IGN's Chris Tilly, who rated the film 7.2 out of 10, considering Spectre "solid if unspectacular" and concluding that "the film falls frustratingly short of greatness."
Critical appraisal of the film was mixed in the United States. In a lukewarm review for RogerEbert.com, Matt Zoller Seitz gave the film 2.5 stars out of 4, describing Spectre as inconsistent and unable to capitalise on its potential. Kenneth Turan, reviewing the film for Los Angeles Times, concluded that Spectre "comes off as exhausted and uninspired". Manohla Dargis of The New York Times panned the film as having "nothing surprising" and sacrificing its originality for the sake of box office returns. Forbes' Scott Mendelson also heavily criticised the film, denouncing Spectre as "the worst 007 movie in 30 years". Darren Franich of Entertainment Weekly viewed Spectre as "an overreaction to our current blockbuster moment", aspiring "to be a serialized sequel" and proving "itself as a Saga". While noting that "[n]othing that happens in Spectre holds up to even minor logical scrutiny", he had "come not to bury Spectre, but to weirdly praise it. Because the final act of the movie is so strange, so willfully obtuse, that it deserves extra attention." Christopher Orr, writing in The Atlantic, also criticised the film, saying that Spectre "backslides on virtually every [aspect]". Lawrence Toppman of The Charlotte Observer called Craig's performance "Bored, James Bored." Alyssa Rosenberg, writing for The Washington Post, stated that the film turned into "a disappointingly conventional Bond film."
In a positive review published in Rolling Stone, Peter Travers gave the film 3.5 stars out of 4, describing Spectre as "party time for Bond fans, a fierce, funny, gorgeously produced valentine to the longest-running franchise in movies". Mick LaSalle from the San Francisco Chronicle raved that "One of the great satisfactions of Spectre is that, in addition to all the stirring action, and all the timely references to a secret organization out to steal everyone's personal information, we get to believe in Bond as a person." Stephen Whitty from The New York Daily News, who awarded the film four out of five stars, stated that "Craig is cruelly efficient. Dave Bautista makes a good, Oddjob-like assassin. And while Lea Seydoux doesn't leave a huge impression as this film's 'Bond girl', perhaps it's because we've already met — far too briefly — the hypnotic Monica Bellucci, as the first real 'Bond woman' since Diana Rigg." Chicago Sun-Times film reviewer Richard Roeper, who gave the film three stars out of four, considered the film "solidly in the middle of the all-time rankings, which means it's still a slick, beautifully photographed, action-packed, international thriller with a number of wonderfully, ludicrously entertaining set pieces, a sprinkling of dry wit, myriad gorgeous women and a classic psycho-villain who is clearly out of his mind but seems to like it that way." Michael Phillips, reviewing for the Chicago Tribune, stated, "For all its workmanlike devotion to out-of-control helicopters, Spectre works best when everyone's on the ground, doing his or her job, driving expensive fast cars heedlessly, detonating the occasional wisecrack, enjoying themselves and their beautiful clothes." Variety film critic Guy Lodge complained in his review that "What's missing is the unexpected emotional urgency of Skyfall, as the film sustains its predecessor's nostalgia kick with a less sentimental bent."
Home media
Spectre was released for digital HD on 22 January 2016 and on DVD and Blu-ray on 9 and 22 February 2016 in the US and UK respectively.
Accolades
Academy Awards – Best Original Song: "Writing's on the Wall" (Sam Smith & Jimmy Napes)
Golden Globe Awards – Best Original Song: "Writing's on the Wall" (Sam Smith & Jimmy Napes)
Critics' Choice Awards – Best Song: "Writing's on the Wall" (Sam Smith & Jimmy Napes); Best Actor in an Action Movie: Daniel Craig
St. Louis Gateway Film Critics Association Awards – Best Song: "Writing's on the Wall" (Sam Smith & Jimmy Napes)
Houston Film Critics Society Awards – Best Original Song: "Writing's on the Wall" (Sam Smith & Jimmy Napes)
Art Directors Guild Awards – Production Design for a Contemporary Film: Dennis Gassner
Satellite Awards – Best Cinematography: Hoyte van Hoytema; Best Original Score: Thomas Newman; Best Original Song: "Writing's on the Wall" (Sam Smith & Jimmy Napes); Best Visual Effects: Steve Begg & Chris Corbould; Best Art Direction and Production Design: Dennis Gassner; Best Film Editing: Lee Smith; Best Sound (Editing and Mixing): Per Hallberg, Karen Baker Landers, Scott Millan, Gregg Rudloff & Stuart Wilson
Saturn Awards – Best Action or Adventure Film
Empire Awards – Best British Film; Best Thriller
Teen Choice Awards – Choice Movie: Action; Choice Movie Actress: Action (Léa Seydoux)
Influence
The opening sequence featured a Day of the Dead parade in Mexico City. At the time, no such parade took place in Mexico City; one year later, due to the interest in the film and the government's desire to promote pre-Hispanic Mexican culture, the federal and local authorities decided to organise an actual "Día de los Muertos" parade through Paseo de la Reforma and the Centro Histórico on 29 October 2016, which was attended by 250,000 people.http://noticieros.televisa.com/fotos/mexico/2016-10-29/desfile-dia-muertos-cdmx.photo-1/
Future
A sequel to Spectre began development in spring 2016. Sam Mendes has stated he will not return to direct the next film in the series. Daniel Craig is unsure about returning as Bond, but producer Barbara Broccoli prefers to keep the actor, saying "Daniel is our secret weapon, he has just brought so much to the role". Christoph Waltz has signed on for two more films, but his return depends on whether or not Craig will again portray Bond. In October 2016 Craig stated that he may indeed return for another film, saying, "As far as I'm concerned, I've got the best job in the world. I'll keep doing it as long as I still get a kick out of it. If I were to stop doing it, I would miss it terribly."
Notes and references
Notes
References
External links
– official site
Category:2015 films
Category:2010s action thriller films
Category:2010s spy films
Category:American films
Category:American action thriller films
Category:American sequel films
Category:British films
Category:British thriller films
Category:British action thriller films
Category:British sequel films
Category:Columbia Pictures films
Category:English-language films
Category:Film scores by Thomas Newman
Category:Films about revenge
Category:Films about security and surveillance
Category:Films about terrorism
Category:Films directed by Sam Mendes
Category:Films produced by Barbara Broccoli
Category:Films produced by Michael G. Wilson
Category:Films set in Africa
Category:Films set in Austria
Category:Films set in London
Category:Films set in Mexico City
Category:Films set in Tangier
Category:Films set in Rome
Category:Films set in Tokyo
Category:Films set in Vatican City
Category:Films shot in Austria
Category:Films shot in England
Category:Films shot in London
Category:Films shot in Mexico City
Category:Films shot in Monaco
Category:Films shot in Morocco
Category:Films shot in Rome
Category:Films that won the Best Original Song Academy Award
Category:IMAX films
Category:James Bond films
Category:Metro-Goldwyn-Mayer films
Category:Patricide in fiction
Category:Films shot at Pinewood Studios
Category:Screenplays by John Logan
Category:Spectre
Apollo
Apollo (Attic, Ionic, and Homeric Greek: Apollōn; Doric: Apellōn; Arcadocypriot: Apeilōn; Aeolic: Aploun) is one of the most important and complex of the Olympian deities in classical Greek and Roman religion and Greek and Roman mythology. The ideal of the kouros (a beardless, athletic youth), Apollo has been variously recognized as a god of music, truth and prophecy, healing, the sun and light, plague, poetry, and more. Apollo is the son of Zeus and Leto, and has a twin sister, the chaste huntress Artemis. Apollo is known in Greek-influenced Etruscan mythology as Apulu.Krauskopf, I. 2006. "The Grave and Beyond." The Religion of the Etruscans. edited by N. de Grummond and E. Simon. Austin: University of Texas Press. p. vii, p. 73-75.
As the patron of Delphi (Pythian Apollo), Apollo was an oracular god—the prophetic deity of the Delphic Oracle. Medicine and healing are associated with Apollo, whether through the god himself or mediated through his son Asclepius, yet Apollo was also seen as a god who could bring ill-health and deadly plague. Amongst the god's custodial charges, Apollo became associated with dominion over colonists, and as the patron defender of herds and flocks. As the leader of the Muses (Apollon Musegetes) and director of their choir, Apollo functioned as the patron god of music and poetry. Hermes created the lyre for him, and the instrument became a common attribute of Apollo. Hymns sung to Apollo were called paeans.
In Hellenistic times, especially during the 3rd century BCE, as Apollo Helios he became identified among Greeks with Helios, Titan god of the sun, and his sister Artemis similarly equated with Selene, Titan goddess of the moon.For the iconography of the Alexander–Helios type, see H. Hoffmann, 1963. "Helios", in Journal of the American Research Center in Egypt 2, pp. 117–23; cf. Yalouris 1980, no. 42. In Latin texts, on the other hand, Joseph Fontenrose declared himself unable to find any conflation of Apollo with Sol among the Augustan poets of the 1st century, not even in the conjurations of Aeneas and Latinus in Aeneid XII (161–215).Joseph Fontenrose, "Apollo and Sol in the Latin poets of the first century BC", Transactions of the American Philological Association 30 (1939), pp 439–55; "Apollo and the Sun-God in Ovid", American Journal of Philology 61 (1940) pp 429–44; and "Apollo and Sol in the Oaths of Aeneas and Latinus" Classical Philology 38.2 (April 1943), pp. 137–138. Apollo and Helios/Sol remained separate beings in literary and mythological texts until the 3rd century CE.
Etymology
thumb|Apollo seated with lyre. Porphyry and marble, 2nd century AD. Farnese collection, Naples, Italy.
The name Apollo—unlike the related older name Paean—is generally not found in the Linear B (Mycenean Greek) texts, although there is a possible attestation in the lacunose form ]pe-rjo-[ (Linear B: ]-[) on the KN E 842 tablet.R. S. P. Beekes, Etymological Dictionary of Greek, Brill, 2009, p. 118..
The etymology of the name is uncertain. The spelling ( in Classical Attic) had almost superseded all other forms by the beginning of the common era, but the Doric form Apellon (), is more archaic, derived from an earlier . It probably is a cognate to the Doric month Apellaios (), and the offerings apellaia () at the initiation of the young men during the family-festival apellai ()."The young men became grown-up kouroi, and Apollon was the "megistos kouros" ( The Great Kouros) : Jane Ellen Harrison (2010): Themis: A study to the Social origins of Greek Religion Cambridge University Press. pp. 439–441, ISBN 1108009492Visible Religion. Volume IV–V. Approaches to Iconology. Leiden, E. J. Brill, 1985 p. 143
According to some scholars the words are derived from the Doric word apella (), which originally meant "wall," "fence for animals" and later "assembly within the limits of the square."The word usually appears in plural: Hesychius: (apellai), ("folds"), ("assemblies"), ("elections"): Nilsson, Vol. I, p. 556Doric Greek verb: ("to assemble"), and the festival (apellai), which surely belonged to Apollo. Nilsson, Vol I, p. 556. Apella () is the name of the popular assembly in Sparta, corresponding to the ecclesia (). R. S. P. Beekes rejected the connection of the theonym with the noun apellai and suggested a Pre-Greek proto-form *Apalyun.Beekes, 2009, pp. 115 and 118–119.
Several instances of popular etymology are attested from ancient authors. Thus, the Greeks most often associated Apollo's name with the Greek verb (apollymi), "to destroy". Plato in Cratylus connects the name with (apolysis), "redemption", with (apolousis), "purification", and with ([h]aploun), "simple",The suggestion is repeated by Plutarch in Moralia in the sense of "unity". in particular in reference to the Thessalian form of the name, , and finally with (aeiballon), "ever-shooting". Hesychius connects the name Apollo with the Doric (apella), which means "assembly", so that Apollo would be the god of political life, and he also gives the explanation (sekos), "fold", in which case Apollo would be the god of flocks and herds. In the Ancient Macedonian language (pella) means "stone," and some toponyms may be derived from this word: (Pella,R. S. P. Beekes, Etymological Dictionary of Greek, Brill, 2009, p. 1168. the capital of Ancient Macedonia) and (Pellēnē/Pallene).
A number of non-Greek etymologies have been suggested for the name,Martin Nilsson, Die Geschichte der Griechische Religion, vol. I (C. H. Beck), 1955:555–564. The Hittite form Apaliunas (d) is attested in the Manapa-Tarhunta letter,The reading of Apaliunas and the possible identification with Apollo is due to Emil Forrer (1931). It was doubted by Kretschmer, Glotta XXIV, p. 250. Martin Nilsson (1967), Vol I, p. 559 perhaps related to Hurrian (and certainly the Etruscan) Aplu, a god of plague, in turn likely from Akkadian Aplu Enlil meaning simply "the son of Enlil", a title that was given to the god Nergal, who was linked to Shamash, Babylonian god of the sun.de Grummond, Nancy Thomson (2006) Etruscan Myth, Sacred History, and Legend. (Philadelphia, Pennsylvania: University of Pennsylvania Museum of Archaeology and Anthropology); Mackenzie, Donald A. (2005) Myths of Babylonia and Assyria (Gutenberg)
The role of Apollo as god of plague is evident in the invocation of Apollo Smintheus ("mouse Apollo") by Chryses, the Trojan priest of Apollo, with the purpose of sending a plague against the Greeks (the reasoning behind a god of the plague becoming a god of healing is of course apotropaic, meaning that the god responsible for bringing the plague must be appeased in order to remove the plague).
The Hittite testimony reflects an early form , which may also be surmised from comparison of Cypriot with Doric .
A Luwian etymology suggested for Apaliunas makes Apollo "The One of Entrapment", perhaps in the sense of "Hunter".
Greco-Roman epithets
Apollo's chief epithet was Phoebus ( ; , Phoibos), literally "bright".R. S. P. Beekes, Etymological Dictionary of Greek, Brill, 2009, p. 1582. It was very commonly used by both the Greeks and Romans for Apollo's role as the god of light. Like other Greek deities, he had a number of others applied to him, reflecting the variety of roles, duties, and aspects ascribed to the god. However, while Apollo has a great number of appellations in Greek myth, only a few occur in Latin literature.
Sun
Aegletes ( ; Αἰγλήτης, Aiglētēs), from , "light of the sun"Apollonius of Rhodes, iv. 1730; Pseudo-Apollodorus, Biblioteca, i. 9. § 26
Helius ( ; , Helios), literally "sun"
Lyceus ( ; , Lykeios, from Proto-Greek *) "light". The meaning of the epithet "Lyceus" later became associated with Apollo's mother Leto, who was the patron goddess of Lycia () and who was identified with the wolf ().Aelian, On the Nature of Animals 4. 4 (A.F. Scholfield, tr.)
Phanaeus ( ; , Phanaios), literally "giving or bringing light"
Phoebus ( ; , Phoibos), literally "bright", his most commonly used epithet by both the Greeks and Romans
Sol (Roman) ( ), "sun" in Latin
Wolf
Lycegenes ( ; , Lukēgenēs), literally "born of a wolf" or "born of Lycia"
Lycoctonus ( ; , Lykoktonos), from , "wolf", and , "to kill"
Origin and birth
Apollo's birthplace was Mount Cynthus on the island of Delos.
Cynthius ( ; , Kunthios), literally "Cynthian"
Cynthogenes ( ; , Kynthogenēs), literally "born of Cynthus"
Delius ( ; Δήλιος, Delios), literally "Delian"
Didymaeus ( ; , Didymaios) from δίδυμος, "twin") as Artemis' twin
250px|thumb|Partial view of the temple of Apollo Epikurios (healer) at Bassae in southern Greece
Place of worship
Delphi and Actium were his primary places of worship.Ovid, Metamorphoses xiii. 715Strabo, x. p. 451
Acraephius ( ; , Akraiphios, literally "Acraephian") or Acraephiaeus ( ; , Akraiphiaios), "Acraephian", from the Boeotian town of Acraephia (), reputedly founded by his son Acraepheus
Actiacus ( ; , Aktiakos), literally "Actian", after Actium ()
Delphinius ( ; , Delphinios), literally "Delphic", after Delphi (Δελφοί). An etiology in the Homeric Hymns associated this with dolphins.
Pythius ( ; , Puthios, from Πυθώ, Pythō), from the region around Delphi
Smintheus ( ; , Smintheus), "Sminthian"—that is, "of the town of Sminthos or Sminthe" near the Troad town of HamaxitusThe epithet "Smintheus" has historically been confused with , "mouse", in association with Apollo's role as a god of disease
thumb|250px|Temple of the Delians at Delos, dedicated to Apollo (478 BC). 19th-century pen-and-wash restoration.
thumb|250px|Temple of Apollo Smintheus at Çanakkale, Turkey
Healing and disease
Acesius ( ; , Akesios), from , "healing". Acesius was the epithet of Apollo worshipped in Elis, where he had a temple in the agora. At the Perseus Project.
Acestor ( ; , Akestōr), literally "healer"
Culicarius (Roman) ( ), from Latin culicārius, "of midges"
Iatrus ( ; , Iātros), literally "physician"Euripides, Andromache 901
Medicus (Roman) ( ), "physician" in Latin. A temple was dedicated to Apollo Medicus at Rome, probably next to the temple of Bellona.
Paean ( ; , Paiān), from , "to touch"
Parnopius ( ; , Parnopios), from , "locust"
Founder and protector
Agyieus ( ; , Aguīeus), from , "street", for his role in protecting roads and homes
Alexicacus ( ; , Alexikakos), literally "warding off evil"
Apotropaeus ( ; , Apotropaios), from , "to avert"
Archegetes ( ; , Arkhēgetēs), literally "founder"
Averruncus (Roman)( ; from Latin āverruncare), "to avert"
Clarius ( ; , Klārios), from Doric , "allotted lot"
Epicurius ( ; , Epikourios), from , "to aid"
Genetor ( ; , Genetōr), literally "ancestor"
Nomius ( ; , Nomios), literally "pastoral"
Nymphegetes ( ; , Numphēgetēs), from , "Nymph", and , "leader", for his role as a protector of shepherds and pastoral life
Prophecy and truth
Coelispex (Roman) ( ), from Latin coelum, "sky", and specere "to look at"
Iatromantis ( ; , Iātromantis,) from , "physician", and , "prophet", referring to his role as a god both of healing and of prophecy
Leschenorius ( ; , Leskhēnorios), from , "converser"
Loxias ( ; , Loxias), from , "to say", historically associated with , "ambiguous"
Manticus ( ; , Mantikos), literally "prophetic"
Music and arts
Musagetes ( ; Doric , Mousāgetās), from , "Muse", and "leader".
Musegetes ( ; , Mousēgetēs), as the preceding
Archery
Aphetor ( ; , Aphētōr), from , "to let loose"
Aphetorus ( ; , Aphētoros), as the preceding
Arcitenens (Roman) ( ), literally "bow-carrying"
Argyrotoxus ( ; , Argyrotoxos), literally "with silver bow"
Hecaërgus ( ; , Hekaergos), literally "far-shooting"
Hecebolus ( ; , Hekēbolos), "far-shooting"
Ismenius ( ; , Ismēnios), literally "of Ismenus", after Ismenus, the son of Amphion and Niobe, whom he struck with an arrow
Celtic epithets and cult titles
Apollo was worshipped throughout the Roman Empire. In the traditionally Celtic lands he was most often seen as a healing and sun god. He was often equated with Celtic gods of similar character.Miranda J. Green, Dictionary of Celtic Myth and Legend, Thames and Hudson Ltd, 1997
Apollo Atepomarus ("the great horseman" or "possessing a great horse"). Apollo was worshipped at Mauvières (Indre). Horses were, in the Celtic world, closely linked to the sun.Corpus Inscriptionum Latinarum XIII, 1863–1986; A. Ross, Pagan Celtic Britain, 1967; M.J. Green, The Gods of the Celts, 1986, London
Apollo Belenus ('bright' or 'brilliant'). This epithet was given to Apollo in parts of Gaul, Northern Italy and Noricum (part of modern Austria). Apollo Belenus was a healing and sun god.J. Zwicker, Fontes Historiae Religionis Celticae, 1934–36, Berlin; Corpus Inscriptionum Latinarum V, XI, XII, XIII; J. Gourcest, "Le culte de Belenos en Provence occidentale et en Gaule", Ogam 6.6 (1954:257–262); E. Thevonot, "Le cheval sacre dans la Gaule de l'Est", Revue archeologique de l'Est et du Centre-Est (vol 2), 1951; [], "Temoignages du culte de l'Apollon gaulois dans l'Helvetie romaine", Revue celtique (vol 51), 1934.
Apollo Cunomaglus ('hound lord'). A title given to Apollo at a shrine at Nettleton Shrub, Wiltshire. May have been a god of healing. Cunomaglus himself may originally have been an independent healing god.W.J. Wedlake, The Excavation of the Shrine of Apollo at Nettleton, Wiltshire, 1956–1971, Society of Antiquaries of London, 1982.
Apollo Grannus. Grannus was a healing spring god, later equated with Apollo.M. Szabo, The Celtic Heritage in Hungary (Budapest 1971)Divinites et sanctuaires de la Gaule, E. Thevonat, 1968, ParisLa religion des Celtes, J. de Vries, 1963, Paris
Apollo Maponus. A god known from inscriptions in Britain. This may be a local fusion of Apollo and Maponus.
Apollo Moritasgus ('masses of sea water'). An epithet for Apollo at Alesia, where he was worshipped as god of healing and, possibly, of physicians.J. Le Gall, Alesia, archeologie et histoire (Paris 1963).
Apollo Vindonnus ('clear light'). Apollo Vindonnus had a temple at Essarois, near Châtillon-sur-Seine in present-day Burgundy. He was a god of healing, especially of the eyes.
Apollo Virotutis ('benefactor of mankind?'). Apollo Virotutis was worshipped, among other places, at Fins d'Annecy (Haute-Savoie) and at Jublains (Maine-et-Loire).Corpus Inscriptionum Latinarum XIII
Origins
thumb|250px|The Omphalos in the Museum of Delphi
The cult centers of Apollo in Greece, Delphi and Delos, date from the 8th century BCE. The Delos sanctuary was primarily dedicated to Artemis, Apollo's twin sister. At Delphi, Apollo was venerated as the slayer of Pytho. For the Greeks, Apollo was all the Gods in one and through the centuries he acquired different functions which could originate from different gods. In archaic Greece he was the prophet, the oracular god who in older times was connected with "healing". In classical Greece he was the god of light and of music, but in popular religion he had a strong function to keep away evil.Martin Nilsson (1967)".Die Geschicte der Giechischen Religion.Vol I".C.F.Beck Verlag.Munchen. p 529 Walter BurkertBurkert, Walter. Greek Religion, 1985:144. discerned three components in the prehistory of Apollo worship, which he termed "a Dorian-northwest Greek component, a Cretan-Minoan component, and a Syro-Hittite component."
From his eastern origin Apollo brought the art of inspection of "symbols and omina" (σημεία και τέρατα : semeia kai terata), and of the observation of the omens of the days. The inspiration oracular-cult was probably introduced from Anatolia. The ritualism belonged to Apollo from the beginning. The Greeks created the legalism, the supervision of the orders of the gods, and the demand for moderation and harmony. Apollo became the god of shining youth, the protector of music, spiritual-life, moderation and perceptible order. The improvement of the old Anatolian god, and his elevation to an intellectual sphere, may be considered an achievement of the Greek people.Martin Nilsson. Die Geschichte der Griechische Religion Vol I, pp. 563–564
Healer and god-protector from evil
The function of Apollo as a "healer" is connected with Paean, the physician of the gods in the Iliad, who seems to come from a more primitive religion.Paieon puts pain-relieving medicines on the wounds of Pluton and Ares (Ilias E401). This art is related with Egypt (Odyssey D232): M. Nilsson Vol I, p. 543 Paeon is probably connected with the Mycenaean pa-ja-wo-ne (Linear B), At Google Books. but this is not certain. He did not have a separate cult, but he was the personification of the holy magic-song sung by the magicians that was supposed to cure disease. Later the Greeks knew the original meaning of the relevant song, the "paean". The magicians were also called "seer-doctors", and they used an ecstatic prophetic art of the kind used by the god Apollo at the oracles.The paean was sung to stop the plagues and the diseases. Proklos: Chrestom from Photios Bibl. code. 239, p. 321: Martin Nilsson. Die Geschichte der Griechischen Religion. Vol I, p. 543
In the Iliad, Apollo is the healer among the gods, but he is also the bringer of disease and death with his arrows, similar to the function of the Vedic god of disease Rudra."The conception that the diseases come from invisible shots sent by magicians or supernatural beings is common in primitive people and also in European folklore. In North-Europe they speak of the "Elf-shots". In Sweden, where the Lapps were called magicians, they speak of the "Lappen-shots". Martin Nilsson (1967). Vol I, p. 541 He sends a plague to the Achaeans. The god who sends a disease can also prevent it; therefore, when it stops, they make a purifying ceremony and offer him a hecatomb to ward off evil. When the god has been appeased through his priest, they pray and call upon their own god with a song, the paean.Ilias A 314. Martin Nilsson (1967). Vol I, p. 543
Some common epithets of Apollo as a healer are "paion" (literally "healer" or "helper"),Harper's Dictionary of Classical Antiquity "epikourios" ("help"), "oulios" ("healed wound", also a "scar")Perseus.tufts.edu and "loimios" ("plague"). In classical times, his strong function in popular religion was to keep away evil, and he was therefore called "apotropaios" ("divert", "deter", "avert") and "alexikakos" ("defend from evil").Pausanias VIII 41, 8-IV 34, 7-Sittig. Nom P. 48. f-Aristoph. Vesp. V. 61-Paus. I 3, 4. Martin Nilsson (1967) Vol I, p. 540, 544 In later writers, the word, usually spelled "Paean", becomes a mere epithet of Apollo in his capacity as a god of healing.
Homer illustrated both Paeon the god and the song of apotropaic thanksgiving or triumph.
Such songs were originally addressed to Apollo, and afterwards to other gods: to Dionysus, to Apollo Helios, to Apollo's son Asclepius the healer. About the 4th century BCE, the paean became merely a formula of adulation; its object was either to implore protection against disease and misfortune, or to offer thanks after such protection had been rendered. It was in this way that Apollo had become recognised as the god of music. Apollo's role as the slayer of the Python led to his association with battle and victory; hence it became the Roman custom for a paean to be sung by an army on the march and before entering into battle, when a fleet left the harbour, and also after a victory had been won.
Dorian origin
left|thumb|Apollo Victorious over the Python by the Florentine Pietro Francavilla (dated 1591) depicting Apollo's first triumph, when he slew with his bow and arrows the serpent Python, which lies dead at his feet (The Walters Art Museum).
The connection with Dorians and their initiation festival apellai is reinforced by the month Apellaios in northwest Greek calendars,Graf, Apollo, pp. 104–113; Burkert also notes in this context Archilochus Fr. 94. but it can explain only the Doric type of the name, which is connected with the Ancient Macedonian word "pella" (Pella), stone. Stones played an important part in the cult of the god, especially in the oracular shrine of Delphi (Omphalos).Compare: Baetylus. In Semitic: sacred stoneMartin Nilsson (1967). Vol I. p. 556 The "Homeric hymn" represents Apollo as a Northern intruder. His arrival must have occurred during the "Dark Ages" that followed the destruction of the Mycenaean civilization, and his conflict with Gaia (Mother Earth) was represented by the legend of his slaying her daughter the serpent Python.Herbert W. Parke (1956). The Delphic Oracle. Vol. I, p. 3
The earth deity had power over the ghostly world, and it is believed that she was the deity behind the oracle.Lewis Farnell (1909). The Cults of the Greek States. Clarendon Press. VIII. pp. 8–10 The older tales mentioned two dragons who were perhaps intentionally conflated: a female dragon named Delphyne ("womb"), who is obviously connected with Delphi and Apollo Delphinios, and a male serpent Typhon ("to smoke"), the adversary of Zeus in the Titanomachy, whom the narrators confused with Python."Many pictures show the serpent Python living in amity with Apollo and guarding the Omphalos. Karl Kerenyi (1951). ed. 1980: The Gods of the Greeks, pp. 36–37"In a Pompeian fresco Python is lying peacefully on the ground and the priests with the sacred double axe in their hand bring the bull (bouphronion). Jane E. Harrison (1912): Themis. A Study of the Social Origins of Greek Religion. Cambridge University Press. pp. 423–424 Python was the good daemon (ἀγαθὸς δαίμων) of the temple as it appears in Minoan religion,In Minoan religion the serpent is the protector of the household (underground stored corn). Also in Greek religion, "snake of the house" in the temple of Athena at Acropolis, etc., and in Greek folklore. Martin Nilsson, Vol. I, pp. 213–214 but she was represented as a dragon, as often happens in Northern European folklore as well as in the East.Nordic sagas. Hittite myth of Illuyankas. Also in the Bible: Leviathan. W. Porzig (1930). Illuyankas and Typhon. Kleinasiatische Forschung, pp. 379–386
Apollo and his sister Artemis can bring death with their arrows. The conception that diseases and death come from invisible shots sent by supernatural beings or magicians is common in Germanic and Norse mythology. In Greek mythology Artemis was the leader ("hegemon") of the nymphs, who had functions similar to those of the Nordic elves.Martin Nilsson (1967), Vol I, pp. 499–500 The "elf-shot" originally indicated disease or death attributed to the elves, but it was later attested as denoting stone arrow-heads which were used by witches to harm people, and also for healing rituals.Hall, Alaric. 2005. 'Getting Shot of Elves: Healing, Witchcraft and Fairies in the Scottish Witchcraft Trials', 116 (2005), pp. 19–36.
The Vedic Rudra has some functions similar to those of Apollo. The terrible god is called "The Archer", and the bow is also an attribute of Shiva.For this as a name of Shiva see: Apte, p. 910. Rudra could bring diseases with his arrows, but he was able to free people of them, and his alternative aspect Shiva is a healer and physician god.For the association between Rudra and disease, with Rigvedic references, see: Bhandarkar, p. 146. However, the Indo-European component of Apollo does not explain his strong relation to omens, exorcisms, and the oracular cult.
Minoan origin
thumb|250px|Ornamented golden Minoan labrys
It seems that an oracular cult existed in Delphi from the Mycenaean ages.Odyssey 8.80 In historical times, the priests of Delphi were called Labryaden, "the double-axe men", which indicates Minoan origin. The double-axe, labrys, was the holy symbol of the Cretan labyrinth.Huxley (1975). Cretan Paewones. Roman and Byzantine Studies, pp. 129–134H.G. Wunderlich. The Secret of Crete. Souvenir Press Ltd. London, p. 319 The Homeric hymn adds that Apollo appeared as a dolphin and carried Cretan priests to Delphi, where they evidently transferred their religious practices. Apollo Delphinios was a sea-god especially worshiped in Crete and in the islands, and his name indicates his connection with DelphiMartin Nilsson (1967). Vol I, p. 529 and the holy serpent Delphyne ("womb"). Apollo's sister Artemis, who was the Greek goddess of hunting, is identified with Britomartis (Diktynna), the Minoan "Mistress of the animals". In her earliest depictions she is accompanied by the "Master of the animals", a male god of hunting who had the bow as his attribute. His original name is unknown, but it seems that he was absorbed by the more popular Apollo, who stood by the virgin "Mistress of the Animals", becoming her brother.
The old oracles in Delphi seem to be connected with a local tradition of the priesthood, and there is no clear evidence that a kind of inspiration-prophecy existed in the temple. This led some scholars to the conclusion that the Pythia carried on the rituals in a consistent procedure through many centuries, according to the local tradition. In that regard, the mythical seeress Sibyl, of Anatolian origin, with her ecstatic art, looks unrelated to the oracle itself.Hugh Bowden (2005). Classical Athens and the Delphic Oracle, pp. 17–18 However, Greek tradition refers to the existence of vapours and the chewing of laurel leaves, which seem to be confirmed by recent studies.
Plato describes the priestesses of Delphi and Dodona as frenzied women, obsessed by "mania" ("frenzy"), a Greek word he connected with mantis ("prophet"). Frenzied women like the Sibyls, from whose lips the god speaks, are recorded in the Near East, at Mari, in the second millennium BC.Walter Burkert (1985). The Greek Religion, p. 116 Although Crete had contacts with Mari from 2000 BC,F. Schachermeyer (1964), p. 128 there is no evidence that the ecstatic prophetic art existed during the Minoan and Mycenaean ages. It is more probable that this art was introduced later from Anatolia and regenerated an existing oracular cult that was local to Delphi and dormant in several areas of Greece.Martin Nilsson (1967). Vol I, pp. 543–545
Anatolian origin
thumb|250px|Illustration of a coin of Apollo Agyieus from Ambracia
A non-Greek origin of Apollo has long been assumed in scholarship. The name of Apollo's mother Leto is of Lydian origin, and she was worshipped on the coasts of Asia Minor. The inspirational oracular cult was probably introduced into Greece from Anatolia, which is the origin of the Sibyl and where some of the oldest oracular shrines existed. Omens, symbols, purifications, and exorcisms appear in old Assyro-Babylonian texts, and these rituals spread into the empire of the Hittites. A Hittite text mentions that the king invited a Babylonian priestess for a certain "purification".
A similar story is mentioned by Plutarch. He writes that the Cretan seer Epimenides purified Athens after the pollution brought by the Alcmeonidae, and that the seer's expertise in sacrifices and reform of funeral practices was of great help to Solon in his reform of the Athenian state.Plutarch, Life of Solon, 12; Aristotle, Ath. Pol. 1. The story indicates that Epimenides was probably heir to the shamanic religions of Asia, and shows, together with the Homeric hymn, that Crete retained a religion of its own up to historical times. It seems that these rituals were dormant in Greece, and they were reinforced when the Greeks migrated to Anatolia.
Homer pictures Apollo on the side of the Trojans, fighting against the Achaeans, during the Trojan War. He is pictured as a terrible god, less trusted by the Greeks than other gods. The god seems to be related to Appaliunas, a tutelary god of Wilusa (Troy) in Asia Minor, but the word is not complete.Paul Kretschmer (1936). Glotta XXIV, p. 250. Martin Nilsson (1967). Vol I, p. 559. The stones found in front of the gates of Homeric Troy were the symbols of Apollo. The Greeks gave him the name agyieus as the protector god of public places and houses who wards off evil, and his symbol was a tapered stone or column.Martin Nilsson, Die Geschichte der Griechische Religion. vol. I (C. H. Beck), 1955:563f. However, while Greek festivals were usually celebrated at the full moon, all the feasts of Apollo were celebrated on the seventh day of the month, and the emphasis given to that day (sibutu) indicates a Babylonian origin.Martin Nilsson (1967). Vol I, p. 561.
The Late Bronze Age (from 1700 to 1200 BCE) Hittite and Hurrian Aplu was a god of plague, invoked during plague years. Here we have an apotropaic situation, where a god originally bringing the plague was invoked to end it. Aplu, meaning "son of", was a title given to the god Nergal, who was linked to the Babylonian sun god Shamash. Homer interprets Apollo as a terrible god who brings death and disease with his arrows, but who can also heal, possessing a magic art that separates him from the other Greek gods.Martin Nilsson (1967). Vol I. pp. 559–560. In the Iliad, his priest prays to Apollo Smintheus,"You Apollo Smintheus, let my tears become your arrows against the Danaans, for revenge". Iliad 1.33 (A 33). the mouse god who retains an older agricultural function as the protector from field rats.An ancient aetiological myth connects sminthos with mouse and suggests Cretan origin. Apollo is the mouse-god (Strabo 13.1.48). "Sminthia" in several areas of Greece. In Rhodes (Lindos) they belong to Apollo and Dionysos, who have destroyed the rats that were swallowing the grapes. Martin Nilsson (1967). pp. 534–535. All these functions, including the function of the healer-god Paean, who seems to have Mycenaean origin, are fused in the cult of Apollo.
Oracular cult
thumb|250px|Columns of the Temple of Apollo at Delphi, Greece
Unusually among the Olympic deities, Apollo had two cult sites that had widespread influence: Delos and Delphi. In cult practice, Delian Apollo and Pythian Apollo (the Apollo of Delphi) were so distinct that they might both have shrines in the same locality.Burkert 1985:143. Apollo's cult was already fully established when written sources commenced, about 650 BCE. Apollo became extremely important to the Greek world as an oracular deity in the archaic period, and the frequency of theophoric names such as Apollodorus or Apollonios and cities named Apollonia testify to his popularity. Oracular sanctuaries to Apollo were established in other sites. In the 2nd and 3rd century CE, those at Didyma and Clarus pronounced the so-called "theological oracles", in which Apollo confirms that all deities are aspects or servants of an all-encompassing, highest deity. "In the 3rd century, Apollo fell silent. Julian the Apostate (361–363) tried to revive the Delphic oracle, but failed."
Oracular shrines
thumb|250px|Delos lions
Apollo had a famous oracle in Delphi, and other notable ones in Clarus and Branchidae. His oracular shrine in Abae in Phocis, where he bore the toponymic epithet Abaeus (, Apollon Abaios), was important enough to be consulted by Croesus.Herodotus, 1.46.
His oracular shrines include:
Abae in Phocis.
Bassae in the Peloponnese.
At Clarus, on the west coast of Asia Minor; as at Delphi, a holy spring gave off a pneuma, from which the priests drank.
In Corinth, the Oracle of Corinth came from the town of Tenea, from prisoners supposedly taken in the Trojan War.
At Khyrse, in Troad, the temple was built for Apollo Smintheus.
In Delos, there was an oracle to the Delian Apollo during summer. The Hieron (Sanctuary) of Apollo, adjacent to the Sacred Lake, was the place where the god was said to have been born.
In Delphi, the Pythia became filled with the pneuma of Apollo, said to come from a spring inside the Adyton.
In Didyma, an oracle on the coast of Anatolia, south west of Lydian (Luwian) Sardis, where priests from the lineage of the Branchidae received inspiration by drinking from a healing spring located in the temple. It was believed to have been founded by Branchus, son or lover of Apollo.
In Hierapolis Bambyce, Syria (modern Manbij), according to the treatise De Dea Syria, the sanctuary of the Syrian Goddess contained a robed and bearded image of Apollo. Divination was based on spontaneous movements of this image.Lucian (attrib.), De Dea Syria 35–37.
At Patara, in Lycia, there was a seasonal winter oracle of Apollo, said to have been the place where the god went from Delos. As at Delphi, the oracle at Patara was a woman.
In Segesta in Sicily.
Oracles were also given by sons of Apollo.
In Oropus, north of Athens, the oracle Amphiaraus was said to be the son of Apollo; Oropus also had a sacred spring.
In Lebadea, east of Delphi, Trophonius, another son of Apollo, killed his brother and fled to the cave where he was afterwards also consulted as an oracle.
Temples of Apollo
Many temples dedicated to Apollo were built in Greece and in the Greek colonies, and they show the spread of the cult of Apollo and the evolution of Greek architecture, which was mostly based on the rightness of form and on mathematical relations. Some of the earliest temples, especially in Crete, do not belong to any Greek order. It seems that the first peripteral temples were rectangular wooden structures. The different wooden elements were considered divine, and their forms were preserved in the marble or stone elements of the temples of the Doric order. The Greeks used standard types because they believed that the world of objects was a series of typical forms which could be represented in several instances. The temples had to be canonic, and the architects were trying to achieve esthetic perfection."To know what a thing is, we must know the look of it": Rhys Carpenter: The Esthetic Basis of Greek Art. Indiana University Press. p. 108 From the earliest times there were certain rules strictly observed in rectangular peripteral and prostyle buildings. The first buildings were built narrow in order to hold the roof, and when the dimensions changed, certain mathematical relations became necessary in order to keep the original forms. This probably influenced the theory of numbers of Pythagoras, who believed that behind the appearance of things there was the permanent principle of mathematics.C. M. Bowra (1957). The Greek Experience, p. 166.
The Doric order dominated during the 6th and the 5th century BC, but there was a mathematical problem regarding the position of the triglyphs, which could not be solved without changing the original forms. The order was almost abandoned for the Ionic order, but the Ionic capital also posed an insoluble problem at the corner of a temple. Both orders were gradually abandoned for the Corinthian order during the Hellenistic age and under Rome.
The most important temples are:
Greek temples
Thebes, Greece: The oldest temple, probably dedicated to Apollo Ismenius, was built in the 9th century BC. It seems that it was a curvilinear building. The Doric temple was built in the early 7th century BC, but only some small parts have been found.William Dinsmoor (1950), The Architecture of Ancient Greece, p. 218, ISBN 0-8196-0283-3 A festival called Daphnephoria was celebrated every ninth year in honour of Apollo Ismenius (or Galaxius). The people held laurel branches (daphnai), and at the head of the procession walked a youth (chosen priest of Apollo), who was called "daphnephoros".William Smith. A Dictionary of Greek and Roman Antiquities, John Murray, London, 1875. p. 384
Eretria: According to the Homeric hymn to Apollo, the god arrived at the plain, seeking a location to establish his oracle. The first temple of Apollo Daphnephoros, "Apollo, laurel-bearer", or "carrying off Daphne", is dated to 800 BC. The temple was a curvilinear hecatompedon (a hundred feet long). In a smaller building were kept the bases of the laurel branches which were used for the first building. Another temple, probably peripteral, was built in the 7th century BC, with an inner row of wooden columns over its Geometric predecessor. It was rebuilt as a peripteral temple around 510 BC, with the stylobate measuring 21.00 x 43.00 m. The number of pteron columns was 6 x 14.Hellenic Ministry of Culture, Temple of Apollo DaphnephorosRufus B. Richardson, "A Temple in Eretria", The American Journal of Archaeology and of the History of the Fine Arts, 10.3 (July – September 1895:326–337)
Dreros (Crete): The temple of Apollo Delphinios dates from the 7th century BC, or probably from the middle of the 8th century BC. According to the legend, Apollo appeared as a dolphin and carried Cretan priests to the port of Delphi. The dimensions of the plan are 10.70 x 24.00 m, and the building was not peripteral. It contains column-bases of the Minoan type, which may be considered the predecessors of the Doric columns.Robertson pp. 56 and 323
Gortyn (Crete): A temple of Pythian Apollo was built in the 7th century BC. The plan measured 19.00 x 16.70 m, and it was not peripteral. The walls were solid, made from limestone, and there was a single door on the east side.
Thermon (West Greece): The Doric temple of Apollo Thermios was built in the middle of the 7th century BC. It was built on an older curvilinear building dating perhaps from the 10th century BC, on which a peristyle was added. The temple was narrow, and the number of pteron columns (probably wooden) was 5 x 15. There was a single row of inner columns. It measures 12.13 x 38.23 m at the stylobate, which was made from stones.Spivey, p. 112
Napes (Lesbos): An Aeolic temple probably of Apollo Napaios was built in the 7th century BC. Some special capitals with floral ornament have been found, which are called Aeolic, and it seems that they were borrowed from the East.D.S Robertson(1945):A handbook of Greek and Roman architecture, Cambridge University Press pp. 324-329
Cyrene, Libya: The oldest Doric temple of Apollo was built in c. 600 BC. The number of pteron columns was 6 x 11, and it measures 16.75 x 30.05 m at the stylobate. There was a double row of sixteen inner columns on stylobates. The capitals were made from stone.
Naukratis: An Ionic temple was built in the early 6th century BC. Only some fragments have been found, and the earlier ones, made from limestone, are identified as among the oldest of the Ionic order.Robertson, p. 98
thumb|left|200px| Floor plan of the temple of Apollo, Corinth
Corinth: A Doric temple was built in the 6th century BC. The temple's stylobate measures 21.36 x 53.30 m, and the number of pteron columns was 6 x 15. There was a double row of inner columns. The style is similar to that of the Temple of the Alcmeonidae at Delphi.Robertson p. 87 The Corinthians were considered to be the inventors of the Doric order.
thumb|right|200px|Floor plan of the temple of Apollo, Syracuse
Syracuse, Sicily: A Doric temple was built at the beginning of the 6th century BC. The temple's stylobate measures 21.47 x 55.36 m and the number of pteron columns was 6 x 17. It was the first temple in the Greek west built completely out of stone. A second row of columns was added, obtaining the effect of an inner porch.Mertens 2006, pp. 104–109.
Selinus (Sicily): The Doric Temple C dates from 550 BC, and it was probably dedicated to Apollo. The temple's stylobate measures 10.48 x 41.63 m and the number of pteron columns was 6 x 17. There was a portico with a second row of columns, which is also attested for the temple at Syracuse.IG XIV 269
Delphi: The first temple dedicated to Apollo was built in the 7th century BC. According to the legend, it was a wooden structure made of laurel branches. The "Temple of the Alcmeonidae" was built in c. 513 BC, and it is the oldest Doric temple with significant marble elements. The temple's stylobate measures 21.65 x 58.00 m, and the number of pteron columns was 6 x 15.Temple of Apollo at Delphi, Ancient-Greece.org A festival similar to Apollo's festival at Thebes, Greece, was celebrated every nine years. A boy was sent to the temple, who walked on the sacred road and returned carrying a laurel branch (daphnephoros). The maidens participated with joyful songs.
Chios: An Ionic temple of Apollo Phanaios was built at the end of the 6th century BC. Only some small parts have been found, but the capitals had floral ornament.
Abae (Phocis). The temple was destroyed by the Persians in the invasion of Xerxes in 480 BCE, and later by the Boeotians. It was rebuilt by Hadrian. The oracle was in use from early Mycenaean times to the Roman period, and shows the continuity of Mycenaean and Classical Greek religion.See reports of the German Archaeological Institute in Archaeological Reports for 2008/9 43-45
Delos: A temple probably dedicated to Apollo, not peripteral, was built in the late 7th century BC, with a plan measuring 10.00 x 15.60 m. The Doric Great Temple of Apollo was built in c. 475 BC. The temple's stylobate measures 13.72 x 29.78 m, and the number of pteron columns was 6 x 13. Marble was extensively used.
Ambracia: A Doric peripteral temple dedicated to Apollo Pythios Sotir was built in 500 BC, and it lies at the centre of the Greek city of Arta. Only some parts have been found, and it seems that the temple was built on earlier sanctuaries dedicated to Apollo. The temple measures 20.75 x 44.00 m at the stylobate. The foundation which supported the statue of the god still exists.Ministry of Culture. Temple of Apollo Pythios Sotir
left|thumb|200px|Floor plan of the Temple of Apollo at Bassae
Bassae (Peloponnesus): A temple dedicated to Apollo Epikourios ("Apollo the helper") was built in 430 BC, and it was designed by Iktinos. It combined Doric and Ionic elements, and made the earliest use of a column with a Corinthian capital in the middle.Hellenic Ministry of Culture: The Temple of Epicurean Apollo. The temple is of a relatively modest size, with the stylobate measuring 14.5 x 38.3 metresTemple of Apollo Epicurius at Bassae, World Heritage Site. and containing a Doric peristyle of 6 x 15 columns. The roof left a central space open to admit light and air.
right|thumb|180px|Temple of Apollo, Didyma
Didyma (near Miletus): The gigantic Ionic temple of Apollo Didymaios was started around 540 BC. The construction ceased and was then restarted in 330 BC. The temple is dipteral, with an outer row of 10 x 21 columns, and it measures 28.90 x 80.75 m at the stylobate.Peter Schneider: Neue Funde vom archaischen Apollontempel in Didyma. In: Ernst-Ludwig Schwandner (ed.): Säule und Gebälk. Zu Struktur und Wandlungsprozeß griechisch-römischer Architektur. Bauforschungskolloquium in Berlin vom 16.-18. Juni 1994. Diskussionen zur Archäologischen Bauforschung
Clarus (near ancient Colophon): According to the legend, the famous seer Calchas, on his return from Troy, came to Clarus. He challenged the seer Mopsus and died when he lost.perseus tufts Clarus The Doric temple of Apollo Clarius was probably built in the 3rd century BC, and it was peripteral, with 6 x 11 columns. It was reconstructed at the end of the Hellenistic period, and later by the emperor Hadrian, but Pausanias claims that it was still incomplete in the 2nd century AD.Prophecy centre of Apollo Clarius
Hamaxitus (Troad): In the Iliad, Chryses, the priest of Apollo, addresses the god with the epithet Smintheus (Lord of Mice), related to the god's ancient role as bringer of disease (plague). Recent excavations indicate that the Hellenistic temple of Apollo Smintheus was constructed in 150–125 BC, but the symbol of the mouse god was used on coinage probably from the 4th century BC.Bresson (2007) 154-5, citing the excavation reports of Özgünel (2001). The temple measures 40.00 x 23.00 m at the stylobate, and the number of pteron columns was 8 x 14.Robertson p. 333
Etruscan and Roman temples
Veii (Etruria): The temple of Apollo was built in the late 6th century BC, and it indicates the spread of Apollo's cult (Aplu) in Etruria. There was a prostyle porch, which is called Tuscan, and a triple cella 18.50 m wide.Robertson pp. 200-201
Falerii Veteres (Etruria): A temple of Apollo was built probably in the 4th–3rd century BC. Parts of a terracotta capital and a terracotta base have been found. It seems that the Etruscan columns were derived from the archaic Doric. A cult of Apollo Soranus is attested by one inscription found near Falerii.Perseus tufts: Falerii Veteres
thumb|left|180px|Plan of the Temple of Apollo (Pompeii)
Pompeii (Italy): The cult of Apollo was widespread in the region of Campania from the 6th century BC. The temple was built in 120 BC, but its beginnings lie in the 6th century BC. It was reconstructed after an earthquake in AD 63. It demonstrates a mixing of styles which formed the basis of Roman architecture. The columns in front of the cella formed a Tuscan prostyle porch, and the cella is situated unusually far back. The peripteral colonnade of 48 Ionic columns was placed in such a way that the emphasis was given to the front side.Davidson CSA: Temple of Apollo, Pompeii
Rome: The temple of Apollo Sosianus and the temple of Apollo Medicus. The first temple building dates to 431 BC, and was dedicated to Apollo Medicus (the doctor), after a plague of 433 BC.Livy 4.25 It was rebuilt by Gaius Sosius, probably in 34 BC. Only three columns with Corinthian capitals exist today. It seems that the cult of Apollo had existed in this area since at least the mid-5th century BC.Livy 34.43
Rome: The temple of Apollo Palatinus was located on the Palatine hill within the sacred boundary of the city. It was dedicated by Augustus in 28 BC. The façade of the original temple was Ionic, and it was constructed from solid blocks of marble. Many famous statues by Greek masters were on display in and around the temple, including a marble statue of the god at the entrance and a statue of Apollo in the cella.A Topographical Dictionary of Ancient Rome
Melite (modern Mdina, Malta): A Temple of Apollo was built in the city in the 2nd century AD. Its remains were discovered in the 18th century, and many of its architectural fragments were dispersed among private collections or reworked into new sculptures. Parts of the temple's podium were rediscovered in 2002.
Mythology
Birth
thumb|250px|Apollo (left) and Artemis. Brygos (potter signed), tondo of an Attic red-figure cup c. 470 BC, Musée du Louvre.
When Zeus' wife Hera discovered that Leto was pregnant and that Zeus was the father, she banned Leto from giving birth on "terra firma". In her wanderings, Leto found the newly created floating island of Delos, which was neither mainland nor a real island. She gave birth there and was accepted by the people, offering them her promise that her son would always be favourable toward the city. Afterwards, Zeus secured Delos to the bottom of the ocean. This island later became sacred to Apollo.
It is also stated that Hera kidnapped Eileithyia, the goddess of childbirth, to prevent Leto from going into labor. The other gods tricked Hera into letting her go by offering her a necklace of amber, nine yards (8 m) long. Mythographers agree that Artemis was born first and then assisted with the birth of Apollo, or that Artemis was born one day before Apollo, on the island of Ortygia, and that she helped Leto cross the sea to Delos the next day to give birth to Apollo. Apollo was born on the seventh day (hebdomagenes) of the month Thargelion—according to Delian tradition—or of the month Bysios—according to Delphian tradition. The seventh and twentieth, the days of the new and full moon, were ever afterwards held sacred to him.
Youth
Four days after his birth, Apollo killed the chthonic dragon Python, which lived in Delphi beside the Castalian Spring. This was the spring which emitted vapors that caused the oracle at Delphi to give her prophecies. Hera sent the serpent to hunt Leto to her death across the world. To protect his mother, Apollo begged Hephaestus for a bow and arrows. After receiving them, Apollo cornered Python in the sacred cave at Delphi.Children of the Gods by Kenneth McLeish, page 32. Apollo killed Python but had to be punished for it, since Python was a child of Gaia.
Hera then sent the giant Tityos to rape Leto. This time Apollo was aided by his sister Artemis in protecting their mother. During the battle Zeus finally relented and gave his aid, hurling Tityos down to Tartarus. There, he was pegged to the rock floor, where a pair of vultures feasted daily on his liver.
Trojan War
thumb|Marble Bust of Apollo after the Apollo Belvedere. Circa 1675
Apollo shot arrows infected with the plague into the Greek encampment during the Trojan War in retribution for Agamemnon's insult to Chryses, a priest of Apollo whose daughter Chryseis had been captured. He demanded her return, and the Achaeans complied, indirectly causing the anger of Achilles, which is the theme of the Iliad.
In the Iliad, when Diomedes injured Aeneas, Apollo rescued him. First, Aphrodite tried to rescue Aeneas but Diomedes injured her as well. Aeneas was then enveloped in a cloud by Apollo, who took him to Pergamos, a sacred spot in Troy.
Apollo aided Paris in the killing of Achilles by guiding the arrow of his bow into Achilles' heel. One interpretation of his motive is that it was in revenge for Achilles' sacrilege in murdering Troilus, the god's own son by Hecuba, on the very altar of the god's own temple.
Admetus
When Zeus struck down Apollo's son Asclepius with a lightning bolt for resurrecting Hippolytus from the dead (transgressing Themis by stealing Hades's subjects), Apollo in revenge killed the Cyclopes, who had fashioned the bolt for Zeus.Pseudo-Apollodorus, Bibliothke iii. 10.4. Apollo would have been banished to Tartarus forever for this, but was instead sentenced to one year of hard labor, due to the intercession of his mother, Leto. During this time he served as shepherd for King Admetus of Pherae in Thessaly. Admetus treated Apollo well, and, in return, the god conferred great benefits on Admetus.
Apollo helped Admetus win Alcestis, the daughter of King Pelias and later convinced the Fates to let Admetus live past his time, if another took his place. But when it came time for Admetus to die, his parents, whom he had assumed would gladly die for him, refused to cooperate. Instead, Alcestis took his place, but Heracles managed to "persuade" Thanatos, the god of death, to return her to the world of the living.
thumb|250px|Artemis and Apollo Piercing Niobe's Children with their Arrows by Jacques-Louis David, Dallas Museum of Art
Niobe
Niobe, the queen of Thebes and wife of Amphion, boasted of her superiority to Leto because she had fourteen children (Niobids), seven male and seven female, while Leto had only two. Apollo killed her sons, and Artemis her daughters. Apollo and Artemis used poisoned arrows to kill them, though according to some versions of the myth, a number of the Niobids were spared (Chloris, usually). Amphion, at the sight of his dead sons, either killed himself or was killed by Apollo after swearing revenge.
A devastated Niobe fled to Mount Sipylos in Asia Minor and turned into stone as she wept. Her tears formed the river Achelous. Zeus had turned all the people of Thebes to stone and so no one buried the Niobids until the ninth day after their death, when the gods themselves entombed them.
Consorts and children
Love affairs ascribed to Apollo are a late development in Greek mythology."The love-stories themselves were not told until later." Karl Kerenyi, The Gods of the Greeks 1951:140. Their vivid anecdotal qualities have made some of them favorites of painters since the Renaissance, the result being that they stand out more prominently in the modern imagination.
Female lovers
thumb|250px|Apollo and Daphne by Bernini in the Galleria Borghese
Daphne was a nymph, daughter of the river god Peneus, who had scorned Apollo. The myth explains the connection of Apollo with δάφνη (daphnē), the laurel whose leaves his priestess employed at Delphi.The ancient Daphne episode is noted in late narratives, notably in Ovid, Metamorphoses, in Hyginus, Fabulae, 203 and by the fourth-century-CE teacher of rhetoric and Christian convert, Libanius, in Narrationes. In Ovid's Metamorphoses, Phoebus Apollo chaffs Cupid for toying with a weapon more suited to a man, whereupon Cupid wounds him with a golden dart; simultaneously, however, Cupid shoots a leaden arrow into Daphne, causing her to be repulsed by Apollo. Following a spirited chase by Apollo, Daphne prays to her father, Peneus, for help, and he changes her into the laurel tree, sacred to Apollo.
Artemis Daphnaia, who had her temple among the Lacedemonians, at a place called HypsoiG. Shipley, "The Extent of Spartan Territory in the Late Classical and Hellenistic Periods", The Annual of the British School at Athens, 2000. in Antiquity, on the slopes of Mount Cnacadion near the Spartan frontier,Pausanias, 3.24.8 (on-line text); Lilius Gregorius Gyraldus, Historiae Deorum Gentilium, Basel, 1548, Syntagma 10, is noted in this connection in Benjamin Hederich, Gründliches mythologisches Lexikon, 1770 had her own sacred laurel trees.Karl Kerenyi, The Gods of the Greeks, 1951:141 At Eretria the identity of an excavated 7th- and 6th-century temple to Apollo Daphnephoros, "Apollo, laurel-bearer", or "carrying off Daphne", a "place where the citizens are to take the oath", is identified in inscriptions.Rufus B. Richardson, "A Temple in Eretria" The American Journal of Archaeology and of the History of the Fine Arts, 10.3 (July - September 1895:326–337); Paul Auberson, Eretria. Fouilles et Recherches I, Temple d'Apollon Daphnéphoros, Architecture (Bern, 1968). See also Plutarch, Pythian Oracle, 16.
Leucothea was daughter of Orchamus and sister of Clytia. She fell in love with Apollo who disguised himself as Leucothea's mother to gain entrance to her chambers. Clytia, jealous of her sister because she wanted Apollo for herself, told Orchamus the truth, betraying her sister's trust and confidence in her. Enraged, Orchamus ordered Leucothea to be buried alive. Apollo refused to forgive Clytia for betraying his beloved, and a grieving Clytia wilted and slowly died. Apollo changed her into an incense plant, either heliotrope or sunflower, which follows the sun every day.
Marpessa was kidnapped by Idas but was loved by Apollo as well. Zeus made her choose between them, and she chose Idas on the grounds that Apollo, being immortal, would tire of her when she grew old.
Castalia was a nymph whom Apollo loved. She fled from him and dove into the spring at Delphi, at the base of Mt. Parnassos, which was then named after her. Water from this spring was sacred; it was used to clean the Delphian temples and to inspire the priestesses. The last oracle mentioned that the "water which could speak" had been lost for ever.
By Cyrene, Apollo had a son named Aristaeus, who became the patron god of cattle, fruit trees, hunting, husbandry and bee-keeping. He was also a culture-hero and taught humanity dairy skills, the use of nets and traps in hunting, and how to cultivate olives.
Hecuba was the wife of King Priam of Troy, and Apollo had a son with her named Troilus. An oracle prophesied that Troy would not be defeated as long as Troilus reached the age of twenty alive. He was ambushed and killed by Achilles.
Cassandra was a daughter of Hecuba and Priam, and Troilus' half-sister. Apollo fell in love with Cassandra and promised her the gift of prophecy to seduce her, but she rejected him afterwards. Enraged, Apollo indeed gave her the ability to know the future, but with a curse that she could see only future tragedies and that no one would ever believe her.
Coronis was a daughter of Phlegyas, King of the Lapiths. While pregnant with Asclepius, Coronis fell in love with Ischys, son of Elatus. A crow informed Apollo of the affair. When first informed he disbelieved the crow and turned all crows black (they were previously white) as a punishment for spreading untruths. When he found out the truth he sent his sister, Artemis, to kill Coronis (in other stories, Apollo himself killed Coronis). As a result, he also made the crow sacred and gave crows the task of announcing important deaths. Apollo rescued the baby and gave it to the centaur Chiron to raise. Phlegyas was irate after the death of his daughter and burned the Temple of Apollo at Delphi. Apollo then killed him for what he did.
In Euripides' play Ion, Apollo fathered Ion by Creusa, wife of Xuthus. Creusa left Ion to die in the wild, but Apollo asked Hermes to save the child and bring him to the oracle at Delphi, where he was raised by a priestess.
Acantha was the spirit of the acanthus tree, and Apollo had another of his liaisons with her. Upon her death, Apollo transformed her into a sun-loving herb.
According to the Bibliotheca, the "library" of mythology mis-attributed to Apollodorus, he fathered the Corybantes on the Muse Thalia.Apollodorus, Bibliotheca, 1.3.4. Other ancient sources, however, gave the Corybantes different parents; see Sir James Frazer's note on the passage in the Bibliotheca.
Consorts and children: extended list
Acacallis
Amphithemis (Garamas)Apollonius Rhodius, Argonautica, 1491 ff
Naxos, eponym of the island NaxosScholia on Apollonius Rhodius, Argonautica, 1491 ff
Phylacides
PhylanderPausanias, Description of Greece, 10. 16. 5
Acantha
Aethusa
Eleuther
Aganippe
ChiosPseudo-Plutarch, On Rivers, 7. 1
AlciopePhotius, Lexicon s. v. Linos
Linus (possibly)
Amphissa / Isse, daughter of Macareus
Anchiale / Acacallis
OaxesServius on Virgil's Eclogue 1, 65
Areia, daughter of Cleochus / Acacallis / Deione
Miletus
Astycome, nymph
Eumolpus (possibly)Photius, Lexicon, s. v. Eumolpidai
Arsinoe, daughter of Leucippus
Asclepius (possibly)
Eriopis
Babylo
ArabusPliny the Elder, Naturalis Historia, 7. 56 - 57 p. 196
Bolina
Calliope, Muse
Orpheus (possibly)
Linus (possibly)
Ialemus
Cassandra
Castalia
Celaeno, daughter of Hyamus / Melaina / Thyia
Delphus
Chione / Philonis / Leuconoe
Philammon
Chrysorthe
Coronus
Chrysothemis
Parthenos
Coronis
Asclepius
Coryceia
Lycorus (Lycoreus)
Creusa
Ion
Cyrene
Aristaeus
Idmon (possibly)
AutuchusScholia on Apollonius Rhodius, Argonautica, 2. 498
Danais, Cretan nymph
The CuretesTzetzes on Lycophron, 77
Daphne
Dia, daughter of Lycaon
Dryops
Dryope
Amphissus
Euboea (daughter of Macareus of Locris)
Agreus
Evadne, daughter of Poseidon
Iamus
Gryne
Hecate
Scylla (possibly)Scholia on Apollonius Rhodius, Argonautica 4.828, referring to "Hesiod", Megalai Ehoiai fr.
Hecuba
Troilus
Hector (possibly)Tzetzes on Lycophron, 266
Hestia (wooed her unsuccessfully)
Hypermnestra, wife of Oicles
Amphiaraus (possibly)
HypsipyleArnobius, Adversus Nationes, 4. 26; not the same as Hypsipyle of Lemnos
Hyria (Thyria)
Cycnus
Lycia, nymph or daughter of Xanthus
EicadiusServius on Aeneid, 3. 332
PatarusStephanus of Byzantium s. v. Patara
Manto
Mopsus
Marpessa
Melia
IsmenusPausanias, Description of Greece, 9.10.6.
TenerusPausanias, Description of Greece, 9.10.6, 26.1.
Ocyrhoe
Othreis
Phager
Parnethia, nymph
CynnesPhotius, Lexicon, s. v. Kynneios
Parthenope
Lycomedes
Phthia
Dorus
Laodocus
Polypoetes
ProthoeArnobius, Adversus Nationes, 4. 26
Procleia
Tenes (possibly)
Psamathe
Linus
Rhoeo
Anius
Rhodoessa, nymph
Ceos, eponym of the island CeosEtymologicum Magnum 507, 54, under Keios
Rhodope
Cicon, eponym of the tribe CiconesEtymologicum Magnum 513, 37, under Kikones
Sinope
Syrus
Stilbe
Centaurus
Lapithes
Aineus
Syllis / Hyllis
Zeuxippus
Thaleia, Muse / Rhetia, nymph
The Corybantes
Themisto, daughter of Zabius of HyperboreaStephanus of Byzantium, s. v. Galeōtai
Galeotes
Telmessus (?)
Thero
Chaeron
Urania, Muse
Linus (possibly)
Urea, daughter of Poseidon
Ileus (Oileus?)
Wife of Erginus
Trophonius (possibly)
Unknown consorts
Acraepheus, eponym of the city AcraephiaStephanus of Byzantium, s. v. Akraiphia
Chariclo (possibly)Scholia on Pindar, Pythian Ode 4. 181
Erymanthus
Marathus, eponym of MarathonSuda s. v. Marathōn
MegarusStephanus of Byzantium s. v Megara
Melaneus
OnciusPausanias, Description of Greece, 8. 25. 4Stephanus of Byzantium s. v. Ogkeion
Phemonoe
Pisus, founder of Pisa in EtruriaServius on Aeneid, 10. 179
Younger Muses
Cephisso
Apollonis
Borysthenis
Male lovers
thumb|upright|Apollo and Hyacinthus, 16th-century Italian engraving by Jacopo Caraglio
Hyacinth or Hyacinthus was one of Apollo's male lovers. He was a Spartan prince, beautiful and athletic. The pair was practicing throwing the discus when a discus thrown by Apollo was blown off course by the jealous Zephyrus and struck Hyacinthus in the head, killing him instantly. Apollo is said to have been filled with grief: out of Hyacinthus' blood, Apollo created a flower named after him as a memorial to his death, and his tears stained the flower petals with the interjection αἰαῖ, meaning alas. The Festival of Hyacinthus was a celebration of Sparta.
Another male lover was Cyparissus, a descendant of Heracles. Apollo gave him a tame deer as a companion but Cyparissus accidentally killed it with a javelin as it lay asleep in the undergrowth. Cyparissus asked Apollo to let his tears fall forever. Apollo granted the request by turning him into the Cypress named after him, which was said to be a sad tree because the sap forms droplets like tears on the trunk.
Other male lovers of Apollo include:
AdmetusCallimachus, Hymn to Apollo, 49.
Atymnius,Nonnus, Dionysiaca, 11. 258; 19. 181. otherwise known as a beloved of Sarpedon
Branchus (alternately, a son of Apollo)
Carnus
ClarusPhilostratus, Letters, 5. 3.
Hippolytus of Sicyon (not the same as Hippolytus, the son of Theseus)Plutarch, Life of Numa, 4. 5.
HymenaiosAntoninus Liberalis, Metamorphoses, 23.
Iapis
Leucates, who threw himself off a rock when Apollo attempted to carry him offServius on Aeneid, 3. 279.
Phorbas (probably the son of Triopas)Plutarch, Life of Numa, 4. 5, cf. also Hyginus, Poetical Astronomy, 2. 14.
PotnieusClement of Rome, Homilia, 5. 15.
Apollo's lyre
thumb|upright|Apollo with his lyre. Statue from Pergamon Museum, Berlin.
Hermes was born on Mount Cyllene in Arcadia. The story is told in the Homeric Hymn to Hermes. His mother, Maia, had been secretly impregnated by Zeus. Maia wrapped the infant in blankets but Hermes escaped while she was asleep.
Hermes ran to Thessaly, where Apollo was grazing his cattle. The infant Hermes stole a number of his cows and took them to a cave in the woods near Pylos, covering their tracks. In the cave, he found a tortoise and killed it, then removed the insides. He used one of the cow's intestines and the tortoise shell and made the first lyre.
Apollo complained to Maia that her son had stolen his cattle, but Hermes had already replaced himself in the blankets she had wrapped him in, so Maia refused to believe Apollo's claim. Zeus intervened and, claiming to have seen the events, sided with Apollo. Hermes then began to play music on the lyre he had invented. Apollo, a god of music, fell in love with the instrument and offered to exchange the cattle for the lyre. Hence Apollo became a master of the lyre.
Apollo in the Oresteia
In Aeschylus' Oresteia trilogy, Clytemnestra kills her husband, King Agamemnon, because he had sacrificed their daughter Iphigenia in order to proceed with the Trojan war; she also kills Cassandra, a prophetess of Apollo. Apollo gives an order through the Oracle at Delphi that Agamemnon's son, Orestes, is to kill Clytemnestra and Aegisthus, her lover. Orestes and Pylades carry out the revenge, and consequently Orestes is pursued by the Erinyes (Furies, female personifications of vengeance).
Apollo and the Furies argue about whether the matricide was justified; Apollo holds that the bond of marriage is sacred and Orestes was avenging his father, whereas the Erinyes say that the bond of blood between mother and son is more meaningful than the bond of marriage. They invade his temple, and he says that the matter should be brought before Athena. Apollo promises to protect Orestes, as Orestes has become Apollo's supplicant. Apollo advocates Orestes at the trial, and ultimately Athena rules in favor of Apollo.
Other stories
Apollo killed the Aloadae when they attempted to storm Mt. Olympus.
Callimachus sangCallimachus, Hymn to Apollo 2.5 that Apollo rode on the back of a swan to the land of the Hyperboreans during the winter months.
Apollo turned Cephissus into a sea monster.
Another contender for the birthplace of Apollo is the Cretan island group of Paximadia.
Musical contests
Pan
Once Pan had the audacity to compare his music with that of Apollo, and to challenge Apollo, the god of the kithara, to a trial of skill. Tmolus, the mountain-god, was chosen to umpire. Pan blew on his pipes, and with his rustic melody gave great satisfaction to himself and his faithful follower, Midas, who happened to be present. Then Apollo struck the strings of his lyre. Tmolus at once awarded the victory to Apollo, and all but Midas agreed with the judgment. He dissented and questioned the justice of the award. Apollo would not suffer such a depraved pair of ears any longer, and caused them to become the ears of a donkey.
Marsyas
thumb|upright|Marsyas under Apollo's punishment, İstanbul Archaeology Museum
Apollo has ominous aspects aside from his plague-bringing, death-dealing arrows: Marsyas was a satyr who challenged Apollo to a contest of music. He had found an aulos on the ground, tossed away after being invented by Athena because it made her cheeks puffy. The contest was judged by the Muses.
After they each performed, both were deemed equal until Apollo decreed they play and sing at the same time. As Apollo played the lyre, this was easy to do. Marsyas could not do this, as he only knew how to use the flute and could not sing at the same time. Apollo was declared the winner because of this. Apollo flayed Marsyas alive in a cave near Celaenae in Phrygia for his hubris to challenge a god. He then nailed Marsyas' shaggy skin to a nearby pine-tree. Marsyas' blood turned into the river Marsyas.
Another variation is that Apollo played his instrument (the lyre) upside down. Marsyas could not do this with his instrument (the flute), and so Apollo hung him from a tree and flayed him alive.Man Myth and Magic by Richard Cavendish
Cinyras
Apollo also had a lyre-playing contest with Cinyras, his son, who committed suicide when he lost.
|thumb|upright=.75|left|Head of Apollo, marble, Roman copy of a Greek original of the 4th century BCE, from the collection of Cardinal Albani
Roman Apollo
The Roman worship of Apollo was adopted from the Greeks. As a quintessentially Greek god, Apollo had no direct Roman equivalent, although later Roman poets often referred to him as Phoebus. There was a tradition that the Delphic oracle was consulted as early as the period of the kings of Rome during the reign of Tarquinius Superbus.Livy 1.56.
On the occasion of a pestilence in the 430s BCE, Apollo's first temple at Rome was established in the Flaminian fields, replacing an older cult site there known as the "Apollinare".Livy 3.63.7, 4.25.3. During the Second Punic War in 212 BCE, the Ludi Apollinares ("Apollonian Games") were instituted in his honor, on the instructions of a prophecy attributed to one Marcius.Livy 25.12. In the time of Augustus, who considered himself under the special protection of Apollo and was even said to be his son, his worship developed and he became one of the chief gods of Rome.
After the battle of Actium, which was fought near a sanctuary of Apollo, Augustus enlarged Apollo's temple, dedicated a portion of the spoils to him, and instituted quinquennial games in his honour.Suetonius, Augustus 18.2; Cassius Dio 51.1.1–3. He also erected a new temple to the god on the Palatine hill.Cassius Dio 53.1.3. Sacrifices and prayers on the Palatine to Apollo and Diana formed the culmination of the Secular Games, held in 17 BCE to celebrate the dawn of a new era.Inscriptiones Latinae Selectae 5050, translated by
Festivals
The chief Apollonian festivals were the Boedromia, Carneia, Carpiae, Daphnephoria, Delia, Hyacinthia, Metageitnia, Pyanepsia, Pythia and Thargelia.
Attributes and symbols
thumb|250px|Gold stater of the Seleucid king Antiochus I Soter (reigned 281–261 BCE) showing on the reverse a nude Apollo holding his key attributes: two arrows and a bow
Apollo's most common attributes were the bow and arrow. Other attributes of his included the kithara (an advanced version of the common lyre), the plectrum and the sword. Another common emblem was the sacrificial tripod, representing his prophetic powers. The Pythian Games were held in Apollo's honor every four years at Delphi. The bay laurel plant was used in expiatory sacrifices and in making the crown of victory at these games.
The palm tree was also sacred to Apollo because he had been born under one in Delos. Animals sacred to Apollo included wolves, dolphins, roe deer, swans, cicadas (symbolizing music and song), hawks, ravens, crows, snakes (referencing Apollo's function as the god of prophecy), mice and griffins, mythical eagle–lion hybrids of Eastern origin.
thumb|250px||Apollo Citharoedus ("Apollo with a kithara"), Musei Capitolini, Rome
As god of colonization, Apollo gave oracular guidance on colonies, especially during the height of colonization, 750–550 BCE. According to Greek tradition, he helped Cretan or Arcadian colonists found the city of Troy. However, this story may reflect a cultural influence which had the reverse direction: Hittite cuneiform texts mention an Asia Minor god called Appaliunas or Apalunas in connection with the city of Wilusa attested in Hittite inscriptions, which is now generally regarded by most scholars as identical with the Greek Ilion. In this interpretation, Apollo's title of Lykegenes can simply be read as "born in Lycia", which effectively severs the god's supposed link with wolves (possibly a folk etymology).
In literary contexts, Apollo represents harmony, order, and reason—characteristics contrasted with those of Dionysus, god of wine, who represents ecstasy and disorder. The contrast between the roles of these gods is reflected in the adjectives Apollonian and Dionysian. However, the Greeks thought of the two qualities as complementary: the two gods are brothers, and when Apollo left for Hyperborea in winter, he would leave the Delphic oracle to Dionysus. This contrast appears to be shown on the two sides of the Borghese Vase.
Apollo is often associated with the Golden Mean. This is the Greek ideal of moderation and a virtue that opposes gluttony.
Apollo in the arts
thumb|180px|The Louvre Apollo Sauroctonos, Roman copy after Praxiteles (360 BC)
Apollo is a common theme in Greek and Roman art and also in the art of the Renaissance. The earliest Greek word for a statue is "delight" (agalma), and the sculptors tried to create forms which would inspire such guiding vision. Greek art puts into Apollo the highest degree of power and beauty that can be imagined. The sculptors derived this from observations on human beings, but they also embodied in concrete form issues beyond the reach of ordinary thought.
The naked bodies of the statues are associated with the cult of the body that was essentially a religious activity. The muscular frames and limbs combined with slim waists indicate the Greek desire for health, and the physical capacity which was necessary in the hard Greek environment. The statues of Apollo embody beauty, balance and inspire awe before the beauty of the world.
The evolution of Greek sculpture can be observed in his depictions, from the almost static, formal Kouros type of the early archaic period to the representation of motion in a relatively harmonious whole in the late archaic period. In classical Greece the emphasis is given not to the illusive imaginative reality represented by the ideal forms, but to the analogies and the interaction of the members in the whole, a method created by Polykleitos. Finally Praxiteles seems to be freed from artistic and religious conformities, and his masterpieces are a mixture of naturalism with stylization.
Art and Greek philosophy
The evolution of the Greek art seems to go parallel with the Greek philosophical conceptions, which changed from the natural-philosophy of Thales to the metaphysical theory of Pythagoras. Thales searched for a simple material-form directly perceptible by the senses, behind the appearances of things, and his theory is also related to the older animism. This was paralleled in sculpture by the absolute representation of vigorous life, through unnaturally simplified forms.E. Homann-Wedeking. Transl. J.R. Foster (1968). Art of the world. Archaic Greece, Methuen & Co Ltd. London, pp. 63–65, 193.
Pythagoras believed that behind the appearance of things, there was the permanent principle of mathematics, and that the forms were based on a transcendental mathematical relation. The forms on earth, are imperfect imitations (, eikones, "images") of the celestial world of numbers. His ideas had a great influence on post-Archaic art, and the Greek architects and sculptors were always trying to find the mathematical relation, that would lead to the esthetic perfection.R. Carpenter (1975). The esthetic basis of Greek art. Indiana University Press. pp. 55–58. (canon).
In classical Greece, Anaxagoras asserted that a divine reason (mind) gave order to the seeds of the universe, and Plato extended the Greek belief in ideal forms to his metaphysical theory of forms (ideai, "ideas"). The forms on earth are imperfect duplicates of the intellectual celestial ideas. The Greek words oida (οἶδα, "(I) know") and eidos (εἶδος, "species", a thing seen) have the same root as the word idea (ἰδέα), a thing ἰδεῖν, "to see", indicating how the Greek mind moved from the gift of the senses to the principles beyond the senses. The artists in Plato's time moved away from his theories, and art tended to be a mixture of naturalism with stylization. The Greek sculptors considered the senses more important, and the proportions were used to unite the sensible with the intellectual.
Archaic sculpture
thumb|180px|left|Sacred Gate Kouros, marble (610–600 BC), Kerameikos Archaeological Museum in Athens
Kouros (male youth) is the modern term given to those representations of standing male youths which first appear in the archaic period in Greece. This type served certain religious needs, and the term was first proposed for what had previously been thought to be depictions of Apollo.V.I. Leonardos(1895). Archaelogiki Ephimeris, Col 75, n 1.Lechat (1904). La sculpture Attic avant Phidias, p. 23. The first statues are certainly still and formal. The formality of their stance seems to be related to the Egyptian precedent, but it was accepted for a good reason. The sculptors had a clear idea of what a young man is, and embodied the archaic smile of good manners, the firm and springy step, the balance of the body, dignity, and youthful happiness. When they tried to depict the most abiding qualities of men, it was because men had common roots with the unchanging gods.C. M. Bowra (1957). The Greek experience, pp. 144–152. The adoption of a standard recognizable type for a long time is probably because nature favours the survival of a type which has long been adapted to the climatic conditions, and also due to the general Greek belief that nature expresses itself in ideal forms that can be imagined and represented. These forms expressed immortality. Apollo was the immortal god of ideal balance and order. His shrine in Delphi, which he shared in winter with Dionysus, had the inscriptions γνῶθι σεαυτόν (gnōthi seautón, "know thyself"), μηδὲν ἄγαν (mēdén ágan, "nothing in excess"), and Ἐγγύα πάρα δ' ἄτα (eggýa pára d'atē, "make a pledge and mischief is nigh").
thumb|200px|right|New York Kouros, Met. Mus. 32.11.1, marble (620–610 BC), Metropolitan Museum of Art
In the first large-scale depictions during the early archaic period (640–580 BC), the artists tried to draw one's attention to look into the interior of the face and the body, which were not represented as lifeless masses but as being full of life. The Greeks maintained, until late in their civilization, an almost animistic idea that the statues are in some sense alive. This embodies the belief that the image was somehow the god or man himself.C.M. Bowra. The Greek experience, p. 159. A fine example is the statue of the Sacred Gate Kouros, which was found at the cemetery of Dipylon in Athens (Dipylon Kouros). The statue is the "thing in itself", and his slender face with the deep eyes expresses an intellectual eternity. According to the Greek tradition the Dipylon master was named Daedalus, and in his statues the limbs were freed from the body, giving the impression that the statues could move. It is considered that he also created the New York kouros, which is the oldest fully preserved statue of the Kouros type, and seems to be the incarnation of the god himself.
thumb|left|180px|Piraeus Apollo, archaic-style bronze, Archaeological Museum of Piraeus
The animistic idea as the representation of the imaginative reality is sanctified in the Homeric poems and in Greek myths, in stories of the god Hephaestus (Phaistos) and the mythic Daedalus (the builder of the labyrinth), who made images which moved of their own accord. This kind of art goes back to the Minoan period, when its main theme was the representation of motion in a specific moment.F. Schachermeyer (1964). Die Minoische Kultur des alten Creta, Kohlhammer Stuttgart, pp. 242–244. These free-standing statues were usually of marble, but the form was also rendered in limestone, bronze, ivory and terracotta.
The earliest examples of life-sized statues of Apollo may be two figures from the Ionic sanctuary on the island of Delos. Such statues were found across the Greek-speaking world; the preponderance of these were found at the sanctuaries of Apollo, with more than one hundred from the sanctuary of Apollo Ptoios in Boeotia alone.J. Ducat (1971). Les Kouroi des Ptoion. The last stage in the development of the Kouros type is the late archaic period (520–485 BC), in which Greek sculpture attained a full knowledge of human anatomy and used it to create a relatively harmonious whole. Among the very few bronzes that have survived is the masterpiece bronze Piraeus Apollo. It was found in Piraeus, the harbour of Athens. The statue originally held the bow in its left hand and a cup for pouring libations in its right hand. It probably comes from north-eastern Peloponnesus. The emphasis is given to anatomy, and it is one of the first attempts to represent a kind of motion, and beauty relative to proportions, which appear mostly in post-Archaic art. The statue throws some light on an artistic centre which, with an independently developed harder, simpler, and heavier style, restricts Ionian influence in Athens. Finally, this is the germ from which the art of Polykleitos was to grow two or three generations later.Homann-Wedeking (1966). Art of the World. Archaic Greece, pp. 144–150.
Classical sculpture
thumb|left|180px|Apollo of the "Mantoua type", marble Roman copy after a 5th-century BCE Greek original attributed to Polykleitos, Musée du Louvre
In the next century, which is the beginning of the Classical period, it was considered that beauty in visible things, as in everything else, consisted of symmetry and proportions. The artists also tried to represent motion in a specific moment (Myron), which may be considered as the reappearance of the dormant Minoan element. Anatomy and geometry are fused in one, and each does something to the other. The Greek sculptors tried to clarify it by looking for mathematical proportions, just as they sought some reality behind appearances. Polykleitos in his Canon wrote that beauty consists in the proportion not of the elements (materials), but of the parts, that is the interrelation of parts with one another and with the whole. It seems that he was influenced by the theories of Pythagoras."Each part (finger, palm, arm, etc.) transmitted its individual existence to the next, and then to the whole" : Canon of Polykleitos, also Plotinus, Ennead I vi. i: Nigel Spivey (1997). Greek art, Phaidon Press Ltd. London. pp. 290–294.
The famous Apollo of Mantua and its variants are early forms of the Apollo Citharoedus statue type, in which the god holds the cithara in his left arm. The type is represented by neo-Attic Imperial Roman copies of the late 1st or early 2nd century, modelled upon a supposed Greek bronze original made in the second quarter of the 5th century BCE, in a style similar to works of Polykleitos but more archaic. The Apollo held the cithara against his extended left arm; in the Louvre example, a fragment of one of its twisting scrolling horns remains upright against his biceps.
Though the proportions were always important in Greek art, the appeal of the Greek sculptures eludes any explanation by proportion alone. The statues of Apollo were thought to incarnate his living presence, and these representations of illusive imaginative reality had deep roots in the Minoan period, and in the beliefs of the first Greek-speaking people who entered the region during the bronze age. Just as the Greeks saw the mountains, forests, sea and rivers as inhabited by concrete beings, so nature in all of its manifestations possesses clear form, and the form of a work of art. Spiritual life is incorporated in matter, when it is given artistic form. Just as in the arts the Greeks sought some reality behind appearances, so in mathematics they sought permanent principles which could be applied wherever the conditions were the same. Artists and sculptors tried to find this ideal order in relation with mathematics, but they believed that this ideal order revealed itself not so much to the dispassionate intellect as to the whole sentient self. Things as we see them, and as they really are, are one, in that each stresses the nature of the other in a single unity.
Pediments and friezes
thumb|upright|Apollo, West Pediment Olympia. Munich, copy from original, 460 BCE at the Temple of Zeus, Olympia, Greece.
In the archaic pediments and friezes of the temples, the artists faced the problem of fitting a group of figures into an isosceles triangle with acute angles at the base.
The Siphnian Treasury in Delphi was one of the first Greek buildings to use the solution of placing the dominant form in the middle and completing the descending scale of height with other figures sitting or kneeling. The pediment shows the story of Heracles stealing Apollo's tripod, which was strongly associated with his oracular inspiration. Their two figures hold the centre. In the pediment of the temple of Zeus in Olympia, the single figure of Apollo dominates the scene.
thumb|left|Part of the Bassae Frieze at the British Museum. Apollo and Artemis in the northeast corner.
thumb|Head of the Apollo Belvedere
These representations rely on presenting scenes directly to the eye for their own visible sake. They care for the schematic arrangements of bodies in space, but only as parts in a larger whole. While each scene has its own character and completeness, it must fit into the general sequence to which it belongs. In these archaic pediments the sculptors use empty intervals to suggest a passage to and from a busy battlefield. The artists seem to have been dominated by geometrical pattern and order, and this was improved when classical art brought a greater freedom and economy.
Hellenistic Greece-Rome
Apollo as a handsome beardless young man is often depicted with a kithara (as Apollo Citharoedus) or bow in his hand, or leaning against a tree (the Apollo Lykeios and Apollo Sauroctonos types). The Apollo Belvedere is a marble sculpture that was rediscovered in the late 15th century; for centuries it epitomized the ideals of Classical Antiquity for Europeans, from the Renaissance through the 19th century. The marble is a Hellenistic or Roman copy of a bronze original by the Greek sculptor Leochares, made between 350 and 325 BCE.
The life-size so-called "Adonis" found in 1780 on the site of a villa suburbana near the Via Labicana in the Roman suburb of Centocelle is identified as an Apollo by modern scholars. In the late 2nd century CE floor mosaic from El Djem, Roman Thysdrus, he is identifiable as Apollo Helios by his effulgent halo, though now even a god's divine nakedness is concealed by his cloak, a mark of increasing conventions of modesty in the later Empire.
Another haloed Apollo in mosaic, from Hadrumentum, is in the museum at Sousse. The conventions of this representation, head tilted, lips slightly parted, large-eyed, curling hair cut in locks grazing the neck, were developed in the 3rd century BCE to depict Alexander the Great.Bieber 1964, Yalouris 1980. Some time after this mosaic was executed, the earliest depictions of Christ would also be beardless and haloed.
Modern reception
thumb|The Overthrow of Apollo and the Pagan Gods, watercolour from William Blake's illustrations of On the Morning of Christ's Nativity (1809)
Apollo has often featured in postclassical art and literature. Percy Bysshe Shelley composed a "Hymn of Apollo" (1820), and the god's instruction of the Muses formed the subject of Igor Stravinsky's Apollon musagète (1927–1928). In 1978, the Canadian band Rush released an album with songs "Apollo: Bringer of Wisdom"/"Dionysus: Bringer of Love".
In discussion of the arts, a distinction is sometimes made between the Apollonian and Dionysian impulses where the former is concerned with imposing intellectual order and the latter with chaotic creativity. Friedrich Nietzsche argued that a fusion of the two was most desirable. Carl Jung's Apollo archetype represents what he saw as the disposition in people to over-intellectualise and maintain emotional distance.
Charles Handy, in Gods of Management (1978) uses Greek gods as a metaphor to portray various types of organisational culture. Apollo represents a 'role' culture where order, reason and bureaucracy prevail.British Library: Management and Business Studies Portal, Charles Handy, accessed 12 November 2016
In spaceflight, the NASA program for landing astronauts on the Moon was named Apollo.
Genealogy
See also
Dryad
Epirus
Pasiphaë
Sibylline oracles
Tegyra
Temple of Apollo (disambiguation)
Notes
References
Primary sources
Hesiod, Theogony, in The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White, Cambridge, MA., Harvard University Press; London, William Heinemann Ltd. 1914. Online version at the Perseus Digital Library.
Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library.
Homer; The Odyssey with an English Translation by A.T. Murray, PH.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1919. Online version at the Perseus Digital Library.
Sophocles, Oedipus Rex
Palaephatus, On Unbelievable Tales 46. Hyacinthus (330 BCE)
Apollodorus, Apollodorus, The Library, with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. Online version at the Perseus Digital Library.
Ovid, Metamorphoses 10. 162–219 (1–8 CE)
Pausanias, Pausanias Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. Online version at the Perseus Digital Library.
Philostratus the Elder, Images i.24 Hyacinthus (170–245 CE)
Philostratus the Younger, Images 14. Hyacinthus (170–245 CE)
Lucian, Dialogues of the Gods 14 (170 CE)
First Vatican Mythographer, 197. Thamyris et Musae
Secondary sources
M. Bieber, 1964. Alexander the Great in Greek and Roman Art. Chicago.
Hugh Bowden, 2005. Classical Athens and the Delphic Oracle: Divination and Democracy. Cambridge University Press.
Walter Burkert, 1985. Greek Religion (Harvard University Press) III.2.5 passim
Gantz, Timothy, Early Greek Myth: A Guide to Literary and Artistic Sources, Johns Hopkins University Press, 1996, Two volumes: ISBN 978-0-8018-5360-9 (Vol. 1), ISBN 978-0-8018-5362-3 (Vol. 2).
Robert Graves, 1960. The Greek Myths, revised edition. Penguin.
Miranda J. Green, 1997. Dictionary of Celtic Myth and Legend, Thames and Hudson.
Karl Kerenyi, 1953. Apollon: Studien über Antiken Religion und Humanität revised edition.
Karl Kerenyi, 1951. The Gods of the Greeks
Mertens, Dieter; Schutzenberger, Margareta. Città e monumenti dei Greci d'Occidente: dalla colonizzazione alla crisi di fine V secolo a.C.. Roma L'Erma di Bretschneider, 2006. ISBN 88-8265-367-6.
Martin Nilsson, 1955. Die Geschichte der Griechische Religion, vol. I. C.H. Beck.
Pauly–Wissowa, Realencyclopädie der klassischen Altertumswissenschaft: II, "Apollon". The best repertory of cult sites (Burkert).
Pfeiff, K.A., 1943. Apollon: Wandlung seines Bildes in der griechischen Kunst. Traces the changing iconography of Apollo.
D.S. Robertson (1945). A Handbook of Greek and Roman Architecture. Cambridge University Press.
Smith, William; Dictionary of Greek and Roman Biography and Mythology, London (1873). "Apollo"
Nigel Spivey (1997). Greek Art. Phaidon Press Ltd.
External links
Apollo at the Greek Mythology Link, by Carlos Parada
The Warburg Institute Iconographic Database: ca 1650 images of Apollo
Category:Arts gods
Category:Deities in the Iliad
Category:Dragonslayers
Category:Health gods
Category:Knowledge gods
Category:LGBT themes in mythology
Category:Muses
Category:Temples of Apollo
Category:Mythological Greek archers
Category:Mythological rapists
Category:Oracular gods
Category:Roman gods
Category:Solar gods | 594 | 2017-01 |
Energy | thumb|right|The Sun is the source of energy for most of life on Earth. It derives its energy mainly from nuclear fusion in its core and releases it into space mainly in the form of radiant (light) energy.
In physics, energy is a property whose form can be converted and whose quantity is transferred by work or heating; the quantity of energy available thus sets a limit on how much work can be performed or heat delivered.Additionally, the second law of thermodynamics can impose further limitations on the capacity of a system to perform work on its surroundings, since some of the system's energy might necessarily be "wasted" in the form of heat instead. The SI unit of energy is the joule, which is the energy transferred to an object by the mechanical work of moving it a distance of 1 metre against a force of 1 newton.
Common energy forms include the kinetic energy of a moving object, the potential energy stored by an object's position in a force field (gravitational, electric or magnetic), the elastic energy stored by stretching solid objects, the chemical energy released when a fuel burns, the radiant energy carried by light, and the thermal energy due to an object's temperature. All of the many forms of energy are convertible to other kinds of energy. In Newtonian physics, there is a universal law of conservation of energy which says that energy can be neither created nor destroyed; however, it can change from one form to another.
For "closed systems" with no external source or sink of energy, the first law of thermodynamics states that a system's energy is constant unless energy is transferred in or out by work, heat, or via a transfer of matter, and that no energy is lost in transfer. While heat can always be fully converted into work in a reversible isothermal expansion of an ideal gas, for cyclic processes of practical interest in heat engines the second law of thermodynamics states that the system doing work always loses some energy as waste heat. This creates a limit to the amount of heat energy that can do work in a cyclic process, a limit called the available energy. Mechanical and other forms of energy can be transformed in the other direction into thermal energy without such limitations. The total energy of a system can be calculated by adding up all forms of energy in the system.
Examples of energy transformation include generating electric energy from heat energy via a steam turbine, or lifting an object against gravity using electrical energy driving a crane motor. Lifting against gravity performs mechanical work on the object and stores gravitational potential energy in the object. If the object falls to the ground, gravity does mechanical work on the object which transforms the potential energy in the gravitational field to the kinetic energy released as heat on impact with the ground. Our Sun transforms nuclear potential energy to other forms of energy; its total mass does not decrease due to that in itself (since it still contains the same total energy even if in different forms), but its mass does decrease when the energy escapes out to its surroundings, largely as radiant energy.
Mass and energy are closely related. Due to mass–energy equivalence, any object that has mass when stationary in a frame of reference (called rest mass) also has an equivalent amount of energy whose form is called rest energy in that frame, and any additional energy acquired by the object above that rest energy will increase an object's mass. For example, with a sensitive enough scale, one could measure an increase in mass after heating an object.
Because energy exists in many interconvertible forms, and yet can't be created or destroyed, its measurement may be equivalently "defined" and quantified via its transfer or conversions into various forms that may be found to be convenient or pedagogic or to facilitate accurate measurement; for example by energy transfer in the form of work (as measured via forces and acceleration) or heat (as measured via temperature changes of materials) or into particular forms such as kinetic (as measured via mass and speed) or by its equivalent mass.
Living organisms require available energy to stay alive, such as the energy humans get from food. Civilisation gets the energy it needs from energy resources such as fossil fuels, nuclear fuel, or renewable energy. The processes of Earth's climate and ecosystem are driven by the radiant energy Earth receives from the sun and the geothermal energy contained within the earth.
Forms
thumb|In a typical lightning strike, 500 megajoules of electric potential energy is converted into the same amount of energy in other forms, mostly light energy, sound energy and thermal energy.
thumb|Thermal energy is energy of microscopic constituents of matter, which may include both kinetic and potential energy.
The total energy of a system can be subdivided and classified in various ways. For example, classical mechanics distinguishes between kinetic energy, which is determined by an object's movement through space, and potential energy, which is a function of the position of an object within a field. It may also be convenient to distinguish gravitational energy, thermal energy, several types of nuclear energy (which utilize potentials from the nuclear force and the weak force), electric energy (from the electric field), and magnetic energy (from the magnetic field), among others. Many of these classifications overlap; for instance, thermal energy usually consists partly of kinetic and partly of potential energy.
Some types of energy are a varying mix of both potential and kinetic energy. An example is mechanical energy which is the sum of (usually macroscopic) kinetic and potential energy in a system. Elastic energy in materials is also dependent upon electrical potential energy (among atoms and molecules), as is chemical energy, which is stored and released from a reservoir of electrical potential energy between electrons, and the molecules or atomic nuclei that attract them. The list is also not necessarily complete. Whenever physical scientists discover that a certain phenomenon appears to violate the law of energy conservation, new forms are typically added that account for the discrepancy.
Heat and work are special cases in that they are not properties of systems, but are instead properties of processes that transfer energy. In general we cannot measure how much heat or work are present in an object, but rather only how much energy is transferred among objects in certain ways during the occurrence of a given process. Heat and work are measured as positive or negative depending on which side of the transfer we view them from.
Potential energies are often measured as positive or negative depending on whether they are greater or less than the energy of a specified base state or configuration such as two interacting bodies being infinitely far apart. Wave energies (such as radiant or sound energy), kinetic energy, and rest energy are each greater than or equal to zero because they are measured in comparison to a base state of zero energy: "no wave", "no motion", and "no inertia", respectively.
The distinctions between different kinds of energy are not always clear-cut. As Richard Feynman points out:
Some examples of different kinds of energy:
History
thumb|Thomas Young – the first to use the term "energy" in the modern sense.
The word energy derives from the Ancient Greek ἐνέργεια (energeia), which possibly appears for the first time in the work of Aristotle in the 4th century BC. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness and pleasure.
In the late 17th century, Gottfried Leibniz proposed the idea of the vis viva, or living force, which he defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. To account for slowing due to friction, Leibniz theorized that thermal energy consisted of the random motion of the constituent parts of matter, a view shared by Isaac Newton, although it would be more than a century until this was generally accepted. The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two.
In 1807, Thomas Young was possibly the first to use the term "energy" instead of vis viva, in its modern sense. Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy". The law of conservation of energy was also first postulated in the early 19th century, and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat.
These developments led to the theory of conservation of energy, formalized largely by William Thomson (Lord Kelvin) as the field of thermodynamics. Thermodynamics aided the rapid development of explanations of chemical processes by Rudolf Clausius, Josiah Willard Gibbs, and Walther Nernst. It also led to a mathematical formulation of the concept of entropy by Clausius and to the introduction of laws of radiant energy by Jožef Stefan. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time. Thus, since 1918, theorists have understood that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time.
Units of measure
thumb|right|Joule's apparatus for measuring the mechanical equivalent of heat. A descending weight attached to a string causes a paddle immersed in water to rotate.
In 1843 James Prescott Joule independently discovered the mechanical equivalent in a series of experiments. The most famous of them used the "Joule apparatus": a descending weight, attached to a string, caused rotation of a paddle immersed in water, practically insulated from heat transfer. It showed that the gravitational potential energy lost by the weight in descending was equal to the internal energy gained by the water through friction with the paddle.
In the International System of Units (SI), the unit of energy is the joule, named after James Prescott Joule. It is a derived unit. It is equal to the energy expended (or work done) in applying a force of one newton through a distance of one metre. However energy is also expressed in many other units not part of the SI, such as ergs, calories, British Thermal Units, kilowatt-hours and kilocalories, which require a conversion factor when expressed in SI units.
The SI unit of energy rate (energy per unit time) is the watt, which is a joule per second. Thus, one joule is one watt-second, and 3600 joules equal one watt-hour. The CGS energy unit is the erg and the imperial and US customary unit is the foot pound. Other energy units such as the electronvolt, food calorie or thermodynamic kcal (based on the temperature change of water in a heating process), and BTU are used in specific areas of science and commerce.
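To make the conversion factors above concrete, here is a minimal Python sketch; the constant names are chosen here for illustration, and the numerical factors are the standard definitions of the units mentioned.

# Standard conversion factors into joules (SI); variable names are illustrative.
JOULES_PER_WATT_HOUR = 3600.0        # 1 W·h = 3600 J, as stated above
JOULES_PER_KILOCALORIE = 4184.0      # one food Calorie (kcal)
JOULES_PER_BTU = 1055.06             # International Table BTU
JOULES_PER_ELECTRONVOLT = 1.602176634e-19

def to_joules(value, joules_per_unit):
    """Convert an energy expressed in another unit into joules."""
    return value * joules_per_unit

print(to_joules(1, JOULES_PER_WATT_HOUR))         # one watt-hour -> 3600.0 J
print(to_joules(1, 1000 * JOULES_PER_WATT_HOUR))  # one kilowatt-hour -> 3.6e6 J
print(to_joules(2000, JOULES_PER_KILOCALORIE))    # a 2000 kcal diet -> about 8.4e6 J (8 MJ)

The last line ties in with the daily food intake of 6–8 MJ quoted later in the Biology section.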
Scientific use
Classical mechanics
In classical mechanics, energy is a conceptually and mathematically useful property, as it is a conserved quantity. Several formulations of mechanics have been developed using energy as a core concept.
Work, a form of energy, is force times distance:

W = ∫_C F · ds

This says that the work (W) is equal to the line integral of the force F along a path C; for details see the mechanical work article. Work and thus energy is frame dependent. For example, consider a ball being hit by a bat. In the center-of-mass reference frame, the bat does no work on the ball. But, in the reference frame of the person swinging the bat, considerable work is done on the ball.
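The line integral can be approximated numerically by summing F · Δs over small segments of the path. The sketch below is illustrative only: the force field and path are invented for the example, and for a constant 1 newton force over a straight 1 metre path it recovers the 1 joule of the unit definition.

import numpy as np

def work_along_path(force, path_points):
    """Approximate W = ∫_C F · ds by summing F(midpoint) · Δs over path segments."""
    total = 0.0
    for a, b in zip(path_points[:-1], path_points[1:]):
        midpoint = 0.5 * (a + b)
        ds = b - a
        total += np.dot(force(midpoint), ds)
    return total

def constant_force(position):
    """A uniform 1 N force pointing along +x (an invented example field)."""
    return np.array([1.0, 0.0])

# A straight 1 m path along x, discretised into 100 segments.
straight_path = np.array([[x, 0.0] for x in np.linspace(0.0, 1.0, 101)])
print(work_along_path(constant_force, straight_path))  # ≈ 1.0 joule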
The total energy of a system is sometimes called the Hamiltonian, after William Rowan Hamilton. The classical equations of motion can be written in terms of the Hamiltonian, even for highly complex or abstract systems. These classical equations have remarkably direct analogs in nonrelativistic quantum mechanics.The Hamiltonian MIT OpenCourseWare website 18.013A Chapter 16.3 Accessed February 2007
Another energy-related concept is called the Lagrangian, after Joseph-Louis Lagrange. This formalism is as fundamental as the Hamiltonian, and both can be used to derive the equations of motion or be derived from them. It was invented in the context of classical mechanics, but is generally useful in modern physics. The Lagrangian is defined as the kinetic energy minus the potential energy. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (such as systems with friction).
Noether's theorem (1918) states that any differentiable symmetry of the action of a physical system has a corresponding conservation law. Noether's theorem has become a fundamental tool of modern theoretical physics and the calculus of variations. A generalisation of the seminal formulations on constants of motion in Lagrangian and Hamiltonian mechanics (1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian; for example, dissipative systems with continuous symmetries need not have a corresponding conservation law.
Chemistry
In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structure, it is invariably accompanied by an increase or decrease of energy of the substances involved. Some energy is transferred between the surroundings and the reactants of the reaction in the form of heat or light; thus the products of a reaction may have more or less energy than the reactants. A reaction is said to be exergonic if the final state is lower on the energy scale than the initial state; in the case of endergonic reactions the situation is the reverse. Chemical reactions are not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at a given temperature T) is related to the activation energy E by the Boltzmann population factor e^(−E/kT), that is, the probability of a molecule having energy greater than or equal to E at the given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation.The activation energy necessary for a chemical reaction can be in the form of thermal energy.
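The temperature sensitivity of the Boltzmann factor can be made concrete with a few lines of Python. This is a minimal sketch: the 0.5 eV barrier and the two temperatures are arbitrary example values, not data for any particular reaction.

import math

BOLTZMANN_K = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_factor(activation_energy_j, temperature_k):
    """Population factor e^(-E/kT) for surmounting a barrier E at temperature T."""
    return math.exp(-activation_energy_j / (BOLTZMANN_K * temperature_k))

activation_energy = 0.5 * 1.602176634e-19  # an assumed 0.5 eV barrier, in joules
for temperature in (298.0, 350.0):
    print(temperature, boltzmann_factor(activation_energy, temperature))
# A ~50 K rise increases the factor by more than an order of magnitude,
# illustrating the exponential dependence described by the Arrhenius equation.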
Biology
thumb|Basic overview of energy and human life.
In biology, energy is an attribute of all biological systems from the biosphere to the smallest living organism. Within an organism it is responsible for growth and development of a biological cell or an organelle of a biological organism. Energy is thus often said to be stored by cells in the structures of molecules of substances such as carbohydrates (including sugars), lipids, and proteins, which release energy when reacted with oxygen in respiration. In human terms, the human equivalent (H-e) (Human energy conversion) indicates, for a given amount of energy expenditure, the relative quantity of energy needed for human metabolism, assuming an average human energy expenditure of 12,500 kJ per day and a basal metabolic rate of 80 watts. For example, if our bodies run (on average) at 80 watts, then a light bulb running at 100 watts is running at 1.25 human equivalents (100 ÷ 80) i.e. 1.25 H-e. For a difficult task of only a few seconds' duration, a person can put out thousands of watts, many times the 746 watts in one official horsepower. For tasks lasting a few minutes, a fit human can generate perhaps 1,000 watts. For an activity that must be sustained for an hour, output drops to around 300; for an activity kept up all day, 150 watts is about the maximum. The human equivalent assists understanding of energy flows in physical and biological systems by expressing energy units in human terms: it provides a "feel" for the use of a given amount of energy.Bicycle calculator - speed, weight, wattage etc. .
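The human-equivalent figure is simply a ratio against the assumed 80-watt basal rate, as the sketch below shows; the appliance wattages other than the 100 W bulb are invented for illustration.

BASAL_METABOLIC_RATE_W = 80.0  # assumed average human power output, as in the text

def human_equivalent(power_watts):
    """Express a power in multiples of the assumed 80 W human basal rate (H-e)."""
    return power_watts / BASAL_METABOLIC_RATE_W

for device, watts in [("light bulb", 100.0), ("laptop", 40.0), ("electric kettle", 2000.0)]:
    print(device, round(human_equivalent(watts), 2), "H-e")
# The light bulb comes out at 1.25 H-e, matching the figure in the text.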
Sunlight is also captured by plants as chemical potential energy in photosynthesis, when carbon dioxide and water (two low-energy compounds) are converted into the high-energy compounds carbohydrates, lipids, and proteins. Plants also release oxygen during photosynthesis, which is utilized by living organisms as an electron acceptor, to release the energy of carbohydrates, lipids, and proteins. Release of the energy stored during photosynthesis as heat or light may be triggered suddenly by a spark, in a forest fire, or it may be made available more slowly for animal or human metabolism, when these molecules are ingested, and catabolism is triggered by enzyme action.
Any living organism relies on an external source of energy—radiation from the Sun in the case of green plants, chemical energy in some form in the case of animals—to be able to grow and reproduce. The daily 1500–2000 Calories (6–8 MJ) recommended for a human adult are taken as a combination of oxygen and food molecules, the latter mostly carbohydrates and fats, of which glucose (C6H12O6) and stearin (C57H110O6) are convenient examples. The food molecules are oxidised to carbon dioxide and water in the mitochondria
C6H12O6 + 6O2 → 6CO2 + 6H2O
C57H110O6 + 81.5O2 → 57CO2 + 55H2O
and some of the energy is used to convert ADP into ATP.
ADP + HPO42− → ATP + H2O
The rest of the chemical energy in O2 and the carbohydrate or fat is converted into heat: the ATP is used as a sort of "energy currency", and some of the chemical energy it contains is used for other metabolism when ATP reacts with OH groups and eventually splits into ADP and phosphate (at each stage of a metabolic pathway, some chemical energy is converted into heat). Only a tiny fraction of the original chemical energy is used for work:These examples are solely for illustration, as it is not the energy available for work which limits the performance of the athlete but the power output of the sprinter and the force of the weightlifter. A worker stacking shelves in a supermarket does more work (in the physical sense) than either of the athletes, but does it more slowly.
gain in kinetic energy of a sprinter during a 100 m race: 4 kJ
gain in gravitational potential energy of a 150 kg weight lifted through 2 metres: 3 kJ (a short arithmetic check of these figures follows this list)
Daily food intake of a normal adult: 6–8 MJ
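The first two figures follow from the familiar formulas for kinetic energy (½mv²) and gravitational potential energy (mgh). A minimal check in Python; the sprinter's mass and speed are assumed round values chosen to reproduce the quoted 4 kJ, not data from the source.

# Rough check of the kinetic- and potential-energy figures quoted above.
sprinter_mass_kg = 80.0        # assumed sprinter mass (not given in the text)
sprinter_speed_m_s = 10.0      # roughly 100 m in 10 s
kinetic_energy_j = 0.5 * sprinter_mass_kg * sprinter_speed_m_s ** 2
print(round(kinetic_energy_j))            # 4000 J = 4 kJ

weight_mass_kg = 150.0
lift_height_m = 2.0
g = 9.81                       # standard gravity, m/s^2
potential_energy_j = weight_mass_kg * g * lift_height_m
print(round(potential_energy_j))          # about 2943 J, i.e. roughly 3 kJ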
It would appear that living organisms are remarkably inefficient (in the physical sense) in their use of the energy they receive (chemical energy or radiation), and it is true that most real machines manage higher efficiencies. In growing organisms the energy that is converted to heat serves a vital purpose, as it allows the organism tissue to be highly ordered with regard to the molecules it is built from. The second law of thermodynamics states that energy (and matter) tends to become more evenly spread out across the universe: to concentrate energy (or matter) in one specific place, it is necessary to spread out a greater amount of energy (as heat) across the remainder of the universe ("the surroundings").Crystals are another example of highly ordered systems that exist in nature: in this case too, the order is associated with the transfer of a large amount of heat (known as the lattice energy) to the surroundings. Simpler organisms can achieve higher energy efficiencies than more complex ones, but the complex organisms can occupy ecological niches that are not available to their simpler brethren. The conversion of a portion of the chemical energy to heat at each step in a metabolic pathway is the physical reason behind the pyramid of biomass observed in ecology: to take just the first step in the food chain, of the estimated 124.7 Pg/a of carbon that is fixed by photosynthesis, 64.3 Pg/a (52%) are used for the metabolism of green plants,Ito, Akihito; Oikawa, Takehisa (2004). "Global Mapping of Terrestrial Primary Productivity and Light-Use Efficiency with a Process-Based Model." in Shiyomi, M. et al. (Eds.) Global Environmental Change in the Ocean and on Land. pp. 343–58. i.e. reconverted into carbon dioxide and heat.
Earth sciences
In geology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior, while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations brought about by solar energy on the atmosphere of the planet Earth.
Sunlight may be stored as gravitational potential energy after it strikes the Earth, as (for example) water evaporates from oceans and is deposited upon mountains (where, after being released at a hydroelectric dam, it can be used to drive turbines or generators to produce electricity). Sunlight also drives many weather phenomena, save those generated by volcanic events. An example of a solar-mediated weather event is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, give up some of their thermal energy suddenly to power a few days of violent air movement.
In a slower process, radioactive decay of atoms in the core of the Earth releases heat. This thermal energy drives plate tectonics and may lift mountains, via orogenesis. This slow lifting represents a kind of gravitational potential energy storage of the thermal energy, which may be later released to active kinetic energy in landslides, after a triggering event. Earthquakes also release stored elastic potential energy in rocks, a store that has been produced ultimately from the same radioactive heat sources. Thus, according to present understanding, familiar events such as landslides and earthquakes release energy that has been stored as potential energy in the Earth's gravitational field or elastic strain (mechanical potential energy) in rocks. Prior to this, they represent release of energy that has been stored in heavy atoms since the collapse of long-destroyed supernova stars created these atoms.
Cosmology
In cosmology and astronomy the phenomena of stars, novae, supernovae, quasars and gamma-ray bursts are the universe's highest-output energy transformations of matter. All stellar phenomena (including solar activity) are driven by various kinds of energy transformations. Energy in such transformations is either from gravitational collapse of matter (usually molecular hydrogen) into various classes of astronomical objects (stars, black holes, etc.), or from nuclear fusion (of lighter elements, primarily hydrogen). The nuclear fusion of hydrogen in the Sun also releases another store of potential energy which was created at the time of the Big Bang. At that time, according to theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This meant that hydrogen represents a store of potential energy that can be released by fusion. Such a fusion process is triggered by heat and pressure generated from gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into sunlight.
Quantum mechanics
In quantum mechanics, energy is defined in terms of the energy operator, Ê = iħ ∂/∂t, which acts as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. Its results can be considered as a definition of measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of a slowly changing (non-relativistic) wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level) which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for electromagnetic waves in a vacuum, the resulting energy states are related to the frequency by Planck's relation: E = hν (where h is Planck's constant and ν the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons.
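Planck's relation can be evaluated directly; the sketch below computes the energy of a single photon of roughly green visible light (the 5.4 × 10^14 Hz frequency is an illustrative value).

PLANCK_CONSTANT_J_S = 6.62607015e-34

def photon_energy(frequency_hz):
    """Energy of one quantum of light: E = h * nu."""
    return PLANCK_CONSTANT_J_S * frequency_hz

green_light_hz = 5.4e14  # roughly green visible light
print(photon_energy(green_light_hz))                    # about 3.6e-19 J per photon
print(photon_energy(green_light_hz) / 1.602176634e-19)  # about 2.2 eV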
Relativity
When calculating kinetic energy (work to accelerate a mass from zero speed to some finite speed) relativistically – using Lorentz transformations instead of Newtonian mechanics – Einstein discovered an unexpected by-product of these calculations to be an energy term which does not vanish at zero speed. He called it rest mass energy: energy which every mass must possess even when being at rest. The amount of energy is directly proportional to the mass of body:
E = mc²,
where
m is the mass,
c is the speed of light in vacuum,
E is the rest mass energy.
For example, consider electron–positron annihilation, in which the rest mass of individual particles is destroyed, but the inertia equivalent of the system of the two particles (its invariant mass) remains (since all energy is associated with mass), and this inertia and invariant mass is carried off by photons which individually are massless, but as a system retain their mass. This is a reversible process – the inverse process is called pair creation – in which the rest mass of particles is created from energy of two (or more) annihilating photons. In this system the matter (electrons and positrons) is destroyed and changed to non-matter energy (the photons). However, the total system mass and energy do not change during this interaction.
In general relativity, the stress–energy tensor serves as the source term for the gravitational field, in rough analogy to the way mass serves as the source term in the non-relativistic Newtonian approximation.
It is not uncommon to hear that energy is "equivalent" to mass. It would be more accurate to state that every energy has an inertia and gravity equivalent, and that because mass is a form of energy, mass too has inertia and gravity associated with it.
In classical physics, energy is a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the energy–momentum 4-vector). In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of space-time (= boosts).
Transformation
thumb|300px| A turbo generator transforms the energy of pressurised steam into electrical energy
Energy may be transformed between different forms at various efficiencies. Items that transform between these forms are called transducers. Examples of transducers include a battery, from chemical energy to electric energy; a dam: gravitational potential energy to kinetic energy of moving water (and the blades of a turbine) and ultimately to electric energy through an electric generator; or a heat engine, from heat to work.
There are strict limits to how efficiently heat can be converted into work in a cyclic process, e.g. in a heat engine, as described by Carnot's theorem and the second law of thermodynamics. However, some energy transformations can be quite efficient. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often determined by entropy (equal energy spread among all available degrees of freedom) considerations. In practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces.
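Carnot's theorem gives the ceiling explicitly: no cyclic engine working between a hot reservoir at temperature T_hot and a cold reservoir at T_cold can convert more than a fraction 1 − T_cold/T_hot of the heat into work. A minimal sketch with illustrative reservoir temperatures (not taken from the text):

def carnot_efficiency(t_hot_kelvin, t_cold_kelvin):
    """Maximum fraction of heat convertible to work by a cyclic engine (Carnot limit)."""
    return 1.0 - t_cold_kelvin / t_hot_kelvin

# Example: a boiler at ~800 K rejecting heat to surroundings at ~300 K.
print(round(carnot_efficiency(800.0, 300.0), 3))  # 0.625 -> at most 62.5% of the heat can become work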
Energy transformations in the universe over time are characterized by various kinds of potential energy that has been available since the Big Bang later being "released" (transformed to more active types of energy such as kinetic or radiant energy) when a triggering mechanism is available. Familiar examples of such processes include nuclear decay, in which energy is released that was originally "stored" in heavy isotopes (such as uranium and thorium), by nucleosynthesis, a process ultimately using the gravitational potential energy released from the gravitational collapse of supernovae, to store energy in the creation of these heavy elements before they were incorporated into the solar system and the Earth. This energy is triggered and released in nuclear fission bombs or in civil nuclear power generation. Similarly, in the case of a chemical explosion, chemical potential energy is transformed to kinetic energy and thermal energy in a very short time. Yet another example is that of a pendulum. At its highest points the kinetic energy is zero and the gravitational potential energy is at maximum. At its lowest point the kinetic energy is at maximum and is equal to the decrease of potential energy. If one (unrealistically) assumes that there is no friction or other losses, the conversion of energy between these processes would be perfect, and the pendulum would continue swinging forever.
Energy is also transferred from potential energy (E_p) to kinetic energy (E_k) and then back to potential energy constantly. This is referred to as conservation of energy. In this closed system, energy cannot be created or destroyed; therefore, the initial energy and the final energy will be equal to each other. This can be demonstrated by the following:

E_p(initial) + E_k(initial) = E_p(final) + E_k(final)

The equation can then be simplified further since E_p = mgh (mass times acceleration due to gravity times the height) and E_k = ½mv² (half mass times velocity squared). Then the total amount of energy can be found by adding E_p + E_k = E_total.
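A quick numerical check of the pendulum statement above: with no friction, the potential energy mgh available at the highest point equals the kinetic energy ½mv² at the lowest point. The mass and drop height below are arbitrary example values.

import math

mass_kg = 1.0
drop_height_m = 0.5
g = 9.81  # standard gravity, m/s^2

potential_energy_top = mass_kg * g * drop_height_m           # PE = mgh, KE = 0 at the top
speed_at_bottom = math.sqrt(2.0 * g * drop_height_m)          # from mgh = (1/2)mv^2
kinetic_energy_bottom = 0.5 * mass_kg * speed_at_bottom ** 2  # KE at the lowest point

print(round(potential_energy_top, 3), round(kinetic_energy_bottom, 3))  # both 4.905 J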
Conservation of energy and mass in transformation
Energy gives rise to weight when it is trapped in a system with zero momentum, where it can be weighed. It is also equivalent to mass, and this mass is always associated with it. Mass is also equivalent to a certain amount of energy, and likewise always appears associated with it, as described in mass-energy equivalence. The formula E = mc², derived by Albert Einstein (1905) quantifies the relationship between rest-mass and rest-energy within the concept of special relativity. In different theoretical frameworks, similar formulas were derived by J. J. Thomson (1881), Henri Poincaré (1900), Friedrich Hasenöhrl (1904) and others (see Mass-energy equivalence#History for further information).
Matter may be converted to energy (and vice versa), but mass cannot ever be destroyed; rather, mass/energy equivalence remains a constant for both the matter and the energy, during any process when they are converted into each other. However, since c² is extremely large relative to ordinary human scales, the conversion of an ordinary amount of matter (for example, 1 kg) to other forms of energy (such as heat, light, and other radiation) can liberate tremendous amounts of energy (~9×10^16 joules = 21 megatons of TNT), as can be seen in nuclear reactors and nuclear weapons. Conversely, the mass equivalent of a unit of energy is minuscule, which is why a loss of energy (loss of mass) from most systems is difficult to measure by weight, unless the energy loss is very large. Examples of energy transformation into matter (i.e., kinetic energy into particles with rest mass) are found in high-energy nuclear physics.
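The figures quoted for one kilogram follow directly from E = mc²; a minimal sketch (the joules-per-megaton-of-TNT factor, 4.184 × 10^15 J, is the standard convention):

SPEED_OF_LIGHT_M_S = 299_792_458.0
JOULES_PER_MEGATON_TNT = 4.184e15  # conventional definition of a megaton of TNT

mass_kg = 1.0
rest_energy_j = mass_kg * SPEED_OF_LIGHT_M_S ** 2
print(rest_energy_j)                                  # about 8.99e16 J
print(round(rest_energy_j / JOULES_PER_MEGATON_TNT))  # about 21 megatons of TNT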
Reversible and non-reversible transformations
Thermodynamics divides energy transformation into two kinds: reversible processes and irreversible processes. An irreversible process is one in which energy is dissipated (spread) into empty energy states available in a volume, from which it cannot be recovered into more concentrated forms (fewer quantum states), without degradation of even more energy. A reversible process is one in which this sort of dissipation does not happen. For example, conversion of energy from one type of potential field to another, is reversible, as in the pendulum system described above. In processes where heat is generated, quantum states of lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy, from which it cannot be recovered, in order to be converted with 100% efficiency into other forms of energy. In this case, the energy must partly stay as heat, and cannot be completely recovered as usable energy, except at the price of an increase in some other kind of heat-like increase in disorder in quantum states, in the universe (such as an expansion of matter, or a randomisation in a crystal).
As the universe evolves in time, more and more of its energy becomes trapped in irreversible states (i.e., as heat or other kinds of increases in disorder). This has been referred to as the inevitable thermodynamic heat death of the universe. In this heat death the energy of the universe does not change, but the fraction of energy which is available to do work through a heat engine, or be transformed to other usable forms of energy (through the use of generators attached to heat engines), grows less and less.
Conservation of energy
According to conservation of energy, energy can neither be created (produced) nor destroyed by itself. It can only be transformed. The total inflow of energy into a system must equal the total outflow of energy from the system, plus the change in the energy contained within the system. Energy is subject to a strict global conservation law; that is, whenever one measures (or calculates) the total energy of a system of particles whose interactions do not depend explicitly on time, it is found that the total energy of the system always remains constant.Berkeley Physics Course Volume 1. Charles Kittel, Walter D Knight and Malvin A Ruderman
Richard Feynman said during a 1961 lecture:
Most kinds of energy (with gravitational energy being a notable exception) are subject to strict local conservation laws as well. In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa.The Laws of Thermodynamics including careful definitions of energy, free energy, et cetera.
This law is a fundamental principle of physics. As shown rigorously by Noether's theorem, the conservation of energy is a mathematical consequence of translational symmetry of time, a property of most phenomena below the cosmic scale that makes them independent of their locations on the time coordinate. Put differently, yesterday, today, and tomorrow are physically indistinguishable. This is because energy is the quantity which is canonical conjugate to time. This mathematical entanglement of energy and time also results in the uncertainty principle - it is impossible to define the exact amount of energy during any definite time interval. The uncertainty principle should not be confused with energy conservation - rather it provides mathematical limits to which energy can in principle be defined and measured.
Each of the basic forces of nature is associated with a different type of potential energy, and all types of potential energy (like all other types of energy) appear as system mass, whenever present. For example, a compressed spring will be slightly more massive than before it was compressed. Likewise, whenever energy is transferred between systems by any mechanism, an associated mass is transferred with it.
In quantum mechanics energy is expressed using the Hamiltonian operator. On any time scale, the uncertainty in the energy is given by

ΔE Δt ≥ ħ/2

which is similar in form to the Heisenberg Uncertainty Principle (but not really mathematically equivalent thereto, since H and t are not dynamically conjugate variables, neither in classical nor in quantum mechanics).
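For a sense of scale, the bound can be rearranged to give the minimum energy uncertainty over a chosen time interval; the one-femtosecond interval below is an arbitrary example.

HBAR_J_S = 1.054571817e-34  # reduced Planck constant

def min_energy_uncertainty(delta_t_seconds):
    """Lower bound on the energy uncertainty over a time interval delta_t."""
    return HBAR_J_S / (2.0 * delta_t_seconds)

print(min_energy_uncertainty(1e-15))  # about 5.3e-20 J for a one-femtosecond interval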
In particle physics, this inequality permits a qualitative understanding of virtual particles, which carry momentum and whose exchange with real particles is responsible for the creation of all known fundamental forces (more accurately known as fundamental interactions). Virtual photons (which are simply the lowest quantum mechanical energy state of photons) are also responsible for the electrostatic interaction between electric charges (which results in Coulomb's law), for spontaneous radiative decay of excited atomic and nuclear states, for the Casimir force, for van der Waals bond forces and some other observable phenomena.
Energy transfer
Closed systems
Energy transfer can be considered for the special case of systems which are closed to transfers of matter. The portion of the energy which is transferred by conservative forces over a distance is measured as the work the source system does on the receiving system. The portion of the energy which does not do work during the transfer is called heat.Although heat is "wasted" energy for a specific energy transfer,(see: waste heat) it can often be harnessed to do useful work in subsequent interactions. However, the maximum energy that can be "recycled" from such recovery processes is limited by the second law of thermodynamics. Energy can be transferred between systems in a variety of ways. Examples include the transmission of electromagnetic energy via photons, physical collisions which transfer kinetic energy,The mechanism for most macroscopic physical collisions is actually electromagnetic, but it is very common to simplify the interaction by ignoring the mechanism of collision and just calculate the beginning and end result. and the conductive transfer of thermal energy.
Energy is strictly conserved and is also locally conserved wherever it can be defined. In thermodynamics, for closed systems, the process of energy transfer is described by the first law:There are several sign conventions for this equation. Here, the signs in this equation follow the IUPAC convention.

ΔE = W + Q

where ΔE is the amount of energy transferred, W represents the work done on the system, and Q represents the heat flow into the system. As a simplification, the heat term, Q, is sometimes ignored, especially when the thermal efficiency of the transfer is high:

ΔE = W

This simplified equation is the one used to define the joule, for example.
Open systems
Beyond the constraints of closed systems, open systems can gain or lose energy in association with matter transfer (both of these processes are illustrated by fueling an auto, a system which gains in energy thereby, without addition of either work or heat). Denoting this energy by E_matter, one may write

ΔE = W + Q + E_matter
Thermodynamics
Internal energy
Internal energy is the sum of all microscopic forms of energy of a system. It is the energy needed to create the system. It is related to the potential energy, e.g., molecular structure, crystal structure, and other geometric aspects, as well as the motion of the particles, in form of kinetic energy. Thermodynamics is chiefly concerned with changes in internal energy and not its absolute value, which is impossible to determine with thermodynamics alone.I. Klotz, R. Rosenberg, Chemical Thermodynamics - Basic Concepts and Methods, 7th ed., Wiley (2008), p.39
First law of thermodynamics
The first law of thermodynamics asserts that energy (but not necessarily thermodynamic free energy) is always conserved and that heat flow is a form of energy transfer. For homogeneous systems, with a well-defined temperature and pressure, a commonly used corollary of the first law is that, for a system subject only to pressure forces and heat transfer (e.g., a cylinder-full of gas) without chemical changes, the differential change in the internal energy of the system (with a gain in energy signified by a positive quantity) is given as
dU = T dS − P dV,
where the first term on the right is the heat transferred into the system, expressed in terms of temperature T and entropy S (in which entropy increases and the change dS is positive when the system is heated), and the last term on the right hand side is identified as work done on the system, where pressure is P and volume V (the negative sign results since compression of the system requires work to be done on it and so the volume change, dV, is negative when work is done on the system).
This equation is highly specific, ignoring all chemical, electrical, nuclear, and gravitational forces, effects such as advection of any form of energy other than heat and pV-work. The general formulation of the first law (i.e., conservation of energy) is valid even in situations in which the system is not homogeneous. For these cases the change in internal energy of a closed system is expressed in a general form by
ΔU = Q + W
where Q is the heat supplied to the system and W is the work applied to the system.
Equipartition of energy
The energy of a mechanical harmonic oscillator (a mass on a spring) is alternately kinetic and potential. At two points in the oscillation cycle it is entirely kinetic, and at two other points it is entirely potential. Over the whole cycle, or over many cycles, the net energy is thus equally split between kinetic and potential. This is called the equipartition principle: the total energy of a system with many degrees of freedom is equally split among all available degrees of freedom.
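For the oscillator just described, the equal split can be checked directly (a standard textbook calculation, not taken from this article). For a mass m on a spring of stiffness k, with displacement x(t) = A cos(ωt) and ω² = k/m, the averages over one full cycle are

$$\left\langle \tfrac{1}{2}kx^{2}\right\rangle = \tfrac{1}{4}kA^{2}, \qquad \left\langle \tfrac{1}{2}m\dot{x}^{2}\right\rangle = \tfrac{1}{4}m\omega^{2}A^{2} = \tfrac{1}{4}kA^{2},$$

so the kinetic and potential contributions each average to half of the total energy E = ½kA².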
This principle is vitally important to understanding the behaviour of a quantity closely related to energy, called entropy. Entropy is a measure of the evenness of the distribution of energy between the parts of a system. When an isolated system is given more degrees of freedom (i.e., given new available energy states that are the same as existing states), then the total energy spreads over all available degrees equally, without distinction between "new" and "old" degrees. This mathematical result is called the second law of thermodynamics.
See also
Combustion
Index of energy articles
Index of wave articles
Orders of magnitude (energy)
Transfer energy
Notes
References
Further reading
External links
Differences between Heat and Thermal energy - BioCab
Category:State functions | 9,649 | 2017-01 |
Avicenna | Avicenna or Ibn-Sīnā (c. 980 – June 1037) was a Persian polymath who is regarded as one of the most significant thinkers and writers of the Islamic Golden Age.
"He was born in 370/980 in Afshana, his mother's home, near Bukhara. His native language was Persian" (from "Ibn Sina ("Avicenna")", Encyclopedia of Islam, Brill, second edition (2009). Accessed via Brill Online at www.encislam.brill.nl).
"Avicenna was the greatest of all Persian thinkers; as physician and metaphysician ..." (excerpt from A.J. Arberry, Avicenna on Theology, KAZI PUBN INC, 1995).
"Whereas the name of Avicenna (Ibn Sina, died 1037) is generally listed as chronologically first among noteworthy Iranian philosophers, recent evidence has revealed previous existence of Ismaili philosophical systems with a structure no less complete than of Avicenna" (from p. 74 of Henry Corbin, The Voyage and the messenger: Iran and philosophy, North Atlantic Books, 1998.
Of the 450 works he is known to have written, around 240 have survived, including 150 on philosophy and 40 on medicine.
His most famous works are The Book of Healing, a philosophical and scientific encyclopedia, and The Canon of Medicine, a medical encyclopediaEdwin Clarke, Charles Donald O'Malley (1996), The human brain and spinal cord: a historical study illustrated by writings from antiquity to the twentieth century, Norman Publishing, p. 20 (ISBN 0-930405-25-0).Iris Bruijn (2009), Ship's Surgeons of the Dutch East India Company: Commerce and the progress of medicine in the eighteenth century, Amsterdam University Press, p. 26 (ISBN 90-8728-051-3). which became a standard medical text at many medieval universities and remained in use as late as 1650.e.g. at the universities of Montpellier and Leuven (see ). In 1973, Avicenna's Canon Of Medicine was reprinted in New York.Avicenna's Canon Of Medicine, by Cibeles Jolivette Gonzalez
Besides philosophy and medicine, Avicenna's corpus includes writings on astronomy, alchemy, geography and geology, psychology, Islamic theology, logic, mathematics, physics and poetry.
Name
Avicenna is a Latin corruption of the Arabic patronym Ibn-Sīnā, meaning "Son of Sina", Sina being a Persian masculine given name of uncertain etymology. However, Avicenna was not the son but the great-great-grandson of a man named Sina. His full name was Abū ʿAlī al-Ḥusayn ibn ʿAbd Allāh ibn Al-Hasan ibn Ali ibn Sīnā.
Circumstances
Ibn Sina created an extensive corpus of works during what is commonly known as the Islamic Golden Age, in which the translations of Greco-Roman, Persian, and Indian texts were studied extensively. Greco-Roman (Mid- and Neo-Platonic, and Aristotelian) texts translated by the Kindi school were commented on, redacted and developed substantially by Islamic intellectuals, who also built upon Persian and Indian mathematical systems, astronomy, algebra, trigonometry and medicine. The Samanid dynasty in the eastern part of Persia, Greater Khorasan and Central Asia as well as the Buyid dynasty in the western part of Persia and Iraq provided a thriving atmosphere for scholarly and cultural development. Under the Samanids, Bukhara rivaled Baghdad as a cultural capital of the Islamic world.
The study of the Quran and the Hadith thrived in such a scholarly atmosphere. Philosophy, Fiqh and theology (kalaam) were further developed, most noticeably by Avicenna and his opponents. Al-Razi and Al-Farabi had provided methodology and knowledge in medicine and philosophy. Avicenna had access to the great libraries of Balkh, Khwarezm, Gorgan, Rey, Isfahan and Hamadan. Various texts (such as the 'Ahd with Bahmanyar) show that he debated philosophical points with the greatest scholars of the time. Aruzi Samarqandi describes how before Avicenna left Khwarezm he had met Al-Biruni (a famous scientist and astronomer), Abu Nasr Iraqi (a renowned mathematician), Abu Sahl Masihi (a respected philosopher) and Abu al-Khayr Khammar (a great physician).
Biography
Early life
Avicenna was born in Afshana, a village near Bukhara (in present-day Uzbekistan), the capital of the Samanids, a Persian dynasty in Central Asia and Greater Khorasan. His mother, named Setareh, was from Bukhara;"Avicenna" Encyclopædia Britannica, Concise Online Version, 2006; D. Gutas, "Avicenna", in Encyclopædia Iranica, Online Version 2006; Avicenna in Encyclopedia of Islam: © 1999 Koninklijke Brill NV, Leiden, The Netherlands his father, Abdullah, was a respected Ismaili scholar from Balkh, an important town of the Samanid Empire, in what is today Balkh Province, Afghanistan. His father worked for the Samanid government, a Sunni regional power, in the village of Kharmasain. After five years, his younger brother, Mahmoud, was born. Avicenna began learning the Quran and literature at an early age, and by the time he was ten years old he had essentially mastered both.Khorasani, Sharaf Addin Sharaf, Islamic Great Encyclopedia, p. 1, 1367 solar
A number of theories have been proposed regarding Avicenna's madhab (school of thought within Islamic jurisprudence). The medieval historian Ẓahīr al-dīn al-Bayhaqī (d. 1169) considered Avicenna to be a follower of the Brethren of Purity. On the other hand, Dimitri Gutas, along with Aisha Khan and Jules J. Janssens, demonstrated that Avicenna was a Sunni Hanafi. excerpt: "... [Dimitri Gutas's Avicenna's maḏhab] convincingly demonstrates that I.S. was a sunnî-Ḥanafî." However, the 14th-century Shia faqih Nurullah Shushtari, according to Seyyed Hossein Nasr, maintained that he was most likely a Twelver Shia.Seyyed Hossein Nasr, An introduction to Islamic cosmological doctrines, Published by State University of New York Press, ISBN 0-7914-1515-5, p. 183 Conversely, Sharaf Khorasani, citing Avicenna's rejection of an invitation to the court of the Sunni governor Sultan Mahmoud Ghazanavi, believes that Avicenna was an Ismaili.Sharaf Khorasani, Islamic Great Encyclopedia, vol. 1, p. 3, 1367 solar Similar disagreements exist on the background of Avicenna's family: whereas some writers considered them Sunni, some more recent writers contended that they were Shia.
According to his autobiography, Avicenna had memorised the entire Quran by the age of 10. He learned Indian arithmetic from an Indian greengrocer, Mahmoud Massahi,Khorasani Sharaf, Islamic Great Encyclopedia, vol. 1, p. 1, 1367 solar and he began to learn more from a wandering scholar who gained a livelihood by curing the sick and teaching the young. He also studied Fiqh (Islamic jurisprudence) under the Sunni Hanafi scholar Ismail al-Zahid.Jorge J. E. Gracia and Timothy B. Noone (2003), A Companion to Philosophy in the Middle Ages, p. 196, Blackwell Publishing, ISBN 0-631-21673-1. Avicenna was also taught some philosophy, from books such as Porphyry's Introduction (Isagoge), Euclid's Elements, and Ptolemy's Almagest, by Abu Abdullah Nateli, an undistinguished philosopher who claimed to practice philosophy.Sharaf Khorasani, Islamic Great Encyclopedia, vol. 1, p. 1, 1367 solar
As a teenager, he was greatly troubled by the Metaphysics of Aristotle, which he could not understand until he read al-Farabi's commentary on the work. For the next year and a half, he studied philosophy, in which he encountered greater obstacles. In such moments of baffled inquiry, he would leave his books, perform the requisite ablutions, then go to the mosque, and continue in prayer till light broke on his difficulties. Deep into the night, he would continue his studies, and even in his dreams problems would pursue him and work out their solution. Forty times, it is said, he read through the Metaphysics of Aristotle, till the words were imprinted on his memory; but their meaning was hopelessly obscure, until one day they found illumination, from the little commentary by Farabi, which he bought at a bookstall for the small sum of three dirhams. So great was his joy at the discovery, made with the help of a work from which he had expected only mystery, that he hastened to return thanks to God, and bestowed alms upon the poor.
He turned to medicine at 16, and not only learned medical theory, but also by gratuitous attendance of the sick had, according to his own account, discovered new methods of treatment. The teenager achieved full status as a qualified physician at age 18, and found that "Medicine is no hard and thorny science, like mathematics and metaphysics, so I soon made great progress; I became an excellent doctor and began to treat patients, using approved remedies." The youthful physician's fame spread quickly, and he treated many patients without asking for payment.
Adulthood
thumb|upright|A drawing of Avicenna from 1271
Ibn Sina's first appointment was that of physician to the emir, Nuh II, who owed him his recovery from a dangerous illness (997). Ibn Sina's chief reward for this service was access to the royal library of the Samanids, well-known patrons of scholarship and scholars. When the library was destroyed by fire not long after, the enemies of Ibn Sina accused him of burning it, in order for ever to conceal the sources of his knowledge. Meanwhile, he assisted his father in his financial labors, but still found time to write some of his earliest works.
When Ibn Sina was 22 years old, he lost his father. The Samanid dynasty came to its end in December 1004. Ibn Sina seems to have declined the offers of Mahmud of Ghazni, and proceeded westwards to Urgench in modern Turkmenistan, where the vizier, regarded as a friend of scholars, gave him a small monthly stipend. The pay was small, however, so Ibn Sina wandered from place to place through the districts of Nishapur and Merv to the borders of Khorasan, seeking an opening for his talents. Qabus, the generous ruler of Tabaristan, himself a poet and a scholar, with whom Ibn Sina had expected to find asylum, had around that date (1012) been starved to death by his own troops, who had revolted. Ibn Sina himself was at this time stricken by a severe illness. Finally, at Gorgan, near the Caspian Sea, Ibn Sina met with a friend, who bought a dwelling near his own house in which Ibn Sina lectured on logic and astronomy. Several of Ibn Sina's treatises were written for this patron; and the commencement of his Canon of Medicine also dates from his stay in Hyrcania.
Ibn Sina subsequently settled at Rey, in the vicinity of modern Tehran, the home town of Rhazes, where Majd Addaula, a son of the last Buwayhid emir, was nominal ruler under the regency of his mother (Seyyedeh Khatun). About thirty of Ibn Sina's shorter works are said to have been composed in Rey. Constant feuds which raged between the regent and her second son, Shams al-Daula, however, compelled the scholar to quit the place. After a brief sojourn at Qazvin he passed southwards to Hamadan where Shams al-Daula, another Buwayhid emir, had established himself. At first, Ibn Sina entered into the service of a high-born lady; but the emir, hearing of his arrival, called him in as medical attendant, and sent him back with presents to his dwelling. Ibn Sina was even raised to the office of vizier. Subsequently, however, the emir decreed that he should be banished from the country. Ibn Sina remained hidden for forty days in sheikh Ahmed Fadhel's house, until a fresh attack of illness induced the emir to restore him to his post. Even during this perturbed time, Ibn Sina persevered with his studies and teaching. Every evening, extracts from his great works, the Canon and the Sanatio, were dictated and explained to his pupils. On the death of the emir, Ibn Sina ceased to be vizier and hid himself in the house of an apothecary, where, with intense assiduity, he continued the composition of his works.
Meanwhile, he had written to Abu Ya'far, the prefect of the dynamic city of Isfahan, offering his services. The new emir of Hamadan, hearing of this correspondence and discovering where Ibn Sina was hiding, incarcerated him in a fortress. War meanwhile continued between the rulers of Isfahan and Hamadan; in 1024 the former captured Hamadan and its towns, expelling the Tajik mercenaries. When the storm had passed, Ibn Sina returned with the emir to Hamadan, and carried on his literary labors. Later, however, accompanied by his brother, a favorite pupil, and two slaves, Ibn Sina escaped from the city in the dress of a Sufi ascetic. After a perilous journey, they reached Isfahan, receiving an honorable welcome from the prince.
Later life and death
thumb|upright|The first page of a manuscript of Avicenna's Canon, dated 1596/7 (Yale, Medical Historical Library, Cushing Arabic ms. 5)
thumb|upright|Gravestone of Avicenna, Hamedan, Iran
The remaining ten or twelve years of Ibn Sīnā's life were spent in the service of the Kakuyid ruler Muhammad ibn Rustam Dushmanziyar (also known as Ala al-Dawla), whom he accompanied as physician and general literary and scientific adviser, even in his numerous campaigns.
During these years he began to study literary matters and philology, instigated, it is asserted, by criticisms on his style. A severe colic, which seized him on the march of the army against Hamadan, was checked by remedies so violent that Ibn Sina could scarcely stand. On a similar occasion the disease returned; with difficulty he reached Hamadan, where, finding the disease gaining ground, he refused to keep up the regimen imposed, and resigned himself to his fate.
His friends advised him to slow down and take life moderately. He refused, however, stating that: "I prefer a short life with width to a narrow one with length". On his deathbed remorse seized him; he bestowed his goods on the poor, restored unjust gains, freed his slaves, and read through the Quran every three days until his death. He died in June 1037, in his fifty-eighth year, in the month of Ramadan and was buried in Hamadan, Iran.
Philosophy
Ibn Sīnā wrote extensively on early Islamic philosophy, especially the subjects logic, ethics, and metaphysics, including treatises named Logic and Metaphysics. Most of his works were written in Arabic – then the language of science in the Middle East – and some in Persian. Of linguistic significance even to this day are a few books that he wrote in nearly pure Persian language (particularly the Danishnamah-yi 'Ala', Philosophy for Ala' ad-Dawla'). Ibn Sīnā's commentaries on Aristotle often criticized the philosopher, encouraging a lively debate in the spirit of ijtihad.
Avicenna's Neoplatonic scheme of "emanations" became fundamental in the Kalam (school of theological discourse) in the 12th century.Nahyan A. G. Fancy (2006), p. 80–81, "Pulmonary Transit and Bodily Resurrection: The Interaction of Medicine, Philosophy and Religion in the Works of Ibn al-Nafīs (d. 1288)", Electronic Theses and Dissertations, University of Notre Dame
His Book of Healing became available in Europe in partial Latin translation some fifty years after its composition, under the title Sufficientia, and some authors have identified a "Latin Avicennism" as flourishing for some time, paralleling the more influential Latin Averroism, but suppressed by the Parisian decrees of 1210 and 1215.c.f. e.g.
Henry Corbin, History Of Islamic Philosophy, Routledge, 2014, p. 174.
Henry Corbin, Avicenna and the Visionary Recital, Princeton University Press, 2014, p. 103.
Avicenna's psychology and theory of knowledge influenced William of Auvergne, Bishop of Paris and Albertus Magnus, while his metaphysics influenced the thought of Thomas Aquinas.
Metaphysical doctrine
Early Islamic philosophy and Islamic metaphysics, imbued as it is with Islamic theology, distinguishes more clearly than Aristotelianism between essence and existence. Whereas existence is the domain of the contingent and the accidental, essence endures within a being beyond the accidental. The philosophy of Ibn Sīnā, particularly that part relating to metaphysics, owes much to al-Farabi. The search for a definitive Islamic philosophy separate from Occasionalism can be seen in what is left of his work.
Following al-Farabi's lead, Avicenna initiated a full-fledged inquiry into the question of being, in which he distinguished between essence (Mahiat) and existence (Wujud). He argued that the fact of existence can not be inferred from or accounted for by the essence of existing things, and that form and matter by themselves cannot interact and originate the movement of the universe or the progressive actualization of existing things. Existence must, therefore, be due to an agent-cause that necessitates, imparts, gives, or adds existence to an essence. To do so, the cause must be an existing thing and coexist with its effect.
Avicenna's consideration of the essence-attributes question may be elucidated in terms of his ontological analysis of the modalities of being; namely impossibility, contingency, and necessity. Avicenna argued that the impossible being is that which cannot exist, while the contingent in itself (mumkin bi-dhatihi) has the potentiality to be or not to be without entailing a contradiction. When actualized, the contingent becomes a 'necessary existent due to what is other than itself' (wajib al-wujud bi-ghayrihi). Thus, contingency-in-itself is potential beingness that could eventually be actualized by an external cause other than itself. The metaphysical structures of necessity and contingency are different. Necessary being due to itself (wajib al-wujud bi-dhatihi) is true in itself, while the contingent being is 'false in itself' and 'true due to something else other than itself'. The necessary is the source of its own being without borrowed existence. It is what always exists.Avicenna, Kitab al-shifa', Metaphysics II, (eds.) G. C. Anawati, Ibrahim Madkour, Sa'id Zayed (Cairo, 1975), p. 36Nader El-Bizri, "Avicenna and Essentialism," Review of Metaphysics, Vol. 54 (2001), pp. 753–778
The Necessary exists 'due-to-Its-Self', and has no quiddity/essence (mahiyya) other than existence (wujud). Furthermore, It is 'One' (wahid ahad)Avicenna, Metaphysica of Avicenna, trans. Parviz Morewedge (New York, 1973), p. 43. since there cannot be more than one 'Necessary-Existent-due-to-Itself' without differentia (fasl) to distinguish them from each other. Yet, to require differentia entails that they exist 'due-to-themselves' as well as 'due to what is other than themselves'; and this is contradictory. However, if no differentia distinguishes them from each other, then there is no sense in which these 'Existents' are not one and the same.Nader El-Bizri, The Phenomenological Quest between Avicenna and Heidegger (Binghamton, N.Y.: Global Publications SUNY, 2000) Avicenna adds that the 'Necessary-Existent-due-to-Itself' has no genus (jins), nor a definition (hadd), nor a counterpart (nadd), nor an opposite (did), and is detached (bari) from matter (madda), quality (kayf), quantity (kam), place (ayn), situation (wad), and time (waqt).Avicenna, Kitab al-Hidaya, ed. Muhammad 'Abdu (Cairo, 1874), pp. 262–3Salem Mashran, al-Janib al-ilahi 'ind Ibn Sina (Damascus, 1992), p. 99Nader El-Bizri, "Being and Necessity: A Phenomenological Investigation of Avicenna's Metaphysics and Cosmology," in Islamic Philosophy and Occidental Phenomenology on the Perennial Issue of Microcosm and Macrocosm, ed. Anna-Teresa Tymieniecka (Dordrecht: Kluwer Academic Publishers, 2006), pp. 243–261
Avicenna's theology on metaphysical issues (ilāhiyyāt) has been criticized by some Islamic scholars, among them al-Ghazali, Ibn Taymiyya, and Ibn al-Qayyim.Ibn al-Qayyim, Eghaathat al-Lahfaan, Published: Al Ashqar University (2003) Printed by International Islamic Publishing House: Riyadh. While discussing the views of the theists among the Greek philosophers, namely Socrates, Plato, and Aristotle in Al-Munqidh min ad-Dalal ("Deliverance from Error"), al-Ghazali noted that the Greek philosophers "must be taxed with unbelief, as must their partisans among the Muslim philosophers, such as Ibn Sina and al-Farabi and their likes." He added that "None, however, of the Muslim philosophers engaged so much in transmitting Aristotle's lore
as did the two men just mentioned. [...] The sum of what we regard as the authentic philosophy of Aristotle, as transmitted by al-Farabi and Ibn Sina, can be reduced to three parts: a part which must be branded as unbelief; a part which must be stigmatized as innovation; and a part which need not be repudiated at all."
Al-Biruni correspondence
Correspondence between Ibn Sina (with his student Ahmad ibn 'Ali al-Ma'sumi) and Al-Biruni has survived in which they debated Aristotelian natural philosophy and the Peripatetic school. Abu Rayhan began by asking Avicenna eighteen questions, ten of which were criticisms of Aristotle's On the Heavens.Rafik Berjak and Muzaffar Iqbal, "Ibn Sina—Al-Biruni correspondence", Islam & Science, June 2003.
Theology
Avicenna was a devout Muslim and sought to reconcile rational philosophy with Islamic theology. His aim was to prove the existence of God and His creation of the world scientifically and through reason and logic.Lenn Evan Goodman (2003), Islamic Humanism, p. 8–9, Oxford University Press, ISBN 0-19-513580-6. Avicenna's views on Islamic theology (and philosophy) were enormously influential, forming part of the core of the curriculum at Islamic religious schools until the 19th century.James W. Morris (1992), "The Philosopher-Prophet in Avicenna's Political Philosophy", in C. Butterworth (ed.), The Political Aspects of Islamic Philosophy, ISBN 978-0-932885-07-4, Chapter 4, Cambridge Harvard University Press, pp.152–198 [p.156]. Avicenna wrote a number of short treatises dealing with Islamic theology. These included treatises on the prophets (whom he viewed as "inspired philosophers"), and also on various scientific and philosophical interpretations of the Quran, such as how Quranic cosmology corresponds to his own philosophical system. In general these treatises linked his philosophical writings to Islamic religious ideas; for example, the body's afterlife.
There are occasional brief hints and allusions in his longer works however that Avicenna considered philosophy as the only sensible way to distinguish real prophecy from illusion. He did not state this more clearly because of the political implications of such a theory, if prophecy could be questioned, and also because most of the time he was writing shorter works which concentrated on explaining his theories on philosophy and theology clearly, without digressing to consider epistemological matters which could only be properly considered by other philosophers.James W. Morris (1992), "The Philosopher-Prophet in Avicenna's Political Philosophy", in C. Butterworth (ed.), The Political Aspects of Islamic Philosophy, Chapter 4, Cambridge Harvard University Press, pp.152–198 [pp. 160–161].
Later interpretations of Avicenna's philosophy split into three different schools; those (such as al-Tusi) who continued to apply his philosophy as a system to interpret later political events and scientific advances; those (such as al-Razi) who considered Avicenna's theological works in isolation from his wider philosophical concerns; and those (such as al-Ghazali) who selectively used parts of his philosophy to support their own attempts to gain greater spiritual insights through a variety of mystical means. It was the theological interpretation championed by those such as al-Razi which eventually came to predominate in the madrasahs.James W. Morris (1992), "The Philosopher-Prophet in Avicenna's Political Philosophy", in C. Butterworth (ed.), The Political Aspects of Islamic Philosophy, Chapter 4, Cambridge Harvard University Press, pp.152–198 [pp. 156–158].
Avicenna memorized the Quran by the age of ten, and as an adult, he wrote five treatises commenting on suras from the Quran. One of these texts included the Proof of Prophecies, in which he comments on several Quranic verses and holds the Quran in high esteem. Avicenna argued that the Islamic prophets should be considered higher than philosophers.Jules Janssens (2004), "Avicenna and the Qur'an: A Survey of his Qur'anic commentaries", MIDEO 25, p. 177–192.
Thought experiments
While he was imprisoned in the castle of Fardajan near Hamadhan, Avicenna wrote his famous "Floating Man" – literally falling man – thought experiment to demonstrate human self-awareness and the substantiality and immateriality of the soul. Avicenna believed his "Floating Man" thought experiment demonstrated that the soul is a substance, and claimed humans cannot doubt their own consciousness, even in a situation that prevents all sensory data input. The thought experiment told its readers to imagine themselves created all at once while suspended in the air, isolated from all sensations, which includes no sensory contact with even their own bodies. He argued that, in this scenario, one would still have self-consciousness. Because it is conceivable that a person, suspended in air while cut off from sense experience, would still be capable of determining his own existence, the thought experiment points to the conclusions that the soul is a perfection, independent of the body, and an immaterial substance. The conceivability of this "Floating Man" indicates that the soul is perceived intellectually, which entails the soul's separateness from the body. Avicenna referred to the living human intelligence, particularly the active intellect, which he believed to be the hypostasis by which God communicates truth to the human mind and imparts order and intelligibility to nature. Following is an English translation of the argument:
However, Avicenna posited the brain as the place where reason interacts with sensation. Sensation prepares the soul to receive rational concepts from the universal Agent Intellect. The first knowledge of the flying person would be "I am," affirming his or her essence. That essence could not be the body, obviously, as the flying person has no sensation. Thus, the knowledge that "I am" is the core of a human being: the soul exists and is self-aware. Avicenna thus concluded that the idea of the self is not logically dependent on any physical thing, and that the soul should not be seen in relative terms, but as a primary given, a substance. The body is unnecessary; in relation to it, the soul is its perfection. In itself, the soul is an immaterial substance.
The Canon of Medicine
thumb|12th-century manuscript of the Canon, kept at the Azerbaijan National Academy of Sciences.
Avicenna authored a five-volume medical encyclopedia: The Canon of Medicine (Al-Qanun fi't-Tibb). It was used as the standard medical textbook in the Islamic world and Europe up to the 18th century. The Canon still plays an important role in Unani medicine.Indian Studies on Ibn Sina's Works by Hakim Syed Zillur Rahman, Avicenna (Scientific and Practical International Journal of Ibn Sino International Foundation, Tashkent/Uzbekistan. 1–2; 2003: 40–42
The Book of Healing
Earth sciences
Ibn Sīnā wrote on Earth sciences such as geology in The Book of Healing.Stephen Toulmin and June Goodfield (1965), The Ancestry of Science: The Discovery of Time, p. 64, University of Chicago Press (cf. The Contribution of Ibn Sina to the development of Earth sciences) While discussing the formation of mountains, he explained:
Philosophy of science
In the Al-Burhan (On Demonstration) section of The Book of Healing, Avicenna discussed the philosophy of science and described an early scientific method of inquiry. He discusses Aristotle's Posterior Analytics and significantly diverged from it on several points. Avicenna discussed the issue of a proper methodology for scientific inquiry and the question of "How does one acquire the first principles of a science?" He asked how a scientist would arrive at "the initial axioms or hypotheses of a deductive science without inferring them from some more basic premises?" He explains that the ideal situation is when one grasps that a "relation holds between the terms, which would allow for absolute, universal certainty." Avicenna then adds two further methods for arriving at the first principles: the ancient Aristotelian method of induction (istiqra), and the method of examination and experimentation (tajriba). Avicenna criticized Aristotelian induction, arguing that "it does not lead to the absolute, universal, and certain premises that it purports to provide." In its place, he develops a "method of experimentation as a means for scientific inquiry."
Logic
An early formal system of temporal logic was studied by Avicenna.History of logic: Arabic logic, Encyclopædia Britannica. Although he did not develop a real theory of temporal propositions, he did study the relationship between temporalis and the implication. Avicenna's work was further developed by Najm al-Dīn al-Qazwīnī al-Kātibī and became the dominant system of Islamic logic until modern times. Avicennian logic also influenced several early European logicians such as Albertus MagnusRichard F. Washell (1973), "Logic, Language, and Albert the Great", Journal of the History of Ideas 34 (3), p. 445–450 [445]. and William of Ockham.Kneale p. 229Kneale: p. 266; Ockham: Summa Logicae i. 14; Avicenna: Avicennae Opera Venice 1508 f87rb Avicenna endorsed the law of noncontradiction proposed by Aristotle, that a fact could not be both true and false at the same time and in the same sense of the terminology used. He stated, "Anyone who denies the law of noncontradiction should be beaten and burned until he admits that to be beaten is not the same as not to be beaten, and to be burned is not the same as not to be burned."Avicenna, Metaphysics, I; commenting on Aristotle, Topics I.11.105a4–5
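In modern notation (a present-day formalisation, not Avicenna's own symbolism), the principle he defends here is the statement that a proposition and its negation cannot both hold:

$$\neg\,(p \land \neg p)$$

for any proposition p.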
Physics
In mechanics, Ibn Sīnā, in The Book of Healing, developed a theory of motion, in which he made a distinction between the inclination (tendency to motion) and force of a projectile, and concluded that motion was a result of an inclination (mayl) transferred to the projectile by the thrower, and that projectile motion in a vacuum would not cease.Fernando Espinoza (2005). "An analysis of the historical development of ideas about motion and its implications for teaching", Physics Education 40 (2), p. 141. He viewed inclination as a permanent force whose effect is dissipated by external forces such as air resistance.A. Sayili (1987), "Ibn Sīnā and Buridan on the Motion of the Projectile", Annals of the New York Academy of Sciences 500 (1), p. 477 – 482: "It was a permanent force whose effect got dissipated only as a result of external agents such as air resistance. He is apparently the first to conceive such a permanent type of impressed virtue for non-natural motion."
The theory of motion presented by Avicenna was probably influenced by the 6th-century Alexandrian scholar John Philoponus. Avicenna's is a less sophisticated variant of the theory of impetus developed by Buridan in the 14th century. It is unclear if Buridan was influenced by Avicenna, or by Philoponus directly.Jack Zupko, "John Buridan" in Stanford Encyclopedia of Philosophy, 2014
(fn. 48)
"We do not know precisely where Buridan got the idea of impetus, but a less sophisticated notion of impressed forced can be found in Avicenna's doctrine of mayl (inclination). In this he was possibly influenced by Philoponus, who was developing the Stoic notion of hormé (impulse). For discussion, see Zupko (1997) ['What Is the Science of the Soul? A Case Study in the Evolution of Late Medieval Natural Philosophy,' Synthese, 110(2): 297–334]."
In optics, Ibn Sina was among those who argued that light had a speed, observing that "if the perception of light is due to the emission of some sort of particles by a luminous source, the speed of light must be finite."George Sarton, Introduction to the History of Science, Vol. 1, p. 710. He also provided an incorrect explanation of the rainbow phenomenon. Carl Benjamin Boyer described Avicenna's ("Ibn Sīnā") theory on the rainbow as follows:
In 1253, a Latin text entitled Speculum Tripartitum stated the following regarding Avicenna's theory on heat:
Psychology
Avicenna's legacy in classical psychology is primarily embodied in the Kitab al-nafs parts of his Kitab al-shifa (The Book of Healing) and Kitab al-najat (The Book of Deliverance). These were known in Latin under the title De Anima (treatises "on the soul"). Notably, Avicenna develops what is called the "flying man" argument in the Psychology of The Cure I.1.7 as defense of the argument that the soul is without quantitative extension, which has an affinity with Descartes's cogito argument (or what phenomenology designates as a form of an "epoche").Nader El-Bizri, The Phenomenological Quest between Avicenna and Heidegger (Binghamton, N.Y.: Global Publications SUNY, 2000), pp. 149–171.Nader El-Bizri, "Avicenna's De Anima between Aristotle and Husserl," in The Passions of the Soul in the Metamorphosis of Becoming, ed. Anna-Teresa Tymieniecka (Dordrecht: Kluwer Academic Publishers, 2003), pp. 67–89.
Avicenna's psychology requires that the connection between the body and soul be strong enough to ensure the soul's individuation, but weak enough to allow for its immortality. Avicenna grounds his psychology on physiology, which means his account of the soul is one that deals almost entirely with the natural science of the body and its abilities of perception. Thus, the philosopher's connection between the soul and body is explained almost entirely by his understanding of perception; in this way, bodily perception interrelates with the immaterial human intellect. In sense perception, the perceiver senses the form of the object: first, by perceiving features of the object through the external senses. This sensory information is supplied to the internal senses, which merge all the pieces into a whole, unified conscious experience. This process of perception and abstraction is the nexus of the soul and body, for the material body may only perceive material objects, while the immaterial soul may only receive the immaterial, universal forms. The way the soul and body interact in the final abstraction of the universal from the concrete particular is the key to their relationship and interaction, which takes place in the physical body.
The soul completes the action of intellection by accepting forms that have been abstracted from matter. This process requires a concrete particular (material) to be abstracted into the universal intelligible (immaterial). The material and immaterial interact through the Active Intellect, which is a "divine light" containing the intelligible forms. The Active Intellect reveals the universals concealed in material objects much like the sun makes color available to our eyes.
Other contributions
Astronomy and astrology
Avicenna wrote an attack on astrology titled Resāla fī ebṭāl aḥkām al-nojūm, in which he cited passages from the Quran to dispute the power of astrology to foretell the future.George Saliba (1994), A History of Arabic Astronomy: Planetary Theories During the Golden Age of Islam, p. 60, 67–69. New York University Press, ISBN 0-8147-8023-7. He believed that each planet had some influence on the earth, but argued against astrologers being able to determine the exact effects.
Avicenna's astronomical writings had some influence on later writers, although in general his work could be considered less developed than Alhazen or Al-Biruni. One important feature of his writing is that he considers mathematical astronomy as a separate discipline to astrology. He criticized Aristotle's view of the stars receiving their light from the Sun, stating that the stars are self-luminous, and believed that the planets are also self-luminous. He claimed to have observed Venus as a spot on the Sun. This is possible, as there was a transit on May 24, 1032, but Avicenna did not give the date of his observation, and modern scholars have questioned whether he could have observed the transit from his location at that time; he may have mistaken a sunspot for Venus. He used his transit observation to help establish that Venus was, at least sometimes, below the Sun in Ptolemaic cosmology, i.e. the sphere of Venus comes before the sphere of the Sun when moving out from the Earth in the prevailing geocentric model.
He also wrote the Summary of the Almagest, (based on Ptolemy's Almagest), with an appended treatise "to bring that which is stated in the Almagest and what is understood from Natural Science into conformity". For example, Avicenna considers the motion of the solar apogee, which Ptolemy had taken to be fixed.
Chemistry
Ibn Sīnā used distillation to produce essential oils such as rose essence, forming the foundation of what later became aromatherapy.Marlene Ericksen (2000). Healing with Aromatherapy, p. 9. McGraw-Hill Professional. ISBN 0-658-00382-8.
Unlike, for example, al-Razi, Ibn Sīnā explicitly disputed the theory of the transmutation of substances commonly believed by alchemists:
Four works on alchemy attributed to Avicenna were translated into Latin as:Georges C. Anawati (1996), "Arabic alchemy", in Roshdi Rashed, ed., Encyclopedia of the History of Arabic Science, Vol. 3, p. 853–885 [875]. Routledge, London and New York.
Liber Aboali Abincine de Anima in arte Alchemiae
Declaratio Lapis physici Avicennae filio sui Aboali
Avicennae de congelatione et conglutinatione lapidum
Avicennae ad Hasan Regem epistola de Re recta
Liber Aboali Abincine de Anima in arte Alchemiae was the most influential, having influenced later medieval chemists and alchemists such as Vincent of Beauvais. However, Anawati argues (following Ruska) that the de Anima is a fake by a Spanish author. Similarly, the Declaratio is believed not to be actually by Avicenna. The third work (The Book of Minerals) is agreed to be Avicenna's writing, adapted from the Kitab al-Shifa (Book of the Remedy).
Ibn Sina classified minerals into stones, fusible substances, sulfurs, and salts, building on the ideas of Aristotle and Jabir. The epistola de Re recta is somewhat less sceptical of alchemy; Anawati argues that it is by Avicenna, but written earlier in his career when he had not yet firmly decided that transmutation was impossible.
Poetry
Almost half of Ibn Sīnā's works are versified.E.G. Browne, Islamic Medicine (sometimes also printed under the title Arabian medicine), 2002, Goodword Pub., ISBN 81-87570-19-9, p61 His poems appear in both Arabic and Persian. As an example, Edward Granville Browne claims that the following Persian verses are incorrectly attributed to Omar Khayyám, and were originally written by Ibn Sīnā:E.G. Browne, Islamic Medicine (sometimes also printed under the title Arabian medicine), 2002, Goodword Pub., ISBN 81-87570-19-9, p 60–61)
Legacy
Middle Ages and Renaissance
thumb|Inside view of the Avicenna Mausoleum, designed by Hooshang Seyhoun in 1945–1950.
As early as the 13th century, when Dante Alighieri depicted him in his Divine Comedy in Limbo alongside virtuous non-Christian thinkers such as Virgil, Averroes, Homer, Horace, Ovid, Lucan, Socrates, Plato, and Saladin, Avicenna was recognized by both East and West as one of the great figures in intellectual history.
George Sarton, the author of the Introduction to the History of Science, described Ibn Sīnā as "one of the greatest thinkers and medical scholars in history"George Sarton, Introduction to the History of Science.(cf. Dr. A. Zahoor and Dr. Z. Haq (1997). Quotations From Famous Historians of Science, Cyberistan.) and called him "the most famous scientist of Islam and one of the most famous of all races, places, and times." He was one of the Islamic world's leading writers in the field of medicine.
Along with Rhazes, Abulcasis, Ibn al-Nafis, and al-Ibadi, Ibn Sīnā is considered an important compiler of early Muslim medicine. He is remembered in the Western history of medicine as a major historical figure who made important contributions to medicine and the European Renaissance. His medical texts were unusual in that where controversy existed between Galen and Aristotle's views on medical matters (such as anatomy), he preferred to side with Aristotle, where necessary updating Aristotle's position to take into account post-Aristotelian advances in anatomical knowledge. Aristotle's dominant intellectual influence among medieval European scholars meant that Avicenna's linking of Galen's medical writings with Aristotle's philosophical writings in the Canon of Medicine (along with its comprehensive and logical organisation of knowledge) significantly increased Avicenna's importance in medieval Europe in comparison to other Islamic writers on medicine. His influence following translation of the Canon was such that from the early fourteenth to the mid-sixteenth centuries he was ranked with Hippocrates and Galen as one of the acknowledged authorities, ("prince of physicians").
Modern reception
In modern Iran, he is considered a national icon, and is often regarded as one of the greatest Persians to have ever lived. A monument was erected outside the Bukhara museum. The Avicenna Mausoleum and Museum in Hamadan was built in 1952. Bu-Ali Sina University in Hamadan (Iran), Avicenna Research Institute in Tehran (Iran), the Ibn Sīnā Tajik State Medical University in Dushanbe, Ibn Sina Academy of Medieval Medicine and Sciences at Aligarh, India, Avicenna School in Karachi and Avicenna Medical College in Lahore, Pakistan, Ibne Sina Balkh Medical School in his native province of Balkh in Afghanistan, Ibni Sina Faculty of Medicine of Ankara University, Ankara, Turkey, and Ibn Sina Integrated School in Marawi City (Philippines) are all named in his honour. His portrait hangs in the Hall of the Avicenna Faculty of Medicine in the University of Paris. There is also a crater on the Moon named Avicenna and a plant genus Avicennia.
thumb|A monument to Avicenna in Qakh (city), Azerbaijan
In 1980, the Soviet Union, which then ruled his birthplace Bukhara, celebrated the thousandth anniversary of Avicenna's birth by circulating various commemorative stamps with artistic illustrations, and by erecting a bust of Avicenna based on anthropological research by Soviet scholars.Thought Experiments: Popular Thought Experiments in Philosophy, Physics, Ethics, Computer Science & Mathematics by Fredrick Kennard, p114
Near his birthplace in Qishlak Afshona, some north of Bukhara, a training college for medical staff has been named for him.
On the grounds is a museum dedicated to his life, times and work.
thumb|left|Image of Avicenna on the Tajikistani somoni
The Avicenna Prize for Ethics in Science is awarded every two years by UNESCO and rewards individuals and groups in the field of ethics in science. The prize was established in 2003 and named after Avicenna.
The aim of the award is to promote ethical reflection on issues raised by advances in science and technology, and to raise global awareness of the importance of ethics in science.
In March 2008, it was announced that Avicenna's name would be used for new Directories of education institutions for health care professionals, worldwide. The Avicenna Directories will list universities and schools where doctors, public health practitioners, pharmacists and others, are educated. The project team stated "Why Avicenna? Avicenna ... was ... noted for his synthesis of knowledge from both east and west. He has had a lasting influence on the development of medicine and health sciences. The use of Avicenna's name symbolises the worldwide partnership that is needed for the promotion of health services of high quality.""Educating health professionals: the Avicenna project" The Lancet, March 2008. Volume 371 pp 966–967.
thumb|The statue of Avicenna in United Nations Office in Vienna as a part of the "Persian Scholars Pavilion" donated by Iran
In June 2009, Iran donated a "Persian Scholars Pavilion" to the United Nations Office in Vienna, which is placed in the central Memorial Plaza of the Vienna International Center. The pavilion features the statues of four prominent Iranian figures.
thumb|left|Avicenna statue in Milad Tower
Highlighting the Iranian architectural features, the pavilion is adorned with Persian art forms and includes the statues of renowned Iranian scientists Avicenna, Al-Biruni, Zakariya Razi (Rhazes) and Omar Khayyam.
The 1982 Soviet film Youth of Genius, directed by Elyer Ishmuhamedov, recounts Avicenna's younger years. The film is set in Bukhara at the turn of the millennium."Youth of Genius" (USSR, Uzbekfilm and Tajikfilm, 1982): 1984 – State Prize of the USSR (Elyer Ishmuhamedov); 1983 – VKF (All-Union Film Festival) Grand Prize (Elyer Ishmuhamedov); 1983 – VKF (All-Union Film Festival) Award for Best Cinematography (Tatiana Loginov). See annotation on kino-teatr.ru.
In Louis L'Amour's 1985 historical novel The Walking Drum, Kerbouchard studies and discusses Avicenna's The Canon of Medicine.
In his book The Physician (1988) Noah Gordon tells the story of a young English medical apprentice who disguises himself as a Jew to travel from England to Persia and learn from Avicenna, the great master of his time. The novel was adapted into a feature film, The Physician, in 2013. Avicenna was played by Ben Kingsley.
Arabic works
The treatises of Ibn Sīnā influenced later Muslim thinkers in many areas including theology, philology, mathematics, astronomy, physics, and music. His works numbered almost 450 volumes on a wide range of subjects, of which around 240 have survived. In particular, 150 volumes of his surviving works concentrate on philosophy and 40 of them concentrate on medicine.
His most famous works are The Book of Healing, and The Canon of Medicine.
Ibn Sīnā wrote at least one treatise on alchemy, but several others have been falsely attributed to him. His Logic, Metaphysics, Physics, and De Caelo, are treatises giving a synoptic view of Aristotelian doctrine, though Metaphysics demonstrates a significant departure from the brand of Neoplatonism known as Aristotelianism in Ibn Sīnā's world;
Arabic philosophers have hinted at the idea that Ibn Sīnā was attempting to "re-Aristotelianise" Muslim philosophy in its entirety, unlike his predecessors, who accepted the conflation of Platonic, Aristotelian, Neo- and Middle-Platonic works transmitted into the Muslim world.
The Logic and Metaphysics have been extensively reprinted, the latter, e.g., at Venice in 1493, 1495, and 1546. Some of his shorter essays on medicine, logic, etc., take a poetical form (the poem on logic was published by Schmoelders in 1836).Thought Experiments: Popular Thought Experiments in Philosophy, Physics, Ethics, Computer Science & Mathematics by Fredrick Kennard, p115 Two encyclopaedic treatises, dealing with philosophy, are often mentioned. The larger, Al-Shifa' (Sanatio), exists nearly complete in manuscript in the Bodleian Library and elsewhere; part of it on the De Anima appeared at Pavia (1490) as the Liber Sextus Naturalium, and the long account of Ibn Sina's philosophy given by Muhammad al-Shahrastani seems to be mainly an analysis, and in many places a reproduction, of the Al-Shifa'. A shorter form of the work is known as the An-najat (Liberatio). The Latin editions of part of these works have been modified by the corrections which the monastic editors confess that they applied. There is also the hikmat-al-mashriqqiyya (in Latin, Philosophia Orientalis), mentioned by Roger Bacon, the majority of which has been lost, and which according to Averroes was pantheistic in tone.
List of works
This is the list of some of Avicenna's well-known works:Tasaneef lbn Sina by Hakim Syed Zillur Rahman, Tabeeb Haziq, Gujarat, Pakistan, 1986, p. 176–198
Sirat al-shaykh al-ra'is (The Life of Ibn Sina), ed. and trans. W.E. Gohlman, Albany, NY: State University of New York Press, 1974. (The only critical edition of Ibn Sina's autobiography, supplemented with material from a biography by his student Abu 'Ubayd al-Juzjani. A more recent translation of the Autobiography appears in D. Gutas, Avicenna and the Aristotelian Tradition: Introduction to Reading Avicenna's Philosophical Works, Leiden: Brill, 1988; second edition 2014.)
Al-isharat wa al-tanbihat (Remarks and Admonitions), ed. S. Dunya, Cairo, 1960; parts translated by S.C. Inati, Remarks and Admonitions, Part One: Logic, Toronto, Ont.: Pontifical Institute for Mediaeval Studies, 1984, and Ibn Sina and Mysticism, Remarks and Admonitions: Part 4, London: Kegan Paul International, 1996.
Al-Qanun fi'l-tibb (The Canon of Medicine), ed. I. a-Qashsh, Cairo, 1987. (Encyclopedia of medicine.) manuscript, Latin translation, Flores Avicenne, Michael de Capella, 1508, Modern text. Ahmed Shawkat Al-Shatti, Jibran Jabbur.
Risalah fi sirr al-qadar (Essay on the Secret of Destiny), trans. G. Hourani in Reason and Tradition in Islamic Ethics, Cambridge: Cambridge University Press, 1985.
Danishnama-i 'ala'i (The Book of Scientific Knowledge), ed. and trans. P. Morewedge, The Metaphysics of Avicenna, London: Routledge and Kegan Paul, 1973.
Kitab al-Shifa' (The Book of Healing). (Ibn Sina's major work on philosophy. He probably began to compose al-Shifa' in 1014, and completed it in 1020.) Critical editions of the Arabic text have been published in Cairo, 1952–83, originally under the supervision of I. Madkour.
Kitab al-Najat (The Book of Salvation), trans. F. Rahman, Avicenna's Psychology: An English Translation of Kitab al-Najat, Book II, Chapter VI with Historical-philosophical Notes and Textual Improvements on the Cairo Edition, Oxford: Oxford University Press, 1952. (The psychology of al-Shifa'.)
Hayy ibn Yaqdhan, a Persian myth. A novel called Hayy ibn Yaqdhan, based on Avicenna's story, was later written by Ibn Tufail (Abubacer) in the 12th century and translated into Latin and English as Philosophus Autodidactus in the 17th and 18th centuries respectively. In the 13th century, Ibn al-Nafis wrote his own novel Fadil ibn Natiq, known as Theologus Autodidactus in the West, as a critical response to Hayy ibn Yaqdhan.Nahyan A. G. Fancy (2006), "Pulmonary Transit and Bodily Resurrection: The Interaction of Medicine, Philosophy and Religion in the Works of Ibn al-Nafīs (d. 1288)", pp. 95–102, Electronic Theses and Dissertations, University of Notre Dame.
Persian works
Avicenna's most important Persian work is the Danishnama-i 'Alai ("the Book of Knowledge for [Prince] 'Ala ad-Daulah"). Avicenna created new scientific vocabulary that had not previously existed in Persian. The Dāneš-nāma covers such topics as logic, metaphysics, music theory and other sciences of his time. It was translated into English by Parwiz Morewedge in 1977.Avicenna, Danish Nama-i 'Alai. trans. Parviz Morewedge as The Metaphysics of Avicenna (New York: Columbia University Press), 1977. The book is also important in respect to Persian scientific works. Andar Danesh-e-Rag ("On the science of the pulse") contains nine chapters on the science of the pulse and is a condensed synopsis.
Persian poetry from Ibn Sina is recorded in various manuscripts and later anthologies such as Nozhat al-Majales.
See also
Abu al-Qasim al-Zahrawi
Al-Qumri
Abdol Hamid Khosro Shahi
Avicennia, a genus of mangrove named after Ibn Sīnā
Avicenna Research Institute, a biotechnology research institute named after Ibn Sīnā
Avicenna Prize
Ibn Sina Peak – named after the Scientist
Islamic scholars
Mumijo
Philosophy
Eastern philosophy
Iranian philosophy
Islamic philosophy
Contemporary Islamic philosophy
Science in medieval Islam
List of Muslim scientists
Sufi philosophy
Science and technology in Iran
Ancient Iranian Medicine
List of Iranian scientists and scholars
References
Sources
Further reading
Encyclopedic articles
(PDF version)
Avicenna entry by Sajjad H. Rizvi in the Internet Encyclopedia of Philosophy
Primary literature
For an old list of other extant works, see C. Brockelmann's Geschichte der arabischen Litteratur (Weimar, 1898), vol. i, pp. 452–458. (XV. W.; G. W. T.)
For a current list of his works see A. Bertolacci (2006) and D. Gutas (2014) in the section "Philosophy".
Avicenne: Réfutation de l'astrologie. Edition et traduction du texte arabe, introduction, notes et lexique par Yahya Michot. Préface d'Elizabeth Teissier (Beirut-Paris: Albouraq, 2006) ISBN 2-84161-304-6.
William E. Gohlman (ed.), The Life of Ibn Sina. A Critical Edition and Annotated Translation, Albany: State University of New York Press, 1974.
For Ibn Sina's life, see Ibn Khallikan's Biographical Dictionary, translated by de Slane (1842); F. Wüstenfeld's Geschichte der arabischen Aerzte und Naturforscher (Göttingen, 1840).
Madelung, Wilferd and Toby Mayer (ed. and tr.), Struggling with the Philosopher: A Refutation of Avicenna's Metaphysics. A New Arabic Edition and English Translation of Shahrastani's Kitab al-Musara'a.
Secondary literature
This is, on the whole, an informed and good account of the life and accomplishments of one of the greatest influences on the development of thought both Eastern and Western. ... It is not as philosophically thorough as the works of D. Saliba, A. M. Goichon, or L. Gardet, but it is probably the best essay in English on this important thinker of the Middle Ages. (Julius R. Weinberg, The Philosophical Review, Vol. 69, No. 2, Apr. 1960, pp. 255–259)
This is a distinguished work which stands out from, and above, many of the books and articles which have been written in this century on Avicenna (Ibn Sīnā) (A.D. 980–1037). It has two main features on which its distinction as a major contribution to Avicennan studies may be said to rest: the first is its clarity and readability; the second is the comparative approach adopted by the author. ... (Ian Richard Netton, Journal of the Royal Asiatic Society, Third Series, Vol. 4, No. 2, July 1994, pp. 263–264)
Y. T. Langermann (ed.), Avicenna and his Legacy. A Golden Age of Science and Philosophy, Brepols Publishers, 2010, ISBN 978-2-503-52753-6
For a new understanding of his early career, based on a newly discovered text, see also: Michot, Yahya, Ibn Sînâ: Lettre au vizir Abû Sa'd. Editio princeps d'après le manuscrit de Bursa, traduction de l'arabe, introduction, notes et lexique (Beirut-Paris: Albouraq, 2000) ISBN 2-84161-150-7.
This German publication is both one of the most comprehensive general introductions to the life and works of the philosopher and physician Avicenna (Ibn Sīnā, d. 1037) and an extensive and careful survey of his contribution to the history of science. Its author is a renowned expert in Greek and Arabic medicine who has paid considerable attention to Avicenna in his recent studies. ... (Amos Bertolacci, Isis, Vol. 96, No. 4, December 2005, p. 649)
Shaikh al Rais Ibn Sina (Special number) 1958–59, Ed. Hakim Syed Zillur Rahman, Tibbia College Magazine, Aligarh Muslim University, Aligarh, India.
Medicine
Browne, Edward G.. Islamic Medicine. Fitzpatrick Lectures Delivered at the Royal College of Physicians in 1919–1920, reprint: New Delhi: Goodword Books, 2001. ISBN 81-87570-19-9
Pormann, Peter & Savage-Smith, Emilie. Medieval Islamic Medicine, Washington: Georgetown University Press, 2007.
Prioreschi, Plinio. Byzantine and Islamic Medicine, A History of Medicine, Vol. 4, Omaha: Horatius Press, 2001.
Philosophy
Amos Bertolacci, The Reception of Aristotle's Metaphysics in Avicenna's Kitab al-Sifa'. A Milestone of Western Metaphysical Thought, Leiden: Brill 2006, (Appendix C contains an Overview of the Main Works by Avicenna on Metaphysics in Chronological Order).
Dimitri Gutas, Avicenna and the Aristotelian Tradition: Introduction to Reading Avicenna's Philosophical Works, Leiden, Brill 2014, second revised and expanded edition (first edition: 1988), including an inventory of Avicenna' Authentic Works.
Jon Mc Ginnis and David C. Reisman (eds.) Interpreting Avicenna: Science and Philosophy in Medieval Islam: Proceedings of the Second Conference of the Avicenna Study Group, Leiden: Brill, 2004.
Michot, Jean R., La destinée de l'homme selon Avicenne, Louvain: Aedibus Peeters, 1986, ISBN 978-90-6831-071-9.
Nader El-Bizri, The Phenomenological Quest between Avicenna and Heidegger, Binghamton, N.Y.: Global Publications SUNY, 2000 (reprinted by SUNY Press in 2014 with a new Preface).
Nader El-Bizri, "Avicenna and Essentialism," Review of Metaphysics, Vol. 54 (June 2001), pp. 753–778.
Nader El-Bizri, "Avicenna's De Anima between Aristotle and Husserl," in The Passions of the Soul in the Metamorphosis of Becoming, ed. Anna-Teresa Tymieniecka, Dordrecht: Kluwer, 2003, pp. 67–89.
Nader El-Bizri, "Being and Necessity: A Phenomenological Investigation of Avicenna's Metaphysics and Cosmology," in Islamic Philosophy and Occidental Phenomenology on the Perennial Issue of Microcosm and Macrocosm, ed. Anna-Teresa Tymieniecka, Dordrecht: Kluwer, 2006, pp. 243–261.
Nader El-Bizri, 'Ibn Sīnā's Ontology and the Question of Being', Ishrāq: Islamic Philosophy Yearbook 2 (2011), 222–237
Nader El-Bizri, 'Philosophising at the Margins of 'Sh'i Studies': Reflections on Ibn Sīnā's Ontology', in The Study of Sh'i Islam. History, Theology and Law, eds. F. Daftary and G. Miskinzoda (London: I. B. Tauris, 2014), pp. 585–597.
Reisman, David C. (ed.), Before and After Avicenna: Proceedings of the First Conference of the Avicenna Study Group'', Leiden: Brill, 2003.
External links
Avicenna (Ibn-Sina) on the Subject and the Object of Metaphysics with a list of translations of the logical and philosophical works and an annotated bibliography
Gothic architecture
thumb|300px|Façade of Reims Cathedral, France
thumb|300px|The interior of the western end of Reims Cathedral
thumb|300px|The choir of Reims Cathedral
thumb|300px|Overview of Reims Cathedral from north-east
Gothic architecture is a style of architecture that flourished in Europe during the high and late medieval period. It evolved from Romanesque architecture and was succeeded by Renaissance architecture. Originating in 12th-century France and lasting into the 16th century, Gothic architecture was known during the period as Opus Francigenum ("French work"), with the term Gothic first appearing during the later part of the Renaissance. Its characteristics include the pointed arch, the ribbed vault (which evolved from the groin vaulting of Romanesque architecture) and the flying buttress. Gothic architecture is most familiar as the architecture of many of the great cathedrals, abbeys and churches of Europe. It is also the architecture of many castles, palaces, town halls, guild halls, universities and, to a less prominent extent, private dwellings.
It is in the great churches and cathedrals and in a number of civic buildings that the Gothic style was expressed most powerfully, its characteristics lending themselves to appeals to the emotions, whether springing from faith or from civic pride. A great number of ecclesiastical buildings remain from this period, of which even the smallest are often structures of architectural distinction while many of the larger churches are considered priceless works of art and are listed with UNESCO as World Heritage Sites. For this reason a study of Gothic architecture is largely a study of cathedrals and churches.
A series of Gothic revivals began in mid-18th-century England, spread through 19th-century Europe and continued, largely for ecclesiastical and university structures, into the 20th century.
Terminology
thumb|250px|left|South side of Chartres Cathedral.
The term "Gothic architecture" originated as a pejorative description. Giorgio Vasari used the term "barbarous German style" in his Lives of the Artists to describe what is now considered the Gothic style,Vasari, G. The Lives of the Artists. Translated with an introduction and notes by J.C. and P. Bondanella. Oxford: Oxford University Press (Oxford World’s Classics), 1991, pp. 117 & 527. ISBN 9780199537198 and in the introduction to the Lives he attributes various architectural features to "the Goths" whom he holds responsible for destroying the ancient buildings after they conquered Rome, and erecting new ones in this style.Vasari, Giorgio. (1907) Vasari on technique: being the introduction to the three arts of design, architecture, sculpture and painting, prefixed to the Lives of the most excellent painters, sculptors and architects. G. Baldwin Brown Ed. Louisa S. Maclehose Trans. London: Dent, pp. b & 83. At the time in which Vasari was writing, Italy had experienced a century of building in the Classical architectural vocabulary revived in the Renaissance and seen as evidence of a new Golden Age of learning and refinement.
The Renaissance had then overtaken Europe, overturning a system of culture that, prior to the advent of printing, was almost entirely focused on the Church and was perceived, in retrospect, as a period of ignorance and superstition. Hence, François Rabelais, also of the 16th century, imagines an inscription over the door of his utopian Abbey of Thélème, "Here enter no hypocrites, bigots..." slipping in a slighting reference to "Gotz" and "Ostrogotz.""Gotz" is rendered as "Huns" in Thomas Urquhart's English translation.
In English 17th-century usage, "Goth" was an equivalent of "vandal", a savage despoiler with a Germanic heritage, and so came to be applied to the architectural styles of northern Europe from before the revival of classical types of architecture.
According to a 19th-century correspondent in the London Journal Notes and Queries:
There can be no doubt that the term 'Gothic' as applied to pointed styles of ecclesiastical architecture was used at first contemptuously, and in derision, by those who were ambitious to imitate and revive the Grecian orders of architecture, after the revival of classical literature. Authorities such as Christopher Wren lent their aid in deprecating the old medieval style, which they termed Gothic, as synonymous with everything that was barbarous and rude.Notes and Queries, No. 9. 29 December 1849Christopher Wren, 17th-century architect of St. Paul's Cathedral.
On 21 July 1710, the Académie Royale d'Architecture met in Paris, and among the subjects they discussed, the assembled company noted the new fashions of bowed and cusped arches on chimneypieces being employed "to finish the top of their openings. The Company disapproved of several of these new manners, which are defective and which belong for the most part to the Gothic.""pour terminer le haut de leurs ouvertures. La Compagnie a désapprové plusieurs de ces nouvelles manières, qui sont défectueuses et qui tiennent la plupart du gothique." Quoted in Fiske Kimball, The Creation of the Rococo, 1943, p 66.
Definition and scope
Gothic architecture is the architecture of the late medieval period, characterised by use of the pointed arch. Other features common to Gothic architecture are the rib vault, buttresses, including flying buttresses; large windows which are often grouped, or have tracery; rose windows, towers, spires and pinnacles; and ornate façades.
As an architectural style, Gothic developed primarily in ecclesiastical architecture, and its principles and characteristic forms were applied to other types of buildings. Buildings of every type were constructed in the Gothic style, with evidence remaining of simple domestic buildings, elegant town houses, grand palaces, commercial premises, civic buildings, castles, city walls, bridges, village churches, abbey churches, abbey complexes and large cathedrals.
The greatest number of surviving Gothic buildings are churches. These range from tiny chapels to large cathedrals, and although many have been extended and altered in different styles, a large number remain either substantially intact or sympathetically restored, demonstrating the form, character and decoration of Gothic architecture. The Gothic style is most particularly associated with the great cathedrals of Northern France, the Low Countries, England and Spain, with other fine examples occurring across Europe.
Influences
Political
At the end of the 12th century, Europe was divided into a multitude of city states and kingdoms. The area encompassing modern Germany, southern Denmark, the Netherlands, Belgium, Luxembourg, Switzerland, Austria, Slovakia, the Czech Republic and much of northern Italy (excluding Venice and the Papal States) was nominally part of the Holy Roman Empire, but local rulers exercised considerable autonomy. France, Denmark, Poland, Hungary, Portugal, Scotland, Castile, Aragon, Navarre, Sicily and Cyprus were independent kingdoms, as was the Angevin Empire, whose Plantagenet kings ruled England and large domains in what was to become modern France."L'art Gothique", section: "L'architecture Gothique en Angleterre" by Ute Engel: L'Angleterre fut l'une des premieres régions à adopter, dans la deuxième moitié du XIIeme siècle, la nouvelle architecture gothique née en France. Les relations historiques entre les deux pays jouèrent un rôle prépondérant: en 1154, Henri II (1154–1189), de la dynastie Française des Plantagenêt, accéda au thrône d'Angleterre." (England was one of the first regions to adopt, during the second half of the 12th century, the new Gothic architecture born in France. Historic relationships between the two countries played a determining role: in 1154, Henry II (1154–1189), of the French Plantagenet dynasty, acceded to the throne of England). Norway came under the influence of England, while the other Scandinavian countries and Poland were influenced by trading contacts with the Hanseatic League. Angevin kings brought the Gothic tradition from France to Southern Italy, while Lusignan kings introduced French Gothic architecture to Cyprus.
Throughout Europe at this time there was a rapid growth in trade and an associated growth in towns.Banister Fletcher, A History of Architecture on the Comparative Method. Germany and the Lowlands had large flourishing towns that grew in comparative peace, in trade and competition with each other, or united for mutual weal, as in the Hanseatic League. Civic building was of great importance to these towns as a sign of wealth and pride. England and France remained largely feudal and produced grand domestic architecture for their kings, dukes and bishops, rather than grand town halls for their burghers.
Religious
The Catholic Church prevailed across Europe at this time, influencing not only faith but also wealth and power. Bishops were appointed by the feudal lords (kings, dukes and other landowners) and they often ruled as virtual princes over large estates. The early Medieval period had seen a rapid growth in monasticism, with several different orders being prevalent and spreading their influence widely. Foremost were the Benedictines, whose great abbey churches vastly outnumbered any others in France and England. A part of their influence was that towns developed around them and they became centers of culture, learning and commerce. The Cluniac and Cistercian Orders were prevalent in France, the great monastery at Cluny having established a formula for a well-planned monastic site which was then to influence all subsequent monastic building for many centuries.
In the 13th century St. Francis of Assisi established the Franciscans, or so-called "Grey Friars", a mendicant order. The Dominicans, another mendicant order founded during the same period but by St. Dominic in Toulouse and Bologna, were particularly influential in the building of Italy's Gothic churches.John Harvey, The Gothic World
Geographic
From the 10th to the 13th century, Romanesque architecture had become a pan-European style and manner of construction, affecting buildings in countries as far apart as Ireland, Croatia, Sweden and Sicily. The same wide geographic area was then affected by the development of Gothic architecture, but the acceptance of the Gothic style and methods of construction differed from place to place, as did the expressions of Gothic taste. The proximity of some regions meant that modern country borders did not define divisions of style. On the other hand, some regions such as England and Spain produced defining characteristics rarely seen elsewhere, except where they have been carried by itinerant craftsmen, or the transfer of bishops. Regional differences that are apparent in the great abbey churches and cathedrals of the Romanesque period often become even more apparent in the Gothic.
The local availability of materials affected both construction and style. In France, limestone was readily available in several grades, the very fine white limestone of Caen being favoured for sculptural decoration. England had coarse limestone and red sandstone as well as dark green Purbeck marble which was often used for architectural features.
In northern Germany, the Netherlands, northern Poland, Denmark, and the Baltic countries local building stone was unavailable but there was a strong tradition of building in brick. The resultant style, Brick Gothic, is called "Backsteingotik" in Germany and Scandinavia and is associated with the Hanseatic League. In Italy, stone was used for fortifications, but brick was preferred for other buildings. Because of the extensive and varied deposits of marble, many buildings were faced in marble, or were left with an undecorated façade so that this might be achieved at a later date.
The availability of timber also influenced the style of architecture, with timber buildings prevailing in Scandinavia. Availability of timber affected methods of roof construction across Europe. It is thought that the magnificent hammer-beam roofs of England were devised as a direct response to the lack of long straight seasoned timber by the end of the Medieval period, when forests had been decimated not only for the construction of vast roofs but also for ship building.Alec Clifton-Taylor, The Cathedrals of England
Architectural background
Gothic architecture grew out of the previous architectural genre, Romanesque. For the most part, there was not a clean break, as there was to be later in Renaissance Florence with the revival of the Classical style by Filippo Brunelleschi in the early 15th century, and the sudden abandonment in Renaissance Italy of both the style and the structural characteristics of Gothic.
Romanesque tradition
By the 12th century, Romanesque architecture (termed Norman architecture in England because of its association with the Norman invasion), was established throughout Europe and provided the basic architectural forms and units that were to remain in evolution throughout the Medieval period. The important categories of building: the cathedral church, the parish church, the monastery, the castle, the palace, the great hall, the gatehouse, the civic building, had been established in the Romanesque period.
Many architectural features that are associated with Gothic architecture had been developed and used by the architects of Romanesque buildings. These include ribbed vaults, buttresses, clustered columns, ambulatories, wheel windows, spires, stained glass windows, and richly carved door tympana. These were already features of ecclesiastical architecture before the development of the Gothic style, and all were to develop in increasingly elaborate ways.Nikolaus Pevsner, An Outline of European Architecture.
It was principally the widespread introduction of a single feature, the pointed arch, which was to bring about the change that separates Gothic from Romanesque. The technological change permitted a stylistic change which broke the tradition of massive masonry and solid walls penetrated by small openings, replacing it with a style where light appears to triumph over substance. With its use came the development of many other architectural devices, previously put to the test in scattered buildings and then called into service to meet the structural, aesthetic and ideological needs of the new style. These include the flying buttresses, pinnacles and traceried windows which typify Gothic ecclesiastical architecture. But while the pointed arch is so strongly associated with the Gothic style, it was first used in Western architecture in buildings that were in other ways clearly Romanesque, notably Durham Cathedral in the north of England, Monreale and Cefalù Cathedrals in Sicily, and Autun Cathedral in France.
Possible Oriental influence
The pointed arch, one of the defining attributes of Gothic, was earlier incorporated into Islamic architecture following the Islamic conquests of Roman Syria and the Sassanid Empire in the seventh century. The pointed arch and its precursors had been employed in Late Roman and Sassanian architecture: within the Roman context, it is evidenced in early church building in Syria and in occasional secular structures, such as the Roman Karamagara Bridge; in Sassanid architecture, it appears in the parabolic and pointed arches employed in palace and sacred construction.Petersen, Andrew (2002-03-11). Dictionary of Islamic Architecture at pp. 295-296. Routledge. ISBN 978-0-203-20387-3. Retrieved 2013-03-16.
Increasing military and cultural contacts with the Muslim world, including the Norman conquest of Islamic Sicily in 1090, the Crusades, beginning 1096, and the Islamic presence in Spain, may have influenced Medieval Europe's adoption of the pointed arch, although this hypothesis remains controversial.Scott, Robert A.: The Gothic enterprise: a guide to understanding the Medieval cathedral, Berkeley 2003, University of California Press, p. 113 ISBN 0-520-23177-5Cf. Bony (1983), especially p.17 Certainly, in those parts of the Western Mediterranean subject to Islamic control or influence, rich regional variants arose, fusing Romanesque and later Gothic traditions with Islamic decorative forms, as seen, for example, in Monreale and Cefalù Cathedrals, the Alcázar of Seville, and Teruel Cathedral.“Le genie architectural des Normands a su s’adapter aux lieux en prenant ce qu’il y a de meilleur dans le savoir-faire des batisseurs arabes et byzantins” (The architectural genius of the Normans was able to adapt to the place by taking the best of the know-how of the Arab and Byzantine builders), Les Normands en Sicile, pp. 14, 53-57.Harvey, L. P. (1992). "Islamic Spain, 1250 to 1500". Chicago : University of Chicago Press. ISBN 0-226-31960-1; Boswell, John (1978). Royal Treasure: Muslim Communities Under the Crown of Aragon in the Fourteenth Century. Yale University Press. ISBN 0-300-02090-2.
Architectural development
Transition from Romanesque to Gothic architecture
The characteristic forms that were to define Gothic architecture grew out of Romanesque architecture and developed at several different geographic locations, as the result of different influences and structural requirements.
While barrel vaults and groin vaults are typical of Romanesque architecture, ribbed vaults were used in the naves of two Romanesque churches in Caen, the Abbey of Saint-Étienne and the Abbaye aux Dames, in 1120. Another early example is the nave and apse area of the Cathedral of Cefalù in 1131. The ribbed vault over the north transept at Durham Cathedral in England, built from 1128 to 1133, is probably earlier still and marked the first use of pointed arches in a high vault.
Other characteristics of early Gothic architecture, such as vertical shafts, clustered columns, compound piers, plate tracery and groups of narrow openings had evolved during the Romanesque period. The west front of Ely Cathedral exemplifies this development. Internally the three tiered arrangement of arcade, gallery and clerestory was established. Interiors had become lighter with the insertion of more and larger windows.
The Basilica of Saint Denis is generally cited as the first truly Gothic building; however, the distinction is best reserved for the choir, of which the ambulatory remains intact. Noyon Cathedral, also in France, saw the earliest completion of a rebuilding of an entire cathedral in the new style from 1150 to 1231. While using all those features that came to be known as Gothic, including pointed arches, flying buttresses and ribbed vaulting, the builders continued to employ many of the features and much of the character of Romanesque architecture, including the round-headed arch throughout the building, varying the shape to pointed where it was functionally practical to do so.
At the Abbey Saint-Denis, Noyon Cathedral, Notre Dame de Paris and at the eastern end of Canterbury Cathedral in England, simple cylindrical columns predominate over the Gothic forms of clustered columns and shafted piers. Wells Cathedral in England, commenced at the eastern end in 1175, was the first building in which the designer broke free from Romanesque forms. The architect entirely dispensed with the round arch in favour of the pointed arch and with cylindrical columns in favour of piers composed of clusters of shafts which lead into the mouldings of the arches. The transepts and nave were continued by Adam Locke in the same style and completed in about 1230. The character of the building is entirely Gothic. Wells Cathedral is thus considered the first truly Gothic cathedral.Cannon, J. 2007. Cathedral: The Great English Cathedrals and the World that Made Them
Abbot Suger
The eastern end of the Basilica Church of Saint-Denis, built by Abbot Suger and completed in 1144, is often cited as the first truly Gothic building, as it draws together many of the architectural forms which had evolved from Romanesque and typify the Gothic style.
Suger, friend and confidant of the French Kings, Louis VI and Louis VII, decided, in about 1137, to rebuild the great Church of Saint-Denis, attached to an abbey which was also a royal residence. He began with the West Front, reconstructing the original Carolingian façade with its single door. He designed the façade of Saint-Denis to be an echo of the Roman Arch of Constantine with its three-part division and three large portals to ease the problem of congestion. The rose window above the West portal is the earliest-known example in France. The façade combines both round arches and pointed arches of the Gothic style.
At the completion of the west front in 1140, Abbot Suger moved on to the reconstruction of the eastern end, leaving the Carolingian nave in use. He designed a choir that would be suffused with light.Erwin Panofsky argued that Suger was inspired to create a physical representation of the Heavenly Jerusalem, although the extent to which Suger had any aims higher than aesthetic pleasure has been called into doubt by more recent art historians on the basis of Suger's own writings. To achieve his aims, his masons drew on the several new features which evolved or had been introduced to Romanesque architecture, the pointed arch, the ribbed vault, the ambulatory with radiating chapels, the clustered columns supporting ribs springing in different directions and the flying buttresses which enabled the insertion of large clerestory windows.
The new structure was finished and dedicated on 11 June 1144, in the presence of the King. The choir and west front of the Abbey of Saint-Denis both became the prototypes for further building in the royal domain of northern France and in the Duchy of Normandy. Through the rule of the Angevin dynasty, the new style was introduced to England and spread throughout France, the Low Countries, Germany, Spain, northern Italy and Sicily.
Characteristics of Gothic cathedrals and great churches
While many secular buildings exist from the Late Middle Ages, it is in the buildings of cathedrals and great churches that Gothic architecture displays its pertinent structures and characteristics to the fullest advantage. A Gothic cathedral or abbey was, prior to the 20th century, generally the landmark building in its town, rising high above all the domestic structures and often surmounted by one or more towers and pinnacles and perhaps tall spires.Wim Swaan, The Gothic Cathedral These cathedrals were the skyscrapers of that day and would have been the largest buildings by far that Europeans would ever have seen. It is in the architecture of these Gothic churches that a unique combination of existing technologies established the emergence of a new building style. Those technologies were the ogival or pointed arch, the ribbed vault, and the buttress.
The Gothic style, when applied to an ecclesiastical building, emphasizes verticality and light. This appearance was achieved by the development of certain architectural features, which together provided an engineering solution. The structural parts of the building ceased to be its solid walls, and became a stone skeleton comprising clustered columns, pointed ribbed vaults and flying buttresses.
Plan
Most large Gothic churches and many smaller parish churches are of the Latin cross (or "cruciform") plan, with a long nave making the body of the church, a transverse arm called the transept and, beyond it, an extension which may be called the choir, chancel or presbytery. There are several regional variations on this plan.
The nave is generally flanked on either side by aisles, usually single, but sometimes double. The nave is generally considerably taller than the aisles, having clerestory windows which light the central space. Gothic churches of the Germanic tradition, like St. Stephen of Vienna, often have nave and aisles of similar height and are called Hallenkirche. In the South of France there is often a single wide nave and no aisles, as at Sainte-Marie in Saint-Bertrand-de-Comminges.
In some churches with double aisles, like Notre Dame, Paris, the transept does not project beyond the aisles. In English cathedrals transepts tend to project boldly and there may be two of them, as at Salisbury Cathedral, though this is not the case with lesser churches.
The eastern arm shows considerable diversity. In England it is generally long and may have two distinct sections, both choir and presbytery. It is often square ended or has a projecting Lady Chapel, dedicated to the Virgin Mary. In France the eastern end is often polygonal and surrounded by a walkway called an ambulatory and sometimes a ring of chapels called a "chevet". While German churches are often similar to those of France, in Italy, the eastern projection beyond the transept is usually just a shallow apsidal chapel containing the sanctuary, as at Florence Cathedral.
Structure: the pointed arch
History
thumb|Norman blind-arcading at Canterbury Cathedral.
One of the defining characteristics of Gothic architecture is the pointed or ogival arch. Arches of a similar type were used in the Near East in pre-Islamic as well as Islamic architecture before they were structurally employed in medieval architecture. It is thought by some architectural historians that this was the inspiration for the use of the pointed arch in Sicily and France, in otherwise Romanesque buildings, as at Autun Cathedral.
Contrary to the diffusionist theory, it appears that there was simultaneously a structural evolution towards the pointed arch, for the purpose of vaulting spaces of irregular plan, or to bring transverse vaults to the same height as diagonal vaults. This latter occurs at Durham Cathedral in the nave aisles in 1093. Pointed arches also occur extensively in Romanesque decorative blind arcading, where semi-circular arches overlap each other in a simple decorative pattern, and the points are accidental to the design.
Functions
The Gothic vault, unlike the semi-circular vault of Roman and Romanesque buildings, can be used to roof rectangular and irregularly shaped plans such as trapezoids. The other structural advantage is that the pointed arch channels the weight onto the bearing piers or columns at a steep angle. This enabled architects to raise vaults much higher than was possible in Romanesque architecture. While, structurally, use of the pointed arch gave a greater flexibility to architectural form, it also gave Gothic architecture a very different and more vertical visual character than Romanesque.
In Gothic architecture the pointed arch is used in nearly all locations where a vaulted shape is called for, both structural and decorative. Gothic openings such as doorways, windows, arcades and galleries have pointed arches. Gothic vaulting above spaces both large and small is usually supported by richly moulded ribs.
Rows of pointed arches upon delicate shafts form a typical wall decoration known as blind arcading. Niches with pointed arches and containing statuary are a major external feature. The pointed arch lent itself to elaborate intersecting shapes which developed within window spaces into complex Gothic tracery forming the structural support of the large windows that are characteristic of the style.
thumb|left|Salisbury Cathedral has the tallest spire in England.
Height
A characteristic of Gothic church architecture is its height, both absolute and in proportion to its width, the verticality suggesting an aspiration to Heaven. A section of the main body of a Gothic church usually shows the nave as considerably taller than it is wide. In England the proportion is sometimes greater than 2:1, while the greatest proportional difference achieved is at Cologne Cathedral with a ratio of 3.6:1. The highest internal vault is at Beauvais Cathedral.
Externally, towers and spires are characteristic of Gothic churches both great and small, the number and positioning being one of the greatest variables in Gothic architecture. In Italy, the tower, if present, is almost always detached from the building, as at Florence Cathedral, and is often from an earlier structure. In France and Spain, two towers on the front are the norm. In England, Germany and Scandinavia this is often the arrangement, but an English cathedral may also be surmounted by an enormous tower at the crossing. Smaller churches usually have just one tower, but this may also be the case at larger buildings, such as Salisbury Cathedral or Ulm Minster, which has the tallest spire in the world (the open-work spire was completed in 1890 to the original design), slightly exceeding that of Lincoln Cathedral, the tallest spire actually completed during the medieval period.
thumb|The Gothic east end of Cologne Cathedral represents the extreme of verticality. (choir dating to 13th century, nave dating to the 19th century).
Vertical emphasis
The pointed arch lends itself to a suggestion of height. This appearance is characteristically further enhanced by both the architectural features and the decoration of the building.
On the exterior, the verticality is emphasised in a major way by the towers and spires and in a lesser way by strongly projecting vertical buttresses, by narrow half-columns called attached shafts which often pass through several storeys of the building, by long narrow windows, vertical mouldings around doors and figurative sculpture which emphasises the vertical and is often attenuated. The roofline, gable ends, buttresses and other parts of the building are often terminated by small pinnacles, Milan Cathedral being an extreme example in the use of this form of decoration.
On the interior of the building attached shafts often sweep unbroken from floor to ceiling and meet the ribs of the vault, like a tall tree spreading into branches. The verticals are generally repeated in the treatment of the windows and wall surfaces. In many Gothic churches, particularly in France, and in the Perpendicular period of English Gothic architecture, the treatment of vertical elements in gallery and window tracery creates a strongly unifying feature that counteracts the horizontal divisions of the interior structure.
thumb|left|250px|Sainte Chapelle.
Light
Expansive interior light has been a feature of Gothic cathedrals since the first structure was opened. The metaphysics of light in the Middle Ages led to clerical belief in its divinity and the importance of its display in holy settings. Much of this belief was based on the writings of Pseudo-Dionysius, a sixth-century mystic whose book, The Celestial Hierarchy, was popular among monks in France. Pseudo-Dionysius held that all light, even light reflected from metals or streamed through windows, was divine. To promote such faith, the abbot in charge of the Saint-Denis church on the north edge of Paris, the Abbot Suger, encouraged architects remodeling the building to make the interior as bright as possible.
Ever since the remodeled Basilica of Saint-Denis opened in 1144, Gothic architecture has featured expansive windows, such as at Sainte Chapelle, York Minster and Gloucester Cathedral. The increase in window size between the Romanesque and Gothic periods is related to the use of the ribbed vault, and in particular, the pointed ribbed vault which channeled the weight to a supporting shaft with less outward thrust than a semicircular vault. Walls did not need to be so weighty.
A further development was the flying buttress which arched externally from the springing of the vault across the roof of the aisle to a large buttress pier projecting well beyond the line of the external wall. These piers were often surmounted by a pinnacle or statue, further adding to the downward weight, and counteracting the outward thrust of the vault and buttress arch as well as stress from wind loading.
The internal columns of the arcade with their attached shafts, the ribs of the vault and the flying buttresses, with their associated vertical buttresses jutting at right-angles to the building, created a stone skeleton. Between these parts, the walls and the infill of the vaults could be of lighter construction. Between the narrow buttresses, the walls could be opened up into large windows.
Through the Gothic period, thanks to the versatility of the pointed arch, the structure of Gothic windows developed from simple openings to immensely rich and decorative sculptural designs. The windows were very often filled with stained glass which added a dimension of colour to the light within the building, as well as providing a medium for figurative and narrative art.
thumb|left|Notre Dame de Paris
Majesty
The façade of a large church or cathedral, often referred to as the West Front, is generally designed to create a powerful impression on the approaching worshipper, demonstrating both the might of God and the might of the institution that it represents. One of the best known and most typical of such façades is that of Notre Dame de Paris.
Central to the façade is the main portal, often flanked by additional doors. In the arch of the door, the tympanum, is often a significant piece of sculpture, most frequently Christ in Majesty and Judgment Day. If there is a central doorjamb or a trumeau, then it frequently bears a statue of the Madonna and Child. There may be much other carving, often of figures in niches set into the mouldings around the portals, or in sculptural screens extending across the façade.
Above the main portal there is generally a large window, like that at York Minster, or a group of windows such as those at Ripon Cathedral. In France there is generally a rose window like that at Reims Cathedral. Rose windows are also often found in the façades of churches of Spain and Italy, but are rarer elsewhere and are not found on the façades of any English cathedrals. The gable is usually richly decorated with arcading or sculpture or, in the case of Italy, may be decorated, along with the rest of the façade, with polychrome marble and mosaic, as at Orvieto Cathedral.
The West Front of a French cathedral and many English, Spanish and German cathedrals generally have two towers, which, particularly in France, express an enormous diversity of form and decoration. However some German cathedrals have only one tower located in the middle of the façade (such as Freiburg Münster).
thumb|The façade of Ripon Cathedral presents a composition in untraceried pointed arches.
Basic shapes of Gothic arches and stylistic character
The way in which the pointed arch was drafted and utilised developed throughout the Gothic period. There were fairly clear stages of development, which did not, however, progress at the same rate, or in the same way in every country. Moreover, the names used to define various periods or styles within Gothic architecture differ from country to country.
Lancet arch
The simplest shape is the long opening with a pointed arch known in England as the lancet. Lancet openings are often grouped, usually as a cluster of three or five. Lancet openings may be very narrow and steeply pointed. Lancet arches are typically defined as two-centered arches whose radii are larger than the arch's span.
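The proportions implied by this definition can be made concrete with a little drafting geometry. This is an illustrative sketch only: the symbols s (span), r (radius) and h (rise), and the assumption that both centres sit on the springing line, are conventions adopted here, not measurements taken from any of the buildings named in this article. For a two-centred arch whose arcs spring vertically from the imposts, the apex rises to

$$h = \sqrt{rs - \tfrac{s^{2}}{4}}, \qquad r \ge s,$$

so a lancet drawn with r = 2s rises to about 1.32 times its span, noticeably steeper than the equilateral case (r = s) treated in the next section.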
Salisbury Cathedral is famous for the beauty and simplicity of its Lancet Gothic, known in England as the Early English Style. York Minster has a group of lancet windows each fifty feet high and still containing ancient glass. They are known as the Five Sisters. These simple undecorated grouped windows are found at Chartres and Laon Cathedrals and are used extensively in Italy.
thumb|left|Windows in the Chapter House at York Minster show the equilateral arch with typical circular motifs in the tracery.
Equilateral arch
thumb|right|Equilateral Arch
Many Gothic openings are based upon the equilateral form. In other words, when the arch is drafted, the radius is exactly the width of the opening and the centre of each arch coincides with the point from which the opposite arch springs. This makes the arch higher in relation to its width than a semi-circular arch which is exactly half as high as it is wide.
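That comparison can be checked with the same sketch geometry as above (again, the symbols are illustrative assumptions rather than dimensions drawn from a particular building). With span s and each arc of radius r = s centred on the opposite springing point, the apex stands at

$$h = \sqrt{s^{2} - \left(\tfrac{s}{2}\right)^{2}} = \tfrac{\sqrt{3}}{2}\,s \approx 0.87\,s,$$

against h = 0.5 s for a semi-circular arch of the same span, so the equilateral arch is roughly three-quarters again as tall as the round arch for a given opening.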
The Equilateral Arch gives a wide opening of satisfying proportion useful for doorways, decorative arcades and large windows.
The structural beauty of the Gothic arch means, however, that no set proportion had to be rigidly maintained. The Equilateral Arch was employed as a useful tool, not as a principle of design. This meant that narrower or wider arches were introduced into a building plan wherever necessity dictated. In the architecture of some Italian cities, notably Venice, semi-circular arches are interspersed with pointed ones. (This does not happen in French or English Gothic, and so appears, to the British or French eye, to be a strange disregard for style.)
The Equilateral Arch lends itself to filling with tracery of simple equilateral, circular and semi-circular forms. The type of tracery that evolved to fill these spaces is known in England as Geometric Decorated Gothic and can be seen to splendid effect at many English and French cathedrals, notably Lincoln and Notre Dame in Paris. Windows of complex design, with three or more lights or vertical sections, are often designed by overlapping two or more equilateral arches.
Flamboyant arch
thumb|Flamboyant tracery at Limoges Cathedral.
The Flamboyant Arch is one that is drafted from four points, the upper part of each main arc turning upwards into a smaller arc and meeting at a sharp, flame-like point. These arches create a rich and lively effect when used for window tracery and surface decoration. The form is structurally weak and has very rarely been used for large openings except when contained within a larger and more stable arch. It is not employed at all for vaulting.
Some of the most beautiful and famous traceried windows of Europe employ this type of tracery. It can be seen at St Stephen's Vienna, Sainte Chapelle in Paris, at the Cathedrals of Limoges and Rouen in France. In England the most famous examples are the West Window of York Minster with its design based on the Sacred Heart, the extraordinarily rich nine-light East Window at Carlisle Cathedral and the exquisite East window of Selby Abbey.
Doorways surmounted by Flamboyant mouldings are very common in both ecclesiastical and domestic architecture in France. They are much rarer in England. A notable example is the doorway to the Chapter Room at Rochester Cathedral.
The style was much used in England for wall arcading and niches. Prime examples are in the Lady Chapel at Ely, the Screen at Lincoln and externally on the façade of Exeter Cathedral. In German and Spanish Gothic architecture it often appears as openwork screens on the exterior of buildings. The style was used to rich and sometimes extraordinary effect in both these countries, notably on the famous pulpit in Vienna Cathedral.
thumb|left|The depressed arch supported by fan vaulting at King's College Chapel, England.
Depressed arch
The depressed or four-centred arch is much wider than its height and gives the visual effect of having been flattened under pressure. Its structure is achieved by drafting two arcs which rise steeply from each springing point on a small radius and then turn into two arches with a wide radius and much lower springing point.
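The drafting rule can be stated a little more precisely as a sketch; the coordinates and symbols below are assumptions made for illustration, not a recorded medieval setting-out procedure. Place the springing points at (0, 0) and (s, 0). Each steep corner arc has its centre C on the springing line and a small radius r; each flat upper arc has its centre D = (x_D, -d) below the springing line and a much larger radius R. For the two arcs to run smoothly into one another, the junction must lie on the line through C and D, which forces

$$R = \lvert CD\rvert + r,$$

and the apex at x = s/2 then rises only to

$$h = \sqrt{R^{2} - \left(\tfrac{s}{2} - x_{D}\right)^{2}} - d,$$

the subtraction of d being what flattens the profile compared with the two-centred arches described earlier.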
This type of arch, when employed as a window opening, lends itself to very wide spaces, provided it is adequately supported by many narrow vertical shafts. These are often further braced by horizontal transoms. The overall effect produces a grid-like appearance of regular, delicate, rectangular forms with an emphasis on the perpendicular. It is also employed as a wall decoration in which arcade and window openings form part of the whole decorative surface.
The style, known as Perpendicular, that evolved from this treatment is specific to England, although very similar in some respects to contemporary Spanish style, and was employed to great effect through the 15th century and the first half of the 16th, as Renaissance styles were much slower to arrive in England than in Italy and France.
It can be seen notably at the East End of Gloucester Cathedral, where the East Window is said to be as large as a tennis court. There are three very famous royal chapels and one chapel-like Abbey which show the style at its most elaborate: King's College Chapel, Cambridge; St George's Chapel, Windsor; Henry VII's Chapel at Westminster Abbey; and Bath Abbey. However, very many simpler buildings, especially churches built during the wool boom in East Anglia, are fine examples of the style.
Symbolism and ornamentation
thumb|250px|The Royal Portal of Chartres Cathedral.
The Gothic cathedral represented the universe in microcosm and each architectural concept, including the loftiness and huge dimensions of the structure, was intended to convey a theological message: the great glory of God.
The building becomes a microcosm in two ways. Firstly, the mathematical and geometrical nature of the construction is an image of the orderly universe, in which an underlying rationality and logic can be perceived.
Secondly, the statues, sculptural decoration, stained glass and murals incorporate the essence of creation in depictions of the Labours of the Months and the ZodiacThe Zodiac comprises a sequence of twelve constellations which appear overhead in the Northern Hemisphere at fixed times of year. In a rural community with neither clock nor calendar, these signs in the heavens were crucial in knowing when crops were to be planted and certain rural activities performed. and sacred history from the Old and New Testaments and Lives of the Saints, as well as reference to the eternal in the Last Judgment and Coronation of the Virgin.
thumb|left|250px|The Devil tempting the Foolish Virgins at Strasbourg.
The decorative schemes usually incorporated Biblical stories, emphasizing visual typological allegories between Old Testament prophecy and the New Testament.
Many churches were very richly decorated, both inside and out. Sculpture and architectural details were often bright with coloured paint of which traces remain at the Cathedral of Chartres. Wooden ceilings and panelling were usually brightly coloured. Sometimes the stone columns of the nave were painted, and the panels in decorative wall arcading contained narratives or figures of saints. These have rarely remained intact, but may be seen at the Chapterhouse of Westminster Abbey.
Some important Gothic churches could be severely simple such as the Basilica of Mary Magdalene in Saint-Maximin, Provence where the local traditions of the sober, massive, Romanesque architecture were still strong.
Regional differences
thumb|Interior of Amiens Cathedral, France.
Wherever Gothic architecture is found, it is subject to local influences, and frequently the influence of itinerant stonemasons and artisans, carrying ideas between cities and sometimes between countries. Certain characteristics are typical of particular regions and often override the style itself, appearing in buildings hundreds of years apart.
France
The distinctive characteristic of French cathedrals, and those in Germany and Belgium that were strongly influenced by them, is their height and their impression of verticality. Each French cathedral tends to be stylistically unified in appearance when compared with an English cathedral, where there is great diversity in almost every building. They are compact, with slight or no projection of the transepts and subsidiary chapels. The west fronts are highly consistent, having three portals surmounted by a rose window, and two large towers. Sometimes there are additional towers on the transept ends. The east end is polygonal with an ambulatory and sometimes a chevet of radiating chapels. In the south of France, many of the major churches are without transepts and some are without aisles.
thumb|left|250px|The longitudinal emphasis in the nave of Wells is typically English.
England
The distinctive characteristic of English cathedrals is their extreme length, and their internal emphasis upon the horizontal, which may be emphasised visually as much or more than the vertical lines. Each English cathedral (with the exception of Salisbury) has an extraordinary degree of stylistic diversity, when compared with most French, German and Italian cathedrals. It is not unusual for every part of the building to have been built in a different century and in a different style, with no attempt at creating a stylistic unity. Unlike French cathedrals, English cathedrals sprawl across their sites, with double transepts projecting strongly and Lady Chapels tacked on at a later date, such as at Westminster Abbey. In the west front, the doors are not as significant as in France, the usual congregational entrance being through a side porch. The West window is very large and never a rose, which are reserved for the transept gables. The west front may have two towers like a French Cathedral, or none. There is nearly always a tower at the crossing and it may be very large and surmounted by a spire. The distinctive English east end is square, but it may take a completely different form. Both internally and externally, the stonework is often richly decorated with carvings, particularly the capitals.
Germany, Poland and the Czech Republic
thumb|upright|Interior of the Vladislav Hall at the Prague Castle.
Romanesque architecture in Germany, Poland and the Czech Republic (earlier called Bohemia) is characterised by its massive and modular nature. This characteristic is also expressed in the Gothic architecture of Central Europe in the huge size of the towers and spires, often projected, but not always completed.Freiburg, Regensburg, Strasbourg, Vienna, Ulm, Cologne, Antwerp, Gdansk, Wroclaw. Gothic design in Germany and the Czech lands generally follows the French formula, but the towers are much taller and, if complete, are surmounted by enormous openwork spires that are a regional feature. Because of the size of the towers, the section of the façade between them may appear narrow and compressed. The distinctive character of the interior of German Gothic cathedrals is their breadth and openness. This is the case even when, as at Cologne, they have been modelled upon a French cathedral. German and Czech cathedrals, like the French, tend not to have strongly projecting transepts. There are also many hall churches (Hallenkirchen) without clerestory windows. In contrast to the Gothic designs found in German and Czech areas, which followed the French patterns, Brick Gothic was particularly prevalent in Poland. Polish Gothic architecture is characterized by its utilitarian nature, with very limited use of sculpture and a heavy exterior design.
Spain and Portugal
thumb|left|250px|Burgos Cathedral in Burgos, Spain.
The distinctive characteristic of Gothic cathedrals of the Iberian Peninsula is their spatial complexity, with many areas of different shapes leading from each other. They are comparatively wide, and often have very tall arcades surmounted by low clerestories, giving a similar spacious appearance to the Hallenkirche of Germany, as at the Church of the Batalha Monastery in Portugal. Many of the cathedrals are completely surrounded by chapels. Like English cathedrals, each is often stylistically diverse. This expresses itself both in the addition of chapels and in the application of decorative details drawn from different sources. Among the influences on both decoration and form are Islamic architecture and, towards the end of the period, Renaissance details combined with the Gothic in a distinctive manner. The West front, as at Leon Cathedral, typically resembles a French west front, but is wider in proportion to height and often has greater diversity of detail and a combination of intricate ornament with broad plain surfaces. At Burgos Cathedral there are spires of German style. The roofline often has pierced parapets with comparatively few pinnacles. There are often towers and domes of a great variety of shapes and structural invention rising above the roof.
Catalonia
thumb|left|250px|Barcelona Cathedral has a wide nave with the clerestory windows nestled under the vault.
In Catalonia and the territories under its influence (Northern Catalonia in France, the Balearic Islands, the Valencian Country and, among others, the Italian islands), the Gothic style suppressed the transept and made the aisles almost as high as the main nave, allowing the creation of very wide spaces with little ornament; this is known as the Catalan Gothic style (distinct from the Spanish or French styles).
The most important examples of the Catalan Gothic style are the cathedrals of Girona, Barcelona, Perpignan and Palma (in Mallorca), the basilica of Santa Maria del Mar (in Barcelona), the Basílica del Pi (in Barcelona), and the church of Santa Maria de l'Alba in Manresa.
Italy
thumb|250px|The clear proportions of Florence Cathedral are defined by dark stone against the colour-washed plastered brick.
The distinctive characteristic of Italian Gothic is the use of polychrome decoration, both externally as marble veneer on the brick façade and also internally where the arches are often made of alternating black and white segments, and where the columns may be painted red, the walls decorated with frescoes and the apse with mosaic. The plan is usually regular and symmetrical, and Italian cathedrals have few, widely spaced columns. The proportions are generally mathematically equilibrated, based on the square and the concept of "armonìa", and, except in Venice where flamboyant arches were favoured, the arches are almost always equilateral. Colours and moldings define the architectural units rather than blending them. Italian cathedral façades are often polychrome and may include mosaics in the lunettes over the doors. The façades have projecting open porches and ocular or wheel windows rather than roses, and do not usually have a tower. The crossing is usually surmounted by a dome. There is often a free-standing tower and baptistry. The eastern end usually has an apse of comparatively low projection. The windows are not as large as in northern Europe and, although stained glass windows are often found, the favourite narrative medium for the interior is the fresco.
Other Gothic buildings
Synagogues were commonly built in the Gothic style in Europe during the Medieval period. A surviving example is the Old New Synagogue in Prague built in the 13th century.
thumb|250px|Façade of Doge's Palace in Venice, Italy.
The Palais des Papes in Avignon is the best complete large royal palace, alongside the Royal Palace of Olite, built during the 13th and 14th centuries for the kings of Navarre. Malbork Castle, built for the master of the Teutonic Order, is an example of Brick Gothic architecture. Partial survivals of former royal residences include the Doge's Palace of Venice, the Palau de la Generalitat in Barcelona, built in the 15th century for the kings of Aragon, and the famous Conciergerie, former palace of the kings of France, in Paris.
thumb|left|250px|Gallery of Palau de la Generalitat in Barcelona, Spain.
Secular Gothic architecture can also be found in a number of public buildings such as town halls, universities, markets or hospitals. The Gdańsk, Wrocław and Stralsund town halls are remarkable examples of northern Brick Gothic built in the late 14th century. The Belfry of Bruges and Brussels Town Hall, built during the 15th century, are associated with the increasing wealth and power of the bourgeoisie in the late Middle Ages; by the 15th century, the traders of the trade cities of Burgundy had acquired such wealth and influence that they could afford to express their power by funding lavishly decorated buildings of vast proportions. Such expressions of secular and economic power are also found in other late mediaeval commercial cities, including the Llotja de la Seda of Valencia, Spain, a purpose-built silk exchange dating from the 15th century, in the partial remains of Westminster Hall in the Houses of Parliament in London, or the Palazzo Pubblico in Siena, Italy, a 13th-century town hall built to host the offices of the then prosperous republic of Siena. Other Italian cities such as Florence (Palazzo Vecchio), Mantua or Venice also host remarkable examples of secular public architecture.
thumb|250px|Courtyard of Collegium Maius in Kraków, Poland.
By the late Middle Ages university towns had grown in wealth and importance as well, and this was reflected in the buildings of some of Europe's ancient universities. Particularly remarkable examples still standing nowadays include the Collegio di Spagna in the University of Bologna, built during the 14th and 15th centuries; the Collegium Carolinum of the University of Prague in Bohemia; the Escuelas mayores of the University of Salamanca in Spain; the chapel of King's College, Cambridge; or the Collegium Maius of the Jagiellonian University in Kraków, Poland.
In addition to monumental secular architecture, examples of the Gothic style in private buildings can be seen in surviving medieval portions of cities across Europe, above all the distinctive Venetian Gothic such as the Ca' d'Oro. The house of the wealthy early 15th-century merchant Jacques Coeur in Bourges is the classic Gothic bourgeois mansion, full of the asymmetry and complicated detail beloved of the Gothic Revival.Begun in 1443.
Other cities with a concentration of secular Gothic include Bruges and Siena. Most surviving small secular buildings are relatively plain and straightforward; most windows are flat-topped with mullions, with pointed arches and vaulted ceilings often only found at a few focal points. The country-houses of the nobility were slow to abandon the appearance of being a castle, even in parts of Europe, like England, where defence had ceased to be a real concern. The living and working parts of many monastic buildings survive, for example at Mont Saint-Michel.
Exceptional works of Gothic architecture can also be found on the islands of Sicily and Cyprus, in the walled cities of Nicosia and Famagusta. Also, the roofs of the Old Town Hall in Prague and the Znojmo Town Hall Tower in the Czech Republic are excellent examples of late Gothic craftsmanship.
Gothic survival and revival
thumb|Western façade of Westminster Abbey, London, completed in 1745
In 1663 at the Archbishop of Canterbury's residence, Lambeth Palace, a Gothic hammerbeam roof was built to replace that destroyed when the building was sacked during the English Civil War. Also in the late 17th century, some discrete Gothic details appeared on new construction at Oxford University and Cambridge University, notably on Tom Tower at Christ Church, Oxford, by Christopher Wren. It is not easy to decide whether these instances were Gothic survival or early appearances of Gothic revival.
Ireland was a focus for Gothic architecture in the 17th and 18th centuries. Derry Cathedral (completed 1633), Sligo Cathedral (c. 1730), and Down Cathedral (1790–1818) are notable examples. The term "Planter's Gothic" has been applied to the most typical of these.Bob Hunter, "Londonderry Cathedral". BBC.
In England in the mid-18th century, the Gothic style was more widely revived, first as a decorative, whimsical alternative to Rococo that is still conventionally termed 'Gothick', of which Horace Walpole's Twickenham villa "Strawberry Hill" is the familiar example.
19th- and 20th-century Gothic Revival
thumb|left|Big Ben (completed in 1859) and the Houses of Parliament in London
In England, partly in response to a philosophy propounded by the Oxford Movement and others associated with the emerging revival of 'high church' or Anglo-Catholic ideas during the second quarter of the 19th century, neo-Gothic began to be promoted by influential establishment figures as the preferred style for ecclesiastical, civic and institutional architecture. The appeal of this Gothic revival (which after 1837, in Britain, is sometimes termed Victorian Gothic) gradually widened to encompass "low church" as well as "high church" clients. This period of more universal appeal, spanning 1855–1885, is known in Britain as High Victorian Gothic.
The Houses of Parliament in London by Sir Charles Barry, with interiors by a major exponent of the early Gothic Revival, Augustus Welby Pugin, is an example of the Gothic revival style from its earlier period in the second quarter of the 19th century. Examples from the High Victorian Gothic period include George Gilbert Scott's design for the Albert Memorial in London, and William Butterfield's chapel at Keble College, Oxford. From the second half of the 19th century onwards it became more common in Britain for neo-Gothic to be used in the design of non-ecclesiastical and non-governmental building types. Gothic details even began to appear in working-class housing schemes subsidised by philanthropy, though, given the expense, less frequently than in the design of upper- and middle-class housing.
thumb|Gasson Hall on the campus of Boston College in Chestnut Hill, Massachusetts.
In France, simultaneously, the towering figure of the Gothic Revival was Eugène Viollet-le-Duc, who went beyond historical Gothic constructions to create a Gothic as it ought to have been, notably at the fortified city of Carcassonne in the south of France and in some richly fortified keeps for industrial magnates. Viollet-le-Duc compiled and coordinated an Encyclopédie médiévale that was a rich repertory his contemporaries mined for architectural details. He effected vigorous restoration of crumbling detail of French cathedrals, including the Abbey of Saint-Denis and, famously, Notre Dame de Paris, many of whose most "Gothic" gargoyles are Viollet-le-Duc's. He taught a generation of reform-Gothic designers and showed how to apply Gothic style to modern structural materials, especially cast iron.
In Germany, the great cathedral of Cologne and the Ulm Minster, left unfinished for 600 years, were brought to completion, while in Italy, Florence Cathedral finally received its polychrome Gothic façade. New churches in the Gothic style were created all over the world, including Mexico, Argentina, Japan, Thailand, India, Australia, New Zealand, Hawaii and South Africa.
thumb|left|250px|St. Alexander Nevsky Gothic Chapel (Peterhof), completed in 1834
As in Europe, the United States, Canada, Australia and New Zealand utilised Neo-Gothic for the building of universities, a fine example being the University of Sydney by Edmund Blacket. In Canada, the Canadian Parliament Buildings in Ottawa, designed by Thomas Fuller and Chilion Jones, with their huge centrally placed tower, are influenced by Flemish Gothic buildings.
Although falling out of favour for domestic and civic use, Gothic for churches and universities continued into the 20th century with buildings such as Liverpool Cathedral, the Cathedral of Saint John the Divine, New York and São Paulo Cathedral, Brazil. The Gothic style was also applied to iron-framed city skyscrapers such as Cass Gilbert's Woolworth Building and Raymond Hood's Tribune Tower.
Post-Modernism in the late 20th and early 21st centuries has seen some revival of Gothic forms in individual buildings, such as the Gare do Oriente in Lisbon, Portugal, and the completion of the Cathedral of Our Lady of Guadalupe in Mexico.
See also
About medieval Gothic in particular
Castle
Catenary arch
Czech Gothic architecture
English Gothic architecture
French Gothic architecture
Italian Gothic architecture
List of Gothic architecture
Medieval architecture
Middle Ages in history
Polish Gothic architecture
Portuguese Gothic architecture
Renaissance of the 12th century
Spanish Gothic architecture
Gothic secular and domestic architecture
About Gothic architecture more generally or in other senses
Architectural history
Architectural style
Architecture of cathedrals and great churches
Sondergotik
Gothicmed
Gothic Revival architecture
Carpenter Gothic
Collegiate Gothic in North America
Tented roof
Further reading
Fletcher, Banister; Cruickshank, Dan, Sir Banister Fletcher's A History of Architecture, Architectural Press, 20th edition, 1996 (first published 1896). ISBN 0-7506-2267-9. Cf. Part Two, Chapter 14.
Glaser, Stephanie, "The Gothic Cathedral and Medievalism," in: Falling into Medievalism, ed. Anne Lair and Richard Utz. Special Issue of UNIversitas: The University of Northern Iowa Journal of Research, Scholarship, and Creative Activity, 2.1 (2006). (on the Gothic revival of the 19th century and the depictions of Gothic cathedrals in the Arts)
Rudolph, Conrad, ed., A Companion to Medieval Art: Romanesque and Gothic in Northern Europe, 2nd ed. (2016)
Tonazzi, Pascal (2007) Florilège de Notre-Dame de Paris (anthologie), Editions Arléa, Paris, ISBN 2-86959-795-9
External links
Mapping Gothic France, a project by Columbia University and Vassar College with a database of images, 360° panoramas, texts, charts and historical maps
Gothic Architecture Encyclopædia Britannica
Gutenberg.org, from Project Gutenberg
Archive.org, from Internet Archive
Category:Architectural history
Category:Architectural styles
Category:European architecture
Category:English architecture
Category:Italian architecture
Category:Medieval French architecture
Category:Roman Catholic Church architecture
Category:12th-century architecture
Category:13th-century architecture
Category:14th-century architecture
Category:15th-century architecture
Category:16th-century architecture
de:Gotik#Baukunst | 54,044 | 2017-01 |
Steven Spielberg | Steven Allan Spielberg (born December 18, 1946) is an American director, producer, and screenwriter. He is considered one of the founding pioneers of the New Hollywood era, as well as being viewed as one of the most popular directors and producers in film history.The cinema of Steven Spielberg: Empire of light. Nigel Morris. Wallflower Press. 2007 He is also one of the co-founders of DreamWorks Studios.
In a career spanning more than four decades, Spielberg's films have spanned many themes and genres. Spielberg's early science-fiction and adventure films, such as Jaws (1975), Close Encounters of the Third Kind (1977), Raiders of the Lost Ark (1981), and E.T. the Extra-Terrestrial (1982), were seen as archetypes of modern Hollywood escapist filmmaking. In later years, his films began addressing humanistic issues such as the Holocaust, the transatlantic slave trade, civil rights, war, and terrorism in such films as The Color Purple (1985), Empire of the Sun (1987), Schindler's List (1993), Amistad (1997), Saving Private Ryan (1998), Munich (2005), War Horse (2011), Lincoln (2012), and Bridge of Spies (2015). His other films include Jurassic Park (1993), A.I. Artificial Intelligence (2001), and War of the Worlds (2005).
Spielberg won the Academy Award for Best Director for Schindler's List and Saving Private Ryan, as well as receiving five other nominations.Directors with two or more Oscars Three of Spielberg's films—Jaws, E.T. the Extra-Terrestrial, and Jurassic Park—achieved box office records, originating and coming to epitomize the blockbuster film. The unadjusted gross of all Spielberg-directed films exceeds $9 billion worldwide, making him the highest-grossing director in history. His personal net worth is estimated to be more than $3 billion. He is also known for his long-standing associations with several actors, producers, and technicians, most notably composer John Williams, who has composed music for all but two of Spielberg's films: The Color Purple and Bridge of Spies.
Early life
Spielberg was born in Cincinnati, Ohio, to an Orthodox Jewish family. His mother, Leah (Adler) Posner (born 1920), was a restaurateur and concert pianist, and his father, Arnold Spielberg (born 1917), was an electrical engineer involved in the development of computers. His paternal grandparents were immigrants from Ukraine who settled in Cincinnati in the first decade of the 1900s. In 1950, his family moved to Haddon Township, New Jersey when his father took a job with RCA. Three years later, the family moved to Phoenix, Arizona. Spielberg attended Hebrew school from 1953 to 1957, in classes taught by Rabbi Albert L. Lewis.
As a child, Spielberg faced difficulty reconciling being an Orthodox Jew with the perception of him by other children he played with. "It isn't something I enjoy admitting," he once said, "but when I was seven, eight, nine years old, God forgive me, I was embarrassed because we were Orthodox Jews. I was embarrassed by the outward perception of my parents' Jewish practices. I was never really ashamed to be Jewish, but I was uneasy at times." Spielberg also said he suffered from acts of anti-Semitic prejudice and bullying: "In high school, I got smacked and kicked around. Two bloody noses. It was horrible."
His first home movie, made when he was 12 years old, was of a train wreck involving his toy Lionel trains. Throughout his early teens, and after entering high school, Spielberg continued to make amateur 8 mm "adventure" films.
In 1958, he became a Boy Scout and fulfilled a requirement for the photography merit badge by making a nine-minute 8 mm film entitled The Last Gunfight. Years later, Spielberg recalled to a magazine interviewer, "My dad's still-camera was broken, so I asked the scoutmaster if I could tell a story with my father's movie camera. He said yes, and I got an idea to do a Western. I made it and got my merit badge. That was how it all started."
At age thirteen, while living in Phoenix, Spielberg won a prize for a 40-minute war film he titled Escape to Nowhere, using a cast composed of other high school friends. That motivated him to make 15 more amateur 8mm films. In 1963, at age sixteen, Spielberg wrote and directed his first independent film, a 140-minute science fiction adventure called Firelight, which would later inspire Close Encounters. The film was made for $500, most of which came from his father, and was shown in a local cinema for one evening, which earned back its cost.From Inside the Actor's Studio with James Lipton interviewing Steven Spielberg.
After attending Arcadia High School in Phoenix for three years, his family next moved to Saratoga, California where he later graduated from Saratoga High School in 1965. He attained the rank of Eagle Scout. His parents divorced while he was still in school, and soon after he graduated Spielberg moved to Los Angeles, staying initially with his father. His long-term goal was to become a film director. His three sisters and mother remained in Saratoga.
In Los Angeles, he applied to the University of Southern California's film school, but was turned down because of his "C" grade average.Fischer, Dennis. Science Fiction Film Directors, 1895-1998, McFarland & Co. (2000) He then applied and was admitted to California State University, Long Beach, where he became a brother of Theta Chi Fraternity."Notable Theta Chis" , Theta Chi Fraternity alumni
While still a student, he was offered an unpaid internship in the editing department at Universal Studios. He was later given the opportunity to make a short film for theatrical release, the 26-minute, 35 mm Amblin', which he wrote and directed. Studio vice president Sidney Sheinberg was impressed by the film, which had won a number of awards, and offered Spielberg a seven-year directing contract. It made him the youngest director ever to be signed for a long-term deal with a major Hollywood studio. He subsequently dropped out of college to begin professionally directing TV productions with Universal. Spielberg later returned to Cal State Long Beach and completed his BA degree in Film and Electronic Arts in 2002.http://www.telegraph.co.uk/culture/film/3579578/Spielberg-why-I-went-back-to-college.html
Career
1970s
His first professional TV job came when he was hired to direct one of the segments for the 1969 pilot episode of Night Gallery, written by Rod Serling and starring Joan Crawford. Crawford, however, was "speechless, and then horrified" at the thought of a twenty-one-year-old newcomer directing her, one of Hollywood's leading stars. "Why was this happening to me?" she asked the producer.Chandler, Charlotte. Not the Girl Next Door: Joan Crawford, a Personal Biography, Hal Leonard Corp. (2008) p. 261 Her attitude changed after they began working on her scenes.
She and Spielberg were reportedly close friends until her death. The episode is unusual in his body of work, in that the camerawork is more highly stylized than his later, more "mature" films. After this, and an episode of Marcus Welby, M.D., Spielberg got his first feature-length assignment: an episode of The Name of the Game called "L.A. 2017". This futuristic science fiction episode impressed Universal Studios and they signed him to a short contract. He did another segment on Night Gallery and did some work for shows such as Owen Marshall: Counselor at Law and The Psychiatrist, before landing the first series episode of Columbo (previous episodes were actually TV films).
Based on the strength of his work, Universal signed Spielberg to do four TV films. The first was a Richard Matheson adaptation called Duel. The film is about a psychotic Peterbilt 281 tanker truck driver who chases the terrified driver (Dennis Weaver) of a small Plymouth Valiant and tries to run him off the road. Special praise of this film by the influential British critic Dilys Powell was highly significant to Spielberg's career. Another TV film (Something Evil) was made and released to capitalize on the popularity of The Exorcist, then a major best-selling book which had not yet been released as a film. He fulfilled his contract by directing the TV film-length pilot of a show called Savage, starring Martin Landau. Spielberg's debut full-length feature film was The Sugarland Express, about a married couple who are chased by police as the couple tries to regain custody of their baby. Spielberg's cinematography for the police chase was praised by reviewers, and The Hollywood Reporter stated that "a major new director is on the horizon." However, the film fared poorly at the box office and received a limited release.
Studio producers Richard D. Zanuck and David Brown offered Spielberg the director's chair for Jaws, a thriller-horror film based on the Peter Benchley novel about an enormous killer shark. Spielberg has often referred to the gruelling shoot as his professional crucible. Despite the film's ultimate, enormous success, it was nearly shut down due to delays and budget over-runs. But Spielberg persevered and finished the film. It was an enormous hit, winning three Academy Awards (for editing, original score and sound) and grossing more than $470 million worldwide at the box office. It also set the domestic record for box office gross, leading to what the press described as "Jawsmania." Jaws made Spielberg a household name and one of America's youngest multi-millionaires, allowing him a great deal of autonomy for his future projects. It was nominated for Best Picture and featured Spielberg's first of three collaborations with actor Richard Dreyfuss.
Rejecting offers to direct Jaws 2, King Kong and Superman, Spielberg and actor Richard Dreyfuss re-convened to work on a film about UFOs, which became Close Encounters of the Third Kind (1977). One of the rare films both written and directed by Spielberg, Close Encounters was a critical and box office hit, giving Spielberg his first Best Director nomination from the Academy as well as earning six other Academy Award nominations. It won Oscars in two categories (Cinematography, Vilmos Zsigmond, and a Special Achievement Award for Sound Effects Editing, Frank E. Warner). This second blockbuster helped to secure Spielberg's rise. His next film, 1941, a big-budgeted World War II farce, was not nearly as successful: though it grossed over $92.4 million worldwide (and did make a small profit for co-producing studios Columbia and Universal), it was seen as a disappointment, mainly among critics."1941, Box Office Information." The Numbers, September 27, 2012.
Spielberg then revisited his Close Encounters project and, with financial backing from Columbia Pictures, released Close Encounters: The Special Edition in 1980. For this, Spielberg fixed some of the flaws he thought impeded the original 1977 version of the film and also, at the behest of Columbia, and as a condition of Spielberg revising the film, shot additional footage showing the audience the interior of the mothership seen at the end of the film (a decision Spielberg would later regret as he felt the interior of the mothership should have remained a mystery). Nevertheless, the re-release was a moderate success, while the 2001 DVD release of the film restored the original ending.
1980s
Next, Spielberg teamed with Star Wars creator and friend George Lucas on an action adventure film, Raiders of the Lost Ark, the first of the Indiana Jones films. The archaeologist and adventurer hero Indiana Jones was played by Harrison Ford (whom Lucas had previously cast in his Star Wars films as Han Solo). The film was considered an homage to the cliffhanger serials of the Golden Age of Hollywood. It became the biggest film at the box office in 1981, and the recipient of numerous Oscar nominations including Best Director (Spielberg's second nomination) and Best Picture (the second Spielberg film to be nominated for Best Picture). Raiders is still considered a landmark example of the action-adventure genre. The film also led to Ford's casting in Ridley Scott's Blade Runner.
A year later, Spielberg returned to the science fiction genre with E.T. the Extra-Terrestrial. It was the story of a young boy and the alien he befriends, who was accidentally left behind by his companions and is attempting to return home. E.T. went on to become the top-grossing film of all time. It was also nominated for nine Academy Awards including Best Picture and Best Director.
Between 1982 and 1985, Spielberg produced three high-grossing films: Poltergeist (for which he also co-wrote the screenplay), a big-screen adaptation of The Twilight Zone (for which he directed the segment "Kick The Can"), and The Goonies (Spielberg, executive producer, also wrote the story on which the screenplay was based). Spielberg appeared in a cameo on Cyndi Lauper's music video for the movie's theme song, "The Goonies 'R' Good Enough".
thumb|200px|right|Steven Spielberg and Chandran Rutnam on a location in Sri Lanka during the filming of Indiana Jones and the Temple of Doom.
His next directorial feature was the Raiders prequel Indiana Jones and the Temple of Doom. Spielberg teamed up once again with Lucas and Ford, but the production was plagued with uncertainty over the material and script. This film and the Spielberg-produced Gremlins led to the creation of the PG-13 rating due to the high level of violence in films targeted at younger audiences. In spite of this, Temple of Doom is rated PG by the MPAA, even though it is the darkest and, possibly, most violent Indy film. Nonetheless, the film was still a huge blockbuster hit in 1984. It was on this project that Spielberg also met his future wife, actress Kate Capshaw.
In 1985, Spielberg released The Color Purple, an adaptation of Alice Walker's Pulitzer Prize-winning novel of the same name, about a generation of empowered African-American women in Depression-era America. Starring Whoopi Goldberg and future talk-show superstar Oprah Winfrey, the film was a box office smash, and critics hailed Spielberg's successful foray into the dramatic genre. Roger Ebert proclaimed it the best film of the year and later entered it into his Great Films archive. The film received eleven Academy Award nominations, including two for Goldberg and Winfrey. However, much to the surprise of many, Spielberg did not receive a Best Director nomination.
In 1987, as China began opening to Western capital investment, Spielberg shot the first American film in Shanghai since the 1930s, an adaptation of J. G. Ballard's autobiographical novel Empire of the Sun, starring John Malkovich and a young Christian Bale. The film garnered much praise from critics and was nominated for several Oscars, but did not yield substantial box office revenues. Reviewer Andrew Sarris called it the best film of the year and later included it among the best films of the decade. Spielberg was also a co-producer of the 1987 film *batteries not included.
After two forays into more serious dramatic films, Spielberg then directed the third Indiana Jones film, 1989's Indiana Jones and the Last Crusade. Once again teaming up with Lucas and Ford, Spielberg also cast actor Sean Connery in a supporting role as Indy's father. The film earned generally positive reviews and was another box office success, becoming the highest-grossing film worldwide that year; its total box office receipts even topped those of Tim Burton's much-anticipated film Batman, which had been the bigger hit domestically. Also in 1989, he re-united with actor Richard Dreyfuss for the romantic comedy-drama Always, about a daredevil pilot who extinguishes forest fires. Spielberg's first romantic film, Always was only a moderate success and had mixed reviews.
1990s
In 1991, Spielberg directed Hook, about a middle-aged Peter Pan, played by Robin Williams, who returns to Neverland. Despite innumerable rewrites and creative changes coupled with mixed reviews, the film proved popular with audiences, making over $300 million worldwide (from a $70 million budget).
In 1993, Spielberg returned to the adventure genre with the film version of Michael Crichton's novel Jurassic Park, about a theme park with genetically engineered dinosaurs. With revolutionary special effects provided by friend George Lucas's Industrial Light & Magic company, the film would eventually become the highest-grossing film of all time (at the worldwide box office) with $914.7 million. This would be the third time that one of Spielberg's films became the highest-grossing film ever.
Spielberg's next film, Schindler's List, was based on the true story of Oskar Schindler, a man who risked his life to save 1,100 Jews from the Holocaust. The screenplay, adapted from Thomas Keneally's novel, was originally in the hands of fellow director Martin Scorsese, but Spielberg negotiated with Scorsese to trade scripts. (At the time, Spielberg held the script for a remake of Cape Fear.) Schindler's List earned Spielberg his first Academy Award for Best Director (it also won Best Picture). With the film a huge success at the box office, Spielberg used the profits to set up the Shoah Foundation, a non-profit organization that archives filmed testimony of Holocaust survivors. In 1997, the American Film Institute listed it ninth among the 100 greatest American films ever made; it moved up to eighth when the list was remade in 2007.
thumb|upright|Spielberg in March 1990
In 1994, Spielberg took a hiatus from directing to spend more time with his family and build his new studio, DreamWorks, with partners Jeffrey Katzenberg and David Geffen. In 1996, he directed the sequel to 1993's Jurassic Park with The Lost World: Jurassic Park, which generated over $618 million worldwide despite mixed reviews, and was the second biggest film of 1997 behind James Cameron's Titanic (which topped the original Jurassic Park to become the new recordholder for box office receipts).
His next film, Amistad, was based on a true story (like Schindler's List), specifically about an African slave rebellion. Despite decent reviews from critics, it did not do well at the box office. Spielberg released Amistad under DreamWorks Pictures (formed with former Disney animation executive Jeffrey Katzenberg and media mogul David Geffen, who provide the other letters of the studio's SKG initials), which has produced all of his films from Amistad onwards with the exception of Indiana Jones and the Kingdom of the Crystal Skull, The Adventures of Tintin and Ready Player One.
In 1998, Spielberg re-visited Close Encounters yet again, this time for a more definitive 137-minute "Collector's Edition" that puts more emphasis on the original 1977 release, while adding some elements of the previous 1980 "Special Edition," but deleting the latter version's "Mothership Finale," which Spielberg regretted shooting in the first place, feeling it should have remained ambiguous in the minds of viewers.
His next theatrical release in that same year was the World War II film Saving Private Ryan, about a group of U.S. soldiers led by Capt. Miller (Tom Hanks) sent to bring home a paratrooper whose three older brothers were killed in the same twenty-four hours, June 5–6, of the Normandy landing. The film was a huge box office success, grossing over $481 million worldwide and was the biggest film of the year at the North American box office (worldwide it made second place after Michael Bay's Armageddon). Spielberg won his second Academy Award for his direction. The film's graphic, realistic depiction of combat violence influenced later war films such as Black Hawk Down and Enemy at the Gates. The film was also the first major hit for DreamWorks, which co-produced the film with Paramount Pictures (as such, it was Spielberg's first release from the latter that was not part of the Indiana Jones series). Later, Spielberg and Tom Hanks produced a TV mini-series based on Stephen Ambrose's book Band of Brothers. The ten-part HBO mini-series follows Easy Company of the 101st Airborne Division's 506th Parachute Infantry Regiment. The series won a number of awards at the Golden Globes and the Emmys.
2000s
In 2001, Spielberg filmed fellow director and friend Stanley Kubrick's final project, A.I. Artificial Intelligence which Kubrick was unable to begin during his lifetime. A futuristic film about a humanoid android longing for love, A.I. featured groundbreaking visual effects and a multi-layered, allegorical storyline, adapted by Spielberg himself. Though the film's reception in the US was relatively muted, it performed better overseas for a worldwide total box office gross of $236 million.
Spielberg and actor Tom Cruise collaborated for the first time on the futuristic neo-noir Minority Report, based upon the science fiction short story by Philip K. Dick about a Washington, D.C. police captain in the year 2054 who has been foreseen to murder a man he has not yet met. The film received strong reviews, with the review-aggregation website Rotten Tomatoes giving it a 92% approval rating and reporting that 206 of the 225 reviews it tallied were positive. The film earned over $358 million worldwide. Roger Ebert, who named it the best film of 2002, praised its breathtaking vision of the future as well as the way Spielberg blended CGI with live-action.
thumb|Spielberg in 2011, at the Paris premiere of The Adventures of Tintin: The Secret of the Unicorn.
Spielberg's 2002 film Catch Me If You Can is about the daring adventures of a youthful con artist (played by Leonardo DiCaprio). It earned Christopher Walken an Academy Award nomination for Best Supporting Actor. The film is known for John Williams' score and its unique title sequence. It was a hit both commercially and critically.
Spielberg collaborated again with Tom Hanks along with Catherine Zeta-Jones and Stanley Tucci in 2004's The Terminal, a warm-hearted comedy about a man of Eastern European descent who is stranded in an airport. It received mixed reviews but performed relatively well at the box office. In 2005, Empire magazine ranked Spielberg number one on a list of the greatest film directors of all time.
Also in 2005, Spielberg directed a modern adaptation of War of the Worlds (a co-production of Paramount and DreamWorks), based on the H. G. Wells book of the same name (Spielberg had been a huge fan of the book and the original 1953 film). It starred Tom Cruise and Dakota Fanning, and, as with past Spielberg films, Industrial Light & Magic (ILM) provided the visual effects. Unlike E.T. and Close Encounters of the Third Kind, which depicted friendly alien visitors, War of the Worlds featured violent invaders. The film was another huge box office smash, grossing over $591 million worldwide.
Spielberg's film Munich, about the events following the 1972 Munich Massacre of Israeli athletes at the Olympic Games, was his second film essaying Jewish relations in the world (the first being Schindler's List). The film is based on Vengeance, a book by Canadian journalist George Jonas. It was previously adapted into the 1986 made-for-TV film Sword of Gideon. The film received strong critical praise, but underperformed at the U.S. and world box-office; it remains one of Spielberg's most controversial films to date. Munich received five Academy Awards nominations, including Best Picture, Film Editing, Original Music Score (by John Williams), Best Adapted Screenplay, and Best Director for Spielberg. It was Spielberg's sixth Best Director nomination and fifth Best Picture nomination.
In June 2006, Steven Spielberg announced he would direct a scientifically accurate film about "a group of explorers who travel through a worm hole and into another dimension", from a treatment by Kip Thorne and producer Lynda Obst. In January 2007, screenwriter Jonathan Nolan met with them to discuss adapting Obst and Thorne's treatment into a narrative screenplay. The screenwriter suggested the addition of a "time element" to the treatment's basic idea, which was welcomed by Obst and Thorne. In March of that year, Paramount hired Nolan, as well as scientists from Caltech, forming a workshop to adapt the treatment under the title Interstellar. The following July, Kip Thorne said there was a push by people for him to portray himself in the film. Spielberg later abandoned Interstellar, which was eventually directed by Christopher Nolan.
Spielberg directed Indiana Jones and the Kingdom of the Crystal Skull, which wrapped filming in October 2007 and was released on May 22, 2008. This was his first film not to be released by DreamWorks since 1997. The film received generally positive reviews from critics, and was financially successful, grossing $786 million worldwide.
2010s
thumb|left|Spielberg at his masterclass at the Cinémathèque Française in January 2012.
thumb|Spielberg promoting The BFG at the 2016 Cannes Film Festival.
In early 2009, Spielberg shot the first film in a planned trilogy of motion capture films based on The Adventures of Tintin, written by Belgian artist Hergé,"The Man Behind Boy, Dog and Their Adventures" Book review by Charles McGrath, The New York Times, December 22, 2009 (December 23, 2009, p. C1 NY ed.). Book reviewed: Hergé: The Man Who Created Tintin, by Pierre Assouline; translated by Charles Ruas, 276 pages. Oxford University Press. Retrieved December 24, 2009. with Peter Jackson. The Adventures of Tintin: The Secret of the Unicorn, was not released until October 2011, due to the complexity of the computer animation involved. The world premiere took place on October 22, 2011 in Brussels, Belgium. The film was released in North American theaters on December 21, 2011, in Digital 3D and IMAX. It received generally positive reviews from critics, and grossed over $373 million worldwide. The Adventures of Tintin won the award for Best Animated Feature Film at the Golden Globe Awards that year. It is the first non-Pixar film to win the award since the category was first introduced. Jackson has been announced to direct the second film.
Spielberg followed with War Horse, shot in England in the summer of 2010."Steven Spielberg starts filming War Horse on Dartmoor" by Tristan Nichols, The Herald August 3, 2010 It was released just four days after The Adventures of Tintin, on December 25, 2011. The film, based on the novel of the same name written by Michael Morpurgo and published in 1982, follows the long friendship between a British boy and his horse Joey before and during World War I – the novel was also adapted into a hit play in London which is still running there, as well as on Broadway. Distributed by Walt Disney Studios, with whom DreamWorks made a distribution deal in 2009, War Horse was the first of four consecutive Spielberg films released by Disney. War Horse received generally positive reviews from critics, and was nominated for six Academy Awards, including Best Picture.
Spielberg next directed the historical drama film Lincoln, starring Daniel Day-Lewis as United States President Abraham Lincoln and Sally Field as Mary Todd Lincoln. Based on Doris Kearns Goodwin's bestseller Team of Rivals: The Political Genius of Abraham Lincoln, the film covered the final four months of Lincoln's life. Written by Tony Kushner, the film was shot in Richmond, Virginia, in late 2011, and was released in the United States in November 2012. Upon release, Lincoln received widespread critical acclaim, and was nominated for twelve Academy Awards (the most of any film that year), including Best Picture and Best Director for Spielberg. It won the award for Best Production Design, and Day-Lewis won the Academy Award for Best Actor for his portrayal of Lincoln, becoming the first three-time winner in that category as well as the first actor to win for a performance directed by Spielberg.
It was announced on May 2, 2013, that Spielberg would direct the film about the story of U.S. sniper Chris Kyle, titled American Sniper. However, on August 5, 2013, it was announced that Spielberg had decided not to direct the film, which was instead directed by Clint Eastwood.
Spielberg directed 2015's Bridge of Spies, a Cold War thriller based on the 1960 U-2 incident, and focusing on James B. Donovan's negotiations with the Soviets for the release of pilot Gary Powers after his aircraft was shot down over Soviet territory. The film starred Tom Hanks as Donovan, as well as Mark Rylance, Amy Ryan, and Alan Alda, with a script by the Coen brothers. The film was shot from September to December 2014 on location in New York City, Berlin and Wroclaw, Poland (which doubled for East Berlin), and was released on October 16, 2015. Bridge of Spies received positive reviews from critics, and was nominated for six Academy Awards, including Best Picture; Rylance won the Academy Award for Best Supporting Actor, becoming the second actor to win for a performance directed by Spielberg.
Spielberg's The BFG is an adaptation of Roald Dahl's celebrated children's story, starring newcomer Ruby Barnhill, and Rylance as the titular Big Friendly Giant. DreamWorks bought the rights in 2010, originally intending John Madden to direct. The film was the last to be written by E.T. screenwriter Melissa Mathison before she died. It was co-produced and released by Walt Disney Pictures, marking the first Disney-branded film to be directed by Spielberg. The BFG premiered out of competition at the Cannes Film Festival http://deadline.com/2016/04/cannes-film-festival-2016-official-selection-lineup-full-list-1201736807/ on May 14, 2016 http://www.festival-cannes.com/en/press/programmation/?date=2016-05-14# and received a wide release in the US on July 1, 2016.
Production credits
Since the mid-1980s, Spielberg has increased his role as a film producer. He headed up the production team for several cartoons, including the Warner Bros. hits Tiny Toon Adventures, Animaniacs, Pinky and the Brain, Toonsylvania, and Freakazoid!, for which he collaborated with Jean MacCurdy and Tom Ruegger. Because of his work on these series, most of their official titles carry "Steven Spielberg presents", and he made numerous cameos on the shows. Spielberg also produced the Don Bluth animated features An American Tail and The Land Before Time, which were released by Universal Studios. He also served as one of the executive producers of Who Framed Roger Rabbit and its three related shorts (Tummy Trouble, Roller Coaster Rabbit, Trail Mix-Up), which were all released by Disney, under both the Walt Disney Pictures and the Touchstone Pictures banners. He was furthermore, for a short time, the executive producer of the long-running medical drama ER. In 1989, he brought the concept of The Dig to LucasArts. He contributed to the project from that time until 1995, when the game was released. He also collaborated with software publisher Knowledge Adventure on the multimedia game Steven Spielberg's Director's Chair, which was released in 1996. Spielberg appears, as himself, in the game to direct the player. The Spielberg name provided branding for a Lego Moviemaker kit, the proceeds of which went to the Starbright Foundation.
thumb|left|Spielberg speaking at the Pentagon on August 11, 1999 after receiving the Department of Defense Medal for Distinguished Public Service from Secretary of Defense William S. Cohen
In 1993, Spielberg acted as executive producer for the highly anticipated television series seaQuest DSV, a science fiction series set "in the near future" starring Roy Scheider (whom Spielberg had directed in Jaws) and Jonathan Brandis, which aired on Sundays at 8:00 pm on NBC. While the first season was moderately successful, the second season did less well. Spielberg's name no longer appeared in the third season, and the show was cancelled midway through it.
Spielberg served as an uncredited executive producer on The Haunting, The Prince of Egypt, Just Like Heaven, Shrek, Road to Perdition, and Evolution. He served as an executive producer for the 1997 film Men in Black, and its sequels, Men in Black II and Men in Black III. In 2005, he served as a producer of Memoirs of a Geisha, an adaptation of the novel by Arthur Golden, a film to which he was previously attached as director. In 2006, Spielberg co-executive produced with famed filmmaker Robert Zemeckis a CGI children's film called Monster House, marking their eighth collaboration since 1990's Back to the Future Part III. He also teamed with Clint Eastwood for the first time in their careers, co-producing Eastwood's Flags of Our Fathers and Letters from Iwo Jima with Robert Lorenz and Eastwood himself. He earned his twelfth Academy Award nomination for the latter film as it was nominated for Best Picture. Spielberg served as executive producer for Disturbia and the Transformers live action film with Brian Goldner, an employee of Hasbro. The film was directed by Michael Bay and written by Roberto Orci and Alex Kurtzman, and Spielberg continued to collaborate on the sequels, Transformers: Revenge of the Fallen and Transformers: Dark of the Moon. In 2011, he produced the J. J. Abrams science fiction thriller film Super 8 for Paramount Pictures.
Other major television series Spielberg produced were Band of Brothers, Taken and The Pacific. He was an executive producer on the critically acclaimed 2005 TV miniseries Into the West, which won two Emmy awards, including one for Geoff Zanelli's score. For his 2010 miniseries The Pacific he teamed up once again with co-producer Tom Hanks, with Gary Goetzman also co-producing. The 10-part war miniseries, believed to have cost $250 million, centers on the battles in the Pacific Theater during World War II. Writer Bruce McKenna, who penned several installments of Band of Brothers, was the head writer.
In 2007, Steven Spielberg and Mark Burnett co-produced On the Lot, a short-lived TV reality show about filmmaking. Despite the show's short run, Spielberg has not given up on working in television. He currently serves as one of the executive producers on United States of Tara, a show created by Academy Award winner Diablo Cody which the two developed together (Spielberg is uncredited as creator).
In 2011, Spielberg launched Falling Skies, a science fiction television series, on the TNT network. He developed the series with Robert Rodat and is credited as an executive producer. Spielberg is also producing the Fox TV series Terra Nova, which begins in the year 2149, when all life on Earth is threatened with extinction, prompting scientists to open a door that allows people to travel back 85 million years to prehistoric times. Spielberg also produced The River, Smash, Under the Dome, Extant and The Whispers, as well as a TV adaptation of Minority Report.Stephen King, Steven Spielberg Go 'Under the Dome'
In 2008, Spielberg and DreamWorks acquired the rights to produce a live-action film adaptation of the original Ghost in the Shell manga. Avi Arad and Steven Paul were later confirmed as producers, Rupert Sanders directed, and Scarlett Johansson stars in the lead role.
In March 2013, Spielberg announced that he was "developing a Stanley Kubrick screenplay for a miniseries, not for a motion picture, about the life of Napoleon."Steven Spielberg developing Stanley Kubrick, Hollywood Reporter In May 2016 it was announced that Cary Fukunaga is in talks to direct the miniseries for HBO, from a script by David Leland based on extensive research materials accumulated by Kubrick over many years.http://www.hollywoodreporter.com/live-feed/cary-fukunaga-talks-direct-hbo-895382
Acting credits
Spielberg had cameo roles in The Blues Brothers, Gremlins, Vanilla Sky, and Austin Powers in Goldmember, as well as small uncredited cameos in a handful of other films, such as a life-station worker in Jaws. He also made numerous cameo roles in the Warner Bros. cartoons he produced, such as Animaniacs, and even made reference to some of his films. Spielberg voiced himself in the film Paul, and in one episode of Tiny Toon Adventures titled Buster and Babs Go Hawaiian.
Involvement in video games
Apart from being an ardent gamer, Spielberg has had a long history of involvement in video games. He has been thanked in the credits of a number of games from his division DreamWorks Interactive, most notably Someone's in the Kitchen (with a script written by Animaniacs' Paul Rugg), Goosebumps: Escape from HorrorLand, The Neverhood (all in 1996), Skullmonkeys, Dilbert's Desktop Games, Goosebumps: Attack of the Mutant (all 1997), Boombots (1999), T'ai Fu: Wrath of the Tiger (1999), and Clive Barker's Undying (2001). In 2005, the director signed with Electronic Arts to collaborate on three games, including an action game and an award-winning puzzle game for the Wii called Boom Blox (and its 2009 sequel, Boom Blox Bash Party). Previously, he was involved in creating the scenario for the adventure game The Dig. In 1996, Spielberg worked on and shot original footage for a movie-making simulation game called Steven Spielberg's Director's Chair. He is the creator of the Medal of Honor series by Electronic Arts. He is credited in the special thanks section of the 1998 video game Trespasser. In 2013, Spielberg announced he was collaborating with 343 Industries on a live-action TV show of Halo.
Upcoming projects
Spielberg began filming an adaptation of the popular sci-fi novel Ready Player One by Ernest Cline in London in July 2016, starring Tye Sheridan, Olivia Cooke, Ben Mendelsohn, Simon Pegg and Mark Rylance. It was originally slated to be released on December 15, 2017 by Warner Bros., but was pushed back to March 30, 2018, to avoid competition with Star Wars Episode VIII.
After completing filming on Ready Player One, while it is in its lengthy, effects-heavy post-production, he will film his long-planned adaptation of David Kertzer's acclaimed The Kidnapping of Edgardo Mortara. The book follows the true story of a young Jewish boy in 1858 Italy who was secretly baptized by a family servant and then kidnapped from his family by the Papal States, where he was raised and trained as a priest, causing international outrage and becoming a media sensation. First announced in 2014, the book has been adapted by Tony Kushner and the film will again star Mark Rylance, in his fourth consecutive collaboration with Spielberg, in the role of Pope Pius IX. It will also star Oscar Isaac.http://theplaylist.net/oscar-isaac-joins-steven-spielbergs-kidnapping-edgardo-montara-mark-rylance-20160715/ It will be filmed in early 2017 for release at the end of that year, before Ready Player One is completed and released in 2018.
Spielberg will follow that with a fifth installment in the Indiana Jones series. The untitled film is set to star Harrison Ford and will be produced by Kathleen Kennedy and Frank Marshall. It is being written by David Koepp, who has written numerous other films for Spielberg, including the last Indiana Jones film. It will be released by Disney on July 19, 2019.
Spielberg is attached to direct Cortes, a historical epic written by Steven Zaillian about the Spanish conquest of the Aztec empire, and Cortes's relationship with Aztec ruler Montezuma. The script is based on an earlier one from 1965 by Oscar-winner Dalton Trumbo. The project at one time had Javier Bardem attached to play the lead role of explorer Hernán Cortés.
Spielberg is attached to direct an adaptation of American photojournalist Lynsey Addario's memoir It's What I Do. Jennifer Lawrence is attached to star in the lead role.
Projects on hold
Spielberg was scheduled to shoot a $200 million adaptation of Daniel H. Wilson's novel Robopocalypse, adapted for the screen by Drew Goddard. The film would follow a global human war against a robot uprising about 15–20 years in the future. Like Lincoln, it was to be released by Disney in the United States and Fox overseas. It was set for release on April 25, 2014, with Anne Hathaway and Chris Hemsworth set to star, but Spielberg postponed production indefinitely in January 2013, just before it had been set to begin.
In 2009, Spielberg reportedly tried to obtain the screen rights to make a film based on Microsoft's Halo series. In September 2008, Spielberg bought the film rights to John Wyndham's novel Chocky and is interested in directing it. He is also interested in making adaptations of A Steady Rain, Pirate Latitudes, and The 39 Clues, and a remake of When Worlds Collide.Spielberg to make pirates movie – Yahoo! Movies UK & Ireland
In May 2009, Steven Spielberg bought the rights to the life story of Martin Luther King, Jr. Spielberg will be involved not only as producer but also as a director."Steven Spielberg to direct Martin Luther King film" Daily Telegraph, May 19, 2009. Retrieved December 24, 2009. However, the purchase was made from the King estate, led by son Dexter, while the two other surviving children, the Reverend Bernice and Martin III, immediately threatened to sue, not having given their approval to the project."King's Children May Sue Over Planned Biographical Film" by Dave Itzkoff, The New York Times ArtsBeat blog, May 20, 2009. Retrieved December 24, 2009.
Spielberg has also considered directing a remake of West Side Story.
Themes
Spielberg's films often deal with several recurring themes. Most of his films deal with ordinary characters searching for or coming in contact with extraordinary beings or finding themselves in extraordinary circumstances. In an AFI interview in August 2000, Spielberg commented on his interest in the possibility of extraterrestrial life and how it has influenced some of his films. Spielberg described himself as feeling like an alien during childhood, and said his interest came from his father, a science fiction fan, and from his opinion that aliens would not travel light years for conquest, but rather out of curiosity and a desire to share knowledge.
A strong, consistent theme in his family-friendly work is a childlike sense of wonder and faith, as attested by works such as Close Encounters of the Third Kind, E.T. the Extra-Terrestrial, Hook, A.I. Artificial Intelligence and The BFG. According to Warren Buckland,Directed by Steven Spielberg: Poetics of the Contemporary Hollywood Blockbuster these themes are portrayed through the use of low-height camera tracking shots, which have become one of Spielberg's directing trademarks. In cases where his films include children (E.T. the Extra-Terrestrial, Empire of the Sun, Jurassic Park, etc.), this type of shot is more apparent, but it is also used in films like Munich, Saving Private Ryan, The Terminal, Minority Report, and Amistad. The shot recurs throughout his work; notably, the water scenes in Jaws are filmed from the low-angle perspective of someone swimming. Another child-oriented theme in Spielberg's films is that of loss of innocence and coming-of-age. In Empire of the Sun, Jim, a well-groomed and spoiled English youth, loses his innocence as he suffers through World War II-era China. Similarly, in Catch Me If You Can, Frank naively and foolishly believes that he can reclaim his shattered family if he accumulates enough money to support them.
The most persistent theme throughout his films is tension in parent-child relationships. Parents (often fathers) are reluctant, absent or ignorant. Peter Banning in Hook begins the film as a reluctant, married-to-his-work parent who, over the course of the story, regains the respect of his children. The notable absence of Elliott's father in E.T. is the most famous example of this theme. In Indiana Jones and the Last Crusade, it is revealed that Indy has always had a very strained relationship with his father, a professor of medieval literature who always seemed more interested in his work, specifically his studies of the Holy Grail, than in his own son. His father does not seem to realize or understand the negative effect his aloof nature had on Indy; he even believes he was a good father in the sense that he taught his son "self-reliance," which is not how Indy saw it. Even Oskar Schindler, from Schindler's List, is reluctant to have a child with his wife. Munich depicts Avner as a man away from his wife and newborn daughter. There are of course exceptions; Brody in Jaws is a committed family man, while John Anderton in Minority Report is a shattered man after the disappearance of his son. This theme is arguably the most autobiographical aspect of Spielberg's films, since Spielberg himself was affected by his parents' divorce as a child and by the absence of his father. In keeping with this theme, protagonists in his films often come from families with divorced parents, most notably E.T. the Extra-Terrestrial (protagonist Elliot's mother is divorced) and Catch Me If You Can (Frank Abagnale's mother and father split early on in the film). A less-noticed example is Tim in Jurassic Park (early in the film, a secondary character mentions Tim and Lex's parents' divorce). The divided family is often reconciled by the ending as well. Following this theme of reluctant fathers and father figures, Tim looks to Dr. Alan Grant as a father figure. Initially, Dr. Grant is reluctant to return those paternal feelings to Tim. However, by the end of the film, he has changed, and the kids even fall asleep with their heads on his shoulders.
Most of his films are generally optimistic in nature. Though some critics accuse his films of being a little overly sentimental, Spielberg feels this is fine as long as it is disguised. He remains a highly praised director and is credited as one of the most influential filmmakers of all time; his own influences include directors Frank Capra and John Ford.
Frequent collaborators
Actors
In terms of casting and production itself, Spielberg has a known penchant for working with actors and production members from his previous films. For instance, he has cast Richard Dreyfuss in several films: Jaws, Close Encounters of the Third Kind, and Always. Aside from his role as Indiana Jones, Spielberg also cast Harrison Ford as a headteacher in E.T. the Extra-Terrestrial (though the scene was ultimately cut). Although Spielberg directed veteran voice actor Frank Welker only once (in Raiders of the Lost Ark, for which he voiced many of the animals), Welker has lent his voice in a number of productions Spielberg has executive produced from Gremlins to its sequel Gremlins 2: The New Batch, as well as The Land Before Time, Who Framed Roger Rabbit, and television shows such as Tiny Toons, Animaniacs, and SeaQuest DSV. Spielberg has used Tom Hanks on several occasions and has cast him in Saving Private Ryan, Catch Me If You Can, The Terminal, and Bridge of Spies. Spielberg has collaborated with Tom Cruise twice on Minority Report and War of the Worlds, and cast Shia LaBeouf in five films: Transformers, Eagle Eye, Indiana Jones and the Kingdom of the Crystal Skull, Transformers: Revenge of the Fallen, and Transformers: Dark of the Moon. Spielberg cast Mark Rylance in Bridge of Spies and The BFG, as well as the upcoming Kidnapping of Edgardo Mortara and Ready Player One.
Other collaborations
Spielberg prefers working with production members with whom he has an established working relationship. An example is his relationship with Kathleen Kennedy, who has served as producer on all his major films from E.T. the Extra-Terrestrial to the recent Lincoln. Among his regular cinematographers, Allen Daviau, a childhood friend, shot the early Spielberg film Amblin and most of his films up to Empire of the Sun, while Janusz Kamiński has shot every Spielberg film since Schindler's List (see List of film director and cinematographer collaborations); the film editor Michael Kahn has edited every film directed by Spielberg from Close Encounters to Munich (except E.T. the Extra-Terrestrial). Most of the DVDs of Spielberg's films have documentaries by Laurent Bouzereau.
A famous example of Spielberg working with the same professionals is his long-time collaboration with John Williams and the use of his musical scores in all of his films since The Sugarland Express (except Bridge of Spies, The Color Purple and Twilight Zone: The Movie). One of Spielberg's trademarks is his use of music by Williams to add to the visual impact of his scenes and to create a lasting picture and sound of the film in the memories of the audience. These scenes often use images of the sun (e.g. Empire of the Sun, Saving Private Ryan, the final scene of Jurassic Park, and the end credits of Indiana Jones and the Last Crusade, where they ride into the sunset), the last two of which feature a Williams score over those closing scenes. Spielberg is a contemporary of filmmakers George Lucas, Francis Ford Coppola, Martin Scorsese, John Milius, and Brian De Palma, collectively known as the "Movie Brats". Aside from his principal role as a director, Spielberg has acted as a producer for a considerable number of films, including early hits for Joe Dante and Robert Zemeckis. Spielberg has rarely worked with the same screenwriter on more than one film, Tony Kushner and David Koepp being exceptions, each having written several of his films.
Personal life
Marriages and children
Spielberg first met actress Amy Irving in 1976 at the suggestion of director Brian De Palma, who knew he was looking for an actress to play in Close Encounters. After meeting her, Spielberg told his co-producer Julia Phillips, "I met a real heartbreaker last night." Although she was too young for the role, she and Spielberg began dating and she eventually moved in to what she described as his "bachelor funky" house. They lived together for four years, but the stresses of their professional careers took a toll on their relationship. Irving wanted to be certain that whatever success she attained as an actress would be her own: "I don't want to be known as Steven's girlfriend," she said, and chose not to be in any of his films during those years.
As a result, they broke up in 1979, but remained close friends. Then in 1984 they renewed their romance, and in November 1985 they married, having already had a son, Max Samuel. After three and a half years of marriage, however, many of the same competing stresses of their careers caused them to divorce in 1989. They agreed to maintain homes near each other so as to facilitate the shared custody and parenting of their son. Their divorce was recorded as the third most costly celebrity divorce in history.
Spielberg subsequently developed a relationship with actress Kate Capshaw, whom he met when he cast her in Indiana Jones and the Temple of Doom. They married on October 12, 1991. Capshaw is a convert to Judaism. They currently move among their four homes in Pacific Palisades, California; New York City; Quelle Farm, Georgica Pond in East Hampton, New York, on Long Island;"Billionaires on vacation: No. 80: Steven Spielberg" by Christina Valhouli, Forbes magazine, September 19, 2002. Retrieved December 24, 2009. and Naples, Florida.
There are seven children in the Spielberg-Capshaw family:
Jessica Capshaw (born August 9, 1976) – daughter from Kate Capshaw's previous marriage to Robert Capshaw
Max Samuel Spielberg (born June 13, 1985) – son from Spielberg's previous marriage to actress Amy Irving
Theo Spielberg (born August 21, 1988) – son adopted by Capshaw before her marriage to Spielberg, who later also adopted him
Sasha Rebecca Spielberg (born May 14, 1990, Los Angeles)
Sawyer Avery Spielberg (born March 10, 1992, Los Angeles)
Mikaela George (born February 28, 1996) – adopted with Kate Capshaw
Destry Allyn Spielberg (born December 1, 1996)
Religion
Spielberg grew up in a Jewish household, including having a bar mitzvah ceremony in Phoenix when he turned 13. He grew away from Judaism after his family moved to various cities during his high school years, where they became the only Jews in the neighborhood. Before those years, his family was involved in the synagogue and had many Jewish friends and nearby relatives.
He remembers his grandparents telling him about their life in Russia, where they were subjected to religious persecution, causing them to eventually flee to the United States. He was made aware of the Holocaust by his parents, who he says “talked about it all the time, and so it was always on my mind.” His father had lost between sixteen and twenty relatives during the Holocaust.
Spielberg "rediscovered the honor of being a Jew," he says, before he made Schindler's List, when he married Kate Capshaw.Pogrebin, Abigail. Stars of David, Broadway Books, NY, (2005) Until then, having become a filmmaker, he only felt his connection to Judaism when he visited his parents. He says he made the film partly to create “something that would confirm my Judaism to my family and myself.”Loshitzky, Yosefa. Spielberg's Holocaust: Critical Perspectives on “Schindler's List”, Indiana Univ. Press (1997) p. 162
He credits her with fueling his family's current level of observance and for keeping the “momentum flowing" in their lives, as they now observe Jewish holidays, light candles on Friday nights, and give their children Bar and Bat Mitzvahs. "This shiksa goddess has made me a better Jew than my own parents."
Producing Schindler's List in 1993 also renewed his faith, Spielberg says, but "it really was the fact that my wife took a profound interest in Judaism." He waited ten years after being given the story in 1982 to make the film, as he did not yet feel "mature" enough. He first wanted to have a family, "to figure out what my place was in the world. . . . When my first son, [Max] was born, it greatly affected me. . . . A spirit began to ignite in me, and I became a Jewish dad. . ."
He said that making the film became a “natural experience” for him, adding, "I had to tell the story. I've lived on its outer edges." The film, writes biographer Joseph McBride, thereby became the "culmination" of Spielberg's long personal struggle with his Jewish identity.<ref name=McBride2>McBride, Joseph. Steven Spielberg: A Biography (3rd edition)</ref> Some claim the film has made Spielberg "the one true heir to the great Jewish moguls who created Hollywood," most of whom had actively avoided depicting Jews or the Holocaust in their films.
Wealth
Forbes magazine places Spielberg's personal net worth at $3.7 billion.
Yachting
In 2015, Spielberg purchased the 282-foot mega-yacht Seven Seas for US$182 million. He has since put it up for sale and in the meantime has made it available for charter. At US$1.2 million per month, it is one of the most expensive charters on the market. He has ordered a new 300-foot yacht costing a reported US$250 million.http://www.businessinsider.com/this-184-million-yacht-isnt-enough-for-steven-spielberg-2015-8
Recognition
In 2002, Spielberg was one of eight flagbearers who carried the Olympic Flag into Rice-Eccles Stadium at the Opening Ceremonies of the 2002 Winter Olympic Games in Salt Lake City. In 2006, Premiere listed him as the most powerful and influential figure in the motion picture industry. Time listed him as one of the 100 Most Important People of the Century. At the end of the 20th century, Life named him the most influential person of his generation. In 2009, Boston University presented him an honorary Doctor of Humane Letters degree.
According to the Forbes Most Influential Celebrities 2014 list, Spielberg was the most influential celebrity in America. The annual list, conducted by E-Poll Market Research, scored more than 6,600 celebrities on 46 different personality attributes, the score representing "how that person is perceived as influencing the public, their peers, or both." Spielberg received a score of 47, meaning 47% of the US believes he is influential. Gerry Philpott, president of E-Poll Market Research, supported Spielberg's score by stating, "If anyone doubts that Steven Spielberg has greatly influenced the public, think about how many will think for a second before going into the water this summer."
Politics
Spielberg usually supports U.S. Democratic Party candidates. He has donated over $800,000 to the Democratic party and its nominees. He has been a close friend of former President Bill Clinton and worked with the President for the USA Millennium celebrations. He directed an 18-minute film for the project, scored by John Williams and entitled The American Journey. It was shown at America's Millennium Gala on December 31, 1999, in the National Mall at the Reflecting Pool at the base of the Lincoln Memorial in Washington, D.C.
thumb|right|Secretary of Defense William S. Cohen escorts Spielberg through a military honor cordon into the Pentagon. Spielberg resigned as a member of the national advisory board of the Boy Scouts of America in 2001 because of his disapproval of the organization's anti-homosexuality stance. In 2007 the Arab League voted to boycott Spielberg's movies after he donated $1 million for relief efforts in Israel during the 2006 Lebanon War."Spielberg movies banned by Arab League, WikiLeaks cable reveals." Haaretz, December 18, 2010. On February 20, 2007, Spielberg, Jeffrey Katzenberg, and David Geffen invited Democrats to a fundraiser for Barack Obama.Obama excites entertainment community By JOCELYN NOVECK, AP National Writer In February 2008, Spielberg pulled out of his role as advisor to the 2008 Summer Olympics in response to the Chinese government's inaction over the War in Darfur. Spielberg said in a statement that "I find that my conscience will not allow me to continue business as usual." The statement also said that "Sudan's government bears the bulk of the responsibility for these on-going crimes, but the international community, and particularly China, should be doing more." The International Olympic Committee respected Spielberg's decision, but IOC president Jacques Rogge admitted in an interview that "[Spielberg] certainly would have brought a lot to the opening ceremony in terms of creativity." Spielberg's statement drew criticism from Chinese officials and state-run media, which called his criticism "unfair". In September 2008, Spielberg and his wife offered their support for same-sex marriage by issuing a statement following their donation of $100,000 to the "No on Proposition 8" campaign fund, a figure equal to the amount of money Brad Pitt had donated to the same campaign less than a week prior.
Spielberg supported Hillary Clinton for President of the United States in the 2016 election. He donated US$1 million to Priorities USA, a pro-Clinton Super PAC.
Hobbies
A collector of film memorabilia, Spielberg purchased a balsa Rosebud sled from Citizen Kane (1941) in 1982. He bought Orson Welles's own directorial copy of the script for the radio broadcast The War of the Worlds (1938) in 1994.Sale 7565 / Lot 149, Orson Welles. Typescript radioplay The War of the Worlds. Christie's, June 2, 1994Millar, John, "Cruising for a Summer Hit; The Aliens Have Landed"; Sunday Mail (Scotland), June 26, 2005 Spielberg has purchased Academy Award statuettes being sold on the open market and donated them to the Academy of Motion Picture Arts and Sciences, to prevent their further commercial exploitation. His donations include the Oscars that Bette Davis received for Dangerous (1935) and Jezebel (1938), and Clark Gable's Oscar for It Happened One Night (1934).
Spielberg is a major collector of the work of American illustrator and painter Norman Rockwell. A collection of 57 Rockwell paintings and drawings owned by Spielberg and fellow Rockwell collector and film director George Lucas were displayed at the Smithsonian American Art Museum July 2, 2010 – January 2, 2011, in an exhibition titled Telling Stories.
Spielberg is an avid film buff, and, when not shooting a picture, he will watch many films in a single weekend. He sees almost every major summer blockbuster in theaters if not preoccupied and enjoys most of them.
Since playing Pong while filming Jaws in 1974, Spielberg has been an avid video gamer. He has played many of LucasArts' adventure games, including the first Monkey Island games.GameSpot, "Storytime with Ron Gilbert - PAX Australia 2013 Keynote", Ron Gilbert, 7 July 2013, accessed 21 March 2015DoubleFineProd, "Tim Schafer Plays Day of the Tentacle Part 1", Tim Schafer, 9 May 2014, accessed 22 March 2015 He owns a Wii, a PlayStation 3, a PSP and an Xbox 360, and enjoys playing first-person shooters such as the Medal of Honor series and Call of Duty 4: Modern Warfare. He has also criticized the use of cutscenes in games, calling them intrusive, and feels that making story flow naturally into the gameplay is a challenge for future game developers.
Stalking
In 2001, Spielberg was stalked by conspiracy theorist and former social worker Diana Napolis. She accused him, along with actress Jennifer Love Hewitt, of controlling her thoughts through "cybertronic" technology and being part of a satanic conspiracy against her. Napolis was committed to a mental institution before pleading guilty to stalking, and released on probation with a condition that she have no contact with either Spielberg or Hewitt.
Jonathan Norman was arrested after making two attempts to enter Spielberg's Pacific Palisades home in June and July 1997. Norman was jailed for 25 years in California. Spielberg told the court: "Had Jonathan Norman actually confronted me, I genuinely, in my heart of hearts, believe that I would have been raped or maimed or killed."
Awards and honors
thumb|right|Spielberg receiving a public service award presented by United States Secretary of Defense William Cohen, 1999
thumb|right|Steven Spielberg's star on the Hollywood Walk of Fame
thumb|Footprints and handprints of Steven Spielberg in front of the Grauman's Chinese Theatre
thumb|left|Former President Clinton with Spielberg as he accepts the 2009 Liberty Award in Philadelphia
Spielberg has won three Academy Awards. He has been nominated for seven Academy Awards for the category of Best Director, winning two of them (Schindler's List and Saving Private Ryan), and ten of the films he directed were up for the Best Picture Oscar (Schindler's List won). In 1987 he was awarded the Irving G. Thalberg Memorial Award for his work as a creative producer.
Drawing from his own experiences in Scouting, Spielberg helped the Boy Scouts of America develop a merit badge in cinematography in order to help promote filmmaking as a marketable skill. The badge was launched at the 1989 National Scout Jamboree, which Spielberg attended, and where he personally counseled many boys in their work on requirements.Boys' Life, September 1989
That same year, 1989, saw the release of Indiana Jones and the Last Crusade. The opening scene shows a teenage Indiana Jones in scout uniform bearing the rank of a Life Scout. Spielberg stated he made Indiana Jones a Boy Scout in honor of his experience in Scouting. For his career accomplishments, service to others, and dedication to a new merit badge, Spielberg was awarded the Distinguished Eagle Scout Award.
Steven Spielberg received the AFI Life Achievement Award in 1995.
In 1998 he was awarded the Federal Cross of Merit with Ribbon of the Federal Republic of Germany. The award was presented to him by President Roman Herzog in recognition of his film Schindler's List and his Shoah Foundation.
In 1999, Spielberg received an honorary degree from Brown University. Spielberg was also awarded the Department of Defense Medal for Distinguished Public Service by Secretary of Defense William Cohen at the Pentagon on August 11, 1999; Cohen presented the award in recognition of Spielberg's film Saving Private Ryan.
In 2001, he was honored as an honorary Knight Commander of the Order of the British Empire (KBE) by Queen Elizabeth II.
In 2004 he was admitted as knight of the Légion d'honneur by president Jacques Chirac. On July 15, 2006, Spielberg was also awarded the Gold Hugo Lifetime Achievement Award at the Summer Gala of the Chicago International Film Festival, and also was awarded a Kennedy Center honour on December 3. The tribute to Spielberg featured a short, filmed biography narrated by Tom Hanks and included thank-yous from World War II veterans for Saving Private Ryan, as well as a performance of the finale to Leonard Bernstein's Candide, conducted by John Williams (Spielberg's frequent composer).
The Science Fiction Hall of Fame inducted Spielberg in 2005, the first year it considered non-literary contributors. Press release March 24, 2005. Science Fiction Museum (sfhomeworld.org). Archived 2005-03-26. Retrieved 2013-03-22."Science Fiction and Fantasy Hall of Fame" . Mid American Science Fiction and Fantasy Conventions, Inc. Retrieved 2013-04-07. This was the official website of the hall of fame to 2004. In November 2007, he was chosen for a Lifetime Achievement Award to be presented at the sixth annual Visual Effects Society Awards in February 2009. He was set to be honored with the Cecil B. DeMille Award at the January 2008 Golden Globes; however, because of the watered-down format of the ceremony resulting from the 2007–08 writers' strike, the HFPA postponed his honor to the 2009 ceremony. In 2008, Spielberg was awarded the Légion d'honneur.
In June 2008, Spielberg received Arizona State University's Hugh Downs Award for Communication Excellence.Spielberg Receives Arizona State University Communication Award Newswise. Retrieved June 22, 2008.
Spielberg received an honorary degree at Boston University's 136th Annual Commencement on May 17, 2009. In October 2009 Steven Spielberg received the Philadelphia Liberty Medal; presenting him with the medal was former US president and Liberty Medal recipient Bill Clinton. Special guests included Whoopi Goldberg, Pennsylvania Governor Ed Rendell and Philadelphia Mayor Michael Nutter.
On October 22, 2011 he was admitted as a Commander of the Belgian Order of the Crown. He was given the badge on a red neck ribbon by the Belgian Federal Minister of Finance Didier Reynders. The Commander is the third highest rank of the Order of the Crown. He was the president of the jury for the 2013 Cannes Film Festival.
On November 19, 2013, Spielberg was honored by the National Archives and Records Administration with its Records of Achievement Award. Spielberg was given two facsimiles of the 13th Amendment to the United States Constitution, one passed but not ratified in 1861, as well as a facsimile of the actual 1865 amendment signed into law by President Abraham Lincoln. The amendment and the process of passing it were the subject of his film Lincoln.
In November 2015, it was announced that Spielberg would be awarded the Presidential Medal of Freedom from President Barack Obama in a ceremony at the White House.
On May 26, 2016, Spielberg was awarded an Honorary Doctor of Arts by Harvard University.
In July 2016, Spielberg was awarded a gold Blue Peter badge on the BBC children's television programme Blue Peter.Burns, Catherine (22 July 2016) Steven Spielberg has just won at life. He's got a Gold Blue Peter badge BBC Newsbeat website
Filmography
Praise and criticism
In 2005, Steven Spielberg was rated the greatest film director of all time by Empire magazine. In 1997 a Wall Street sell-side analyst said, "There are only two brand names in the business: Disney and Spielberg".
After watching the unconventional, off-center camera techniques of Jaws, Alfred Hitchcock praised "young Spielberg," for thinking outside of the visual dynamics of the theater, saying "He's the first one of us who doesn't see the proscenium arch".
Some of Spielberg's most notable admirers include Robert Aldrich, Ingmar Bergman, Werner Herzog, Stanley Kubrick, David Lean, Sidney Lumet, Roman Polanski, Martin Scorsese, François Truffaut, David Lynch and Zhang Yimou.
Spielberg's movies have also influenced many directors that followed, including Adam Green, J. J. Abrams,Five Favorite Films with J.J. Abrams. Rotten Tomatoes. Retrieved June 21, 2011. Paul Thomas Anderson, Neill Blomkamp, James Cameron, Guillermo del Toro,http://www.deltorofilms.com/GDT_Favorites.php Roland Emmerich, David Fincher, Peter Jackson, Kal Ng, Robert Rodriguez, John Sayles, Ridley Scott, John Singleton, Kevin Smith, Steven Soderbergh, Quentin Tarantino, and Gareth Edwards. In 2016, Jeffrey Katzenberg said of Spielberg: "You can take James Cameron, Chris Nolan or Martin Scorsese - all brilliant and in many ways his peers, but look at quality and consistency, and no one compares."
British film critic Tom Shone has said of Spielberg, "If you have to point to any one director of the last twenty-five years in whose work the medium of film was most fully itself – where we found out what it does best when left to its own devices, it has to be that guy."Shone, Tom. Blockbuster: how Hollywood learned to stop worrying and love the summer. p. 80. Simon and Schuster, 2004. ISBN 0-7432-3568-1. Jess Cagle, the managing editor of Entertainment Weekly, called Spielberg "...arguably (well, who would argue?) the greatest filmmaker in history.""Spielberg and You" – Entertainment Weekly. Pg. 6. 12/9/11.
Spielberg's critics complain that his films are overly sentimental and tritely moralistic. In his book Easy Riders, Raging Bulls: How the Sex 'n' Drugs 'n' Rock 'n' Roll Generation Saved Hollywood, Peter Biskind summarized the views of Spielberg's detractors, accusing the director of "infantilizing the audience, reconstituting the spectator as child, then overwhelming him and her with sound and spectacle, obliterating irony, aesthetic self-consciousness, and critical reflection."Easy Riders, Raging Bulls: How the Sex 'n' Drugs 'n' Rock 'n' Roll Generation Saved Hollywood by Peter Biskind, Bloomsbury, London, 1999, pp. 343–344.
Critics of mainstream film such as Ray Carney and American artist and actor Crispin Glover (who starred in the Spielberg-produced Back to the Future and who sued Spielberg for using his likeness in Back to the Future Part II) claim that Spielberg's films lack depth and do not take risks.
French New Wave filmmaker Jean-Luc Godard stated that he holds Spielberg partly responsible for the lack of artistic merit in mainstream cinema and accused Spielberg of using his film Schindler's List to make a profit off tragedy while Schindler's wife, Emilie Schindler, lived in poverty in Argentina. In defense of Spielberg, critic Roger Ebert said "Has Godard or any other director living or dead done more than Spielberg, with his Holocaust Project, to honor and preserve the memories of the survivors?" Author Thomas Keneally has also disputed claims that Emilie Schindler was never paid for her contributions to the film, "not least because I had recently sent Emilie a check myself."
Film critic Pauline Kael, who had championed Spielberg's films in the 1970s, expressed disappointment in his later development, stating that "he's become, I think, a very bad director.... And I'm a little ashamed for him, because I loved his early work.... [H]e turned to virtuous movies. And he's become so uninteresting now.... I think that he had it in him to become more of a fluid, far-out director. But, instead, he's become a melodramatist."Afterglow: A Last Conversation with Pauline Kael by Francis Davis, Da Capo Press, 2003, p. 50.
Imre Kertész, Hungarian Jewish author, Nazi concentration camp survivor, and winner of the Nobel Prize in Literature, criticized Spielberg's depiction of the Holocaust in Schindler's List as kitsch, saying "I regard as kitsch any representation of the Holocaust that is incapable of understanding or unwilling to understand the organic connection between our own deformed mode of life and the very possibility of the Holocaust." Veteran documentary filmmaker and professor Claude Lanzmann also labeled Schindler's List "pernicious in its impact and influence" and "very sentimental".The New York Times, December 6, 2010 Maker of 'Shoah' Stresses Its Lasting Value
Stephen Rowley wrote an extensive essay about Spielberg and his career in Senses of Cinema. In it he discussed Spielberg's strengths as a filmmaker, saying "there is a welcome complexity of tone and approach in these later films that defies the lazy stereotypes often bandied about his films" and that "Spielberg continues to take risks, with his body of work continuing to grow more impressive and ambitious", concluding that he has only received "limited, begrudging recognition" from critics.
Shia LaBeouf, who worked with Spielberg on a number of films including Indiana Jones and the Kingdom of the Crystal Skull and various DreamWorks productions (notably the Transformers film series), described his experiences working with the director in a wide-ranging interview with Variety in 2016. He stated, "I grew up with this idea, [that] if you got to Spielberg, that’s where it is – I’m not talking about fame, and I’m not talking about money. You get there, and you realize you’re not meeting the Spielberg you dream of. You’re meeting a different Spielberg, who is in a different stage in his career. He’s less a director than he is a fucking company." He went on to discuss his on-set actor/director relationship with Spielberg, as well as the films they made together: "Spielberg’s sets are very different – everything has been so meticulously planned. You got to get this line out in 37 seconds. You do that for five years, you start to feel like not knowing what you’re doing for a living." He concluded his point by stating: "I don’t like the movies that I made with Spielberg. The only movie that I liked that we made together was [the first] Transformers [film]." Later in the interview, LaBeouf recounted and criticised the advice given to him by Spielberg following the mixed reaction to both Kingdom of the Crystal Skull and LaBeouf's performance in the film. He claims Spielberg told him not to read about himself in the media, but LaBeouf felt irritated by what he perceived to be non-advice and a lack of understanding, saying "There’s no way to not do that. For me to not read that means I need to not take part in society. The generation previous to mine didn’t have the immediate response [of the internet]. If you were Mark Hamill [in Star Wars], you could lie to yourself. You could find the pockets of joy, and turn a blind eye to the shit over there."
Other
In 1999, Spielberg, then a co-owner of DreamWorks, was involved in a heated debate in which the studio proposed building on wetlands near Los Angeles, though development was later dropped for economic reasons.
In August 2007, Ai Weiwei, artistic consultant for the Beijing Olympic Stadium, known as the "Bird's Nest", accused those choreographing the Olympic opening ceremony, including Spielberg, of failing to live up to their responsibility as artists by allowing their work to be used by the Chinese government, which has suppressed human rights in China, including those of Ai's family, for the purpose of "propaganda". Ai said, "It's disgusting. I don't like anyone who shamelessly abuses their profession, who makes no moral judgment."
See also
References
Further reading
External links
Category:Steven Spielberg
Category:1946 births
Category:Living people
Category:20th-century American businesspeople
Category:20th-century American male actors
Category:21st-century American businesspeople
Category:21st-century American male actors
Category:21st-century American writers
Category:Akira Kurosawa Award winners
Category:American billionaires
Category:American film directors
Category:American film editors
Category:American film producers
Category:American film studio executives
Category:American humanitarians
Category:American male film actors
Category:American male screenwriters
Category:American television directors
Category:American television producers
Category:American people of Ukrainian-Jewish descent
Category:Animaniacs
Category:BAFTA fellows
Category:Best Directing Academy Award winners
Category:Best Director BAFTA Award winners
Category:Best Director Empire Award winners
Category:Best Director Golden Globe winners
Category:Businesspeople from Arizona
Category:Businesspeople from Los Angeles
Category:Businesspeople from New Jersey
Category:Businesspeople from New York
Category:Businesspeople from Ohio
Category:California Democrats
Category:California State University, Long Beach alumni
Category:Cecil B. DeMille Award Golden Globe winners
Category:Chevaliers of the Légion d'honneur
Category:Commanders of the Order of the Crown (Belgium)
Category:David di Donatello Career Award winners
Category:David di Donatello winners
Category:Daytime Emmy Award winners
Category:Directors Guild of America Award winners
Category:Distinguished Eagle Scouts
Category:Eagle Scouts
Category:English-language film directors
Category:Fantasy film directors
Category:Film directors from California
Category:Film directors from New York
Category:Film directors from Ohio
Category:Film theorists
Category:Golden Globe Award-winning producers
Category:Honorary Knights Commander of the Order of the British Empire
Category:International Emmy Founders Award winners
Category:Jewish American art collectors
Category:Jewish American philanthropists
Category:Jewish American writers
Category:Jews and Judaism in Cincinnati
Category:Kennedy Center honorees
Category:Knight Commanders of the Order of Merit of the Federal Republic of Germany
Category:Male actors from Arizona
Category:Male actors from Cincinnati
Category:Male actors from Los Angeles
Category:Male actors from New Jersey
Category:Male actors from New York
Category:National Humanities Medal recipients
Category:People from Cincinnati
Category:People from East Hampton (town), New York
Category:People from Haddon Township, New Jersey
Category:Philanthropists from California
Category:Producers who won the Best Picture Academy Award
Category:Recipients of the Irving G. Thalberg Memorial Award
Category:Science fiction fans
Category:Science fiction film directors
Category:Science Fiction Hall of Fame inductees
Category:Special effects people
Category:Television producers from California
Category:Television producers from New York
Category:Writers from Cincinnati
Category:Writers from Los Angeles
Category:Writers from New Jersey
Category:Writers from New York
Category:Writers from Scottsdale, Arizona
Category:Presidential Medal of Freedom recipients
Category:People from Saratoga, California
Animal | Animals are multicellular, eukaryotic organisms of the kingdom Animalia (also called Metazoa). The animal kingdom emerged as a basal clade within Apoikozoa as a sister of the choanoflagellates. Sponges are the most basal clade of animals. Animals are motile, meaning they can move spontaneously and independently at some point in their lives. Their body plan eventually becomes fixed as they develop, although some undergo a process of metamorphosis later on in their lives. All animals are heterotrophs: they must ingest other organisms or their products for sustenance.
Most known animal phyla appeared in the fossil record as marine species during the Cambrian explosion, about 542 million years ago. Animals can be divided broadly into vertebrates and invertebrates. Vertebrates have a backbone or spine (vertebral column), and amount to less than five percent of all described animal species. They include fish, amphibians, reptiles, birds and mammals. The remaining animals are the invertebrates, which lack a backbone. These include molluscs (clams, oysters, octopuses, squid, snails); arthropods (millipedes, centipedes, insects, spiders, scorpions, crabs, lobsters, shrimp); annelids (earthworms, leeches), nematodes (filarial worms, hookworms), flatworms (tapeworms, liver flukes), cnidarians (jellyfish, sea anemones, corals), ctenophores (comb jellies), and sponges. The study of animals is called zoology.
Etymology
The word "animal" comes from the Latin , meaning having breath, having soul or living being. In everyday non-scientific usage the word excludes humans – that is, "animal" is often used to refer only to non-human members of the kingdom Animalia; often, only closer relatives of humans such as mammals and other vertebrates, are meant. The biological definition of the word refers to all members of the kingdom Animalia, encompassing creatures as diverse as sponges, jellyfish, insects, and humans.
History of classification
thumb|left|alt=oil painting of wigged scholar in suit and waistcoat|Carl Linnaeus, known as the father of modern taxonomy
Aristotle divided the living world between animals and plants, and this was followed by Carl Linnaeus, in the first hierarchical classification. In Linnaeus's original scheme, the animals were one of three kingdoms, divided into the classes of Vermes, Insecta, Pisces, Amphibia, Aves, and Mammalia. Since then the last four have all been subsumed into a single phylum, the Chordata, whereas the various other forms have been separated out.
In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms: Metazoa (multicellular animals) and Protozoa (single-celled animals). The protozoa were later moved to the kingdom Protista, leaving only the metazoa. Thus Metazoa is now considered a synonym of Animalia.
Characteristics
Animals have several characteristics that set them apart from other living things. Animals are eukaryotic and multicellular, which separates them from bacteria and most protists. They are heterotrophic, generally digesting food in an internal chamber, which separates them from plants and algae. They are also distinguished from plants, algae, and fungi by lacking rigid cell walls. All animals are motile, if only at certain life stages. In most animals, embryos pass through a blastula stage, which is a characteristic exclusive to animals.
Structure
With a few exceptions, most notably the sponges (Phylum Porifera) and Placozoa, animals have bodies differentiated into separate tissues. These include muscles, which are able to contract and control locomotion, and nerve tissues, which send and process signals. Typically, there is also an internal digestive chamber, with one or two openings. Animals with this sort of organization are called metazoans, or eumetazoans when the former is used for animals in general.
All animals have eukaryotic cells, surrounded by a characteristic extracellular matrix composed of collagen and elastic glycoproteins. This may be calcified to form structures like shells, bones, and spicules. During development, it forms a relatively flexible framework upon which cells can move about and be reorganized, making complex structures possible. In contrast, other multicellular organisms, like plants and fungi, have cells held in place by cell walls, and so develop by progressive growth. Also, unique to animal cells are the following intercellular junctions: tight junctions, gap junctions, and desmosomes.
Reproduction and development
thumb|left|240px|alt=microscopic view of dart with point| Some species of land snails use love darts as a form of sexual selection
Nearly all animals undergo some form of sexual reproduction. They produce haploid gametes by meiosis (see Origin and function of meiosis). The smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova. These fuse to form zygotes, which develop into new individuals (see Allogamy).
Many animals are also capable of asexual reproduction. This may take place through parthenogenesis (where fertile eggs are produced without mating), budding, or fragmentation.
A zygote initially develops into a hollow sphere, called a blastula, which undergoes rearrangement and differentiation. In sponges, blastula larvae swim to a new location and develop into a new sponge. In most other groups, the blastula undergoes more complicated rearrangement. It first invaginates to form a gastrula with a digestive chamber, and two separate germ layers—an external ectoderm and an internal endoderm. In most cases, a mesoderm also develops between them. These germ layers then differentiate to form tissues and organs.
Inbreeding avoidance
thumb|In Gombe Stream National Park, male chimpanzees remain in their natal community while females disperse to other groups
During sexual reproduction, mating with a close relative (inbreeding) generally leads to inbreeding depression. For instance, inbreeding was found to increase juvenile mortality in 11 small animal species. Inbreeding depression is considered to be largely due to expression of deleterious recessive mutations. Mating with unrelated or distantly related members of the same species is generally thought to provide the advantage of masking deleterious recessive mutations in progeny. (see Heterosis). Animals have evolved numerous diverse mechanisms for avoiding close inbreeding and promoting outcrossing (see Inbreeding avoidance).
As indicated in the image of chimpanzees, they have adopted dispersal as a way to separate close relatives and prevent inbreeding. Their dispersal route is known as natal dispersal, whereby individuals move away from the area of birth.
thumb|left|DNA analysis has shown that 60% of offspring in splendid fairywrens nests were sired through extra-pair copulations, rather than from resident males.
In various species, such as the splendid fairywren, females benefit by mating with multiple males, thus producing more offspring of higher genetic quality. Females that are pair bonded to a male of poor genetic quality, as is the case in inbreeding, are more likely to engage in extra-pair copulations in order to improve their reproductive success and the survivability of their offspring.Petrie, M. & Kempenaers, B. 1998, "Extra-pair paternity in birds: Explaining variation between species and populations", Trends in Ecology and Evolution, vol. 13, no. 2, pp. 52-57.
Food and energy sourcing
thumb|left|alt=multi-color stain of cell showing mitosis|A newt lung cell stained with fluorescent dyes undergoing the early anaphase stage of mitosis
All animals are heterotrophs, meaning that they feed directly or indirectly on other living things. They are often further subdivided into groups such as carnivores, herbivores, omnivores, and parasites.
Predation is a biological interaction where a predator (a heterotroph that is hunting) feeds on its prey (the organism that is attacked).Begon, M., Townsend, C., Harper, J. (1996). Ecology: Individuals, populations and communities (Third edition). Blackwell Science, London. ISBN 0-86542-845-X, ISBN 0-632-03801-2, ISBN 0-632-04393-8. Predators may or may not kill their prey prior to feeding on them, but the act of predation almost always results in the death of the prey.predation. Britannica.com. Retrieved on 2011-11-23. The other main category of consumption is detritivory, the consumption of dead organic matter. It can at times be difficult to separate the two feeding behaviours, for example, where parasitic species prey on a host organism and then lay their eggs on it for their offspring to feed on its decaying corpse. The selective pressures they impose on one another have led to an evolutionary arms race between prey and predator, resulting in various antipredator adaptations.
Most animals indirectly use the energy of sunlight by eating plants or plant-eating animals. Most plants use light to convert inorganic molecules in their environment into carbohydrates, fats, proteins and other biomolecules, characteristically containing reduced carbon in the form of carbon-hydrogen bonds. Starting with carbon dioxide (CO2) and water (H2O), photosynthesis converts the energy of sunlight into chemical energy in the form of simple sugars (e.g., glucose), with the release of molecular oxygen. These sugars are then used as the building blocks for plant growth, including the production of other biomolecules. When an animal eats plants (or eats other animals which have eaten plants), the reduced carbon compounds in the food become a source of energy and building materials for the animal. They are either used directly to help the animal grow, or broken down, releasing stored solar energy, and giving the animal the energy required for motion.
Animals living close to hydrothermal vents and cold seeps on the ocean floor are not dependent on the energy of sunlight. Instead chemosynthetic archaea and bacteria form the base of the food chain.
Origin and fossil record
thumb|alt=pre-historic fish with bony skull|Dunkleosteus was a prehistoric fish.
Animals are generally considered to have emerged within flagellated eukaryota. Their closest known living relatives are the choanoflagellates, collared flagellates that have a morphology similar to the choanocytes of certain sponges. Molecular studies place animals in a supergroup called the opisthokonts, which also include the choanoflagellates, fungi and a few small parasitic protists. The name comes from the posterior location of the flagellum in motile cells, such as most animal spermatozoa, whereas other eukaryotes tend to have anterior flagella.
The first fossils that might represent animals appear in the Trezona Formation at Trezona Bore, West Central Flinders, South Australia. These fossils are interpreted as being early sponges. They were found in 665-million-year-old rock.
The next oldest possible animal fossils are found towards the end of the Precambrian, around 610 million years ago, and are known as the Ediacaran or Vendian biota. These are difficult to relate to later fossils, however. Some may represent precursors of modern phyla, but they may be separate groups, and it is possible they are not really animals at all.
Aside from them, most known animal phyla make a more or less simultaneous appearance during the Cambrian period, about 542 million years ago. It is still disputed whether this event, called the Cambrian explosion, is due to a rapid divergence between different groups or due to a change in conditions that made fossilization possible.
Some palaeontologists suggest that animals appeared much earlier than the Cambrian explosion, possibly as early as 1 billion years ago. Trace fossils such as tracks and burrows found in the Tonian period indicate the presence of triploblastic worm-like metazoans, roughly as large (about 5 mm wide) and complex as earthworms. During the beginning of the Tonian period around 1 billion years ago, there was a decrease in stromatolite diversity, which may indicate the appearance of grazing animals, since stromatolite diversity increased when grazing animals became extinct at the End Permian and End Ordovician extinction events, and decreased shortly after the grazer populations recovered. However, the discovery that tracks very similar to these early trace fossils are produced today by the giant single-celled protist Gromia sphaerica casts doubt on their interpretation as evidence of early animal evolution.
Groups of animals
Traditional morphological and modern molecular phylogenetic analysis have both recognized a major evolutionary transition from "non-bilaterian" animals, which are those lacking a bilaterally symmetric body plan (Porifera, Ctenophora, Cnidaria and Placozoa), to "bilaterian" animals (Bilateria) whose body plans display bilateral symmetry. The latter are further classified based on a major division between Deuterostomes and Protostomes. The relationships among non-bilaterian animals are disputed, but all bilaterian animals are thought to form a monophyletic group. Current understanding of the relationships among the major groups of animals is summarized by the following cladogram:
Non-bilaterian animals: Porifera, Placozoa, Ctenophora, Cnidaria
Several animal phyla are recognized for their lack of bilateral symmetry, and are thought to have diverged from other animals early in evolution. Among these, the sponges (Porifera) were long thought to have diverged first, representing the oldest animal phylum. They lack the complex organization found in most other phyla. Their cells are differentiated, but in most cases not organized into distinct tissues. Sponges typically feed by drawing in water through pores. However, a series of phylogenomic studies from 2008-2015 have found support for Ctenophora, or comb jellies, as the basal lineage of animals. This result has been controversial, since it would imply that sponges may not be so primitive, but may instead be secondarily simplified. Other researchers have argued that the placement of Ctenophora as the earliest-diverging animal phylum is a statistical anomaly caused by the high rate of evolution in ctenophore genomes.
Among the other phyla, the Ctenophora and the Cnidaria, which includes sea anemones, corals, and jellyfish, are radially symmetric and have digestive chambers with a single opening, which serves as both the mouth and the anus. Both have distinct tissues, but they are not organized into organs. There are only two main germ layers, the ectoderm and endoderm, with only scattered cells between them. As such, these animals are sometimes called diploblastic. The tiny placozoans are similar, but they do not have a permanent digestive chamber.
The Myxozoa, microscopic parasites that were originally considered Protozoa, are now believed to have evolved within Cnidaria.
thumb|left|upright|alt=Orange elephant ear sponge under water with sea fan in background|Orange elephant ear sponge, Agelas clathrodes, in foreground. Two corals in the background: a sea fan, Iciligorgia schrammi, and a sea rod, Plexaurella nutans.
Bilaterian animals
The remaining animals form a monophyletic group called the Bilateria. For the most part, they are bilaterally symmetric, and often have a specialized head with feeding and sensory organs. The body is triploblastic, i.e. all three germ layers are well-developed, and tissues form distinct organs. The digestive chamber has two openings, a mouth and an anus, and there is also an internal body cavity called a coelom or pseudocoelom. There are exceptions to each of these characteristics, however—for instance adult echinoderms are radially symmetric, and certain parasitic worms have extremely simplified body structures.
Genetic studies have considerably changed our understanding of the relationships within the Bilateria. Most appear to belong to two major lineages: the deuterostomes and the protostomes, the latter of which includes the Ecdysozoa, and Lophotrochozoa. In addition, there are a few small groups of bilaterians with relatively similar structure whose relationships with other animals are not well-established. These include the Acoelomorpha, Rhombozoa, and Orthonectida.
Deuterostomes and Protostomes
thumb|alt=blue and gray wren on branch|Superb fairy-wren, Malurus cyaneus
Deuterostomes differ from protostomes in several ways. Animals from both groups possess a complete digestive tract. However, in protostomes, the first opening of the gut to appear in embryological development (the archenteron) develops into the mouth, with the anus forming secondarily. In deuterostomes the anus forms first, with the mouth developing secondarily. In most protostomes, cells simply fill in the interior of the gastrula to form the mesoderm, called schizocoelous development, but in deuterostomes, it forms through invagination of the endoderm, called enterocoelic pouching. Deuterostome embryos undergo radial cleavage during cell division, while protostomes undergo spiral cleavage.
All this suggests the deuterostomes and protostomes are separate, monophyletic lineages. The main phyla of deuterostomes are the Echinodermata and Chordata. The former are radially symmetric and exclusively marine, such as starfish, sea urchins, and sea cucumbers. The latter are dominated by the vertebrates, animals with backbones. These include fish, amphibians, reptiles, birds, and mammals.
In addition to these, the deuterostomes also include the Hemichordata, or acorn worms, which are thought to be closely related to Echinodermata forming a group known as Ambulacraria. Although they are not especially prominent today, the important fossil graptolites may belong to this group.
Ecdysozoa
thumb|alt=multi-colored dragonfly on branch facing left|Yellow-winged darter, Sympetrum flaveolum
The Ecdysozoa are protostomes, named after the common trait of growth by moulting or ecdysis. The largest animal phylum belongs here, the Arthropoda, including insects, spiders, crabs, and their kin. All these organisms have a body divided into repeating segments, typically with paired appendages. Two smaller phyla, the Onychophora and Tardigrada, are close relatives of the arthropods and share these traits. The ecdysozoans also include the Nematoda or roundworms, perhaps the second largest animal phylum. Roundworms are typically microscopic, and occur in nearly every environment where there is water. A number are important parasites. Smaller phyla related to them are the Nematomorpha or horsehair worms, and the Kinorhyncha, Priapulida, and Loricifera. These groups have a reduced coelom, called a pseudocoelom.
thumb|alt=snail in shell facing right|Roman snail, Helix pomatia
Lophotrochozoa
The Lophotrochozoa, which evolved within Protostomia, include two of the most successful animal phyla, the Mollusca and Annelida. The former, which is the second-largest animal phylum by number of described species, includes animals such as snails, clams, and squids, and the latter comprises the segmented worms, such as earthworms and leeches. These two groups have long been considered close relatives because of the common presence of trochophore larvae, but the annelids were considered closer to the arthropods because they are both segmented. Now, this is generally considered convergent evolution, owing to many morphological and genetic differences between the two phyla. The Lophotrochozoa also include the Nemertea or ribbon worms, the Sipuncula, and several phyla that have a ring of ciliated tentacles around the mouth, called a lophophore. These were traditionally grouped together as the lophophorates, but it now appears that the lophophorate group may be paraphyletic, with some closer to the nemerteans and some to the molluscs and annelids. They include the Brachiopoda or lamp shells, which are prominent in the fossil record, the Entoprocta, the Phoronida, and possibly the Bryozoa or moss animals.
The Platyzoa include the phylum Platyhelminthes, the flatworms. These were originally considered some of the most primitive Bilateria, but it now appears they developed from more complex ancestors. A number of parasites are included in this group, such as the flukes and tapeworms. Flatworms are acoelomates, lacking a body cavity, as are their closest relatives, the microscopic Gastrotricha. The other platyzoan phyla are mostly microscopic and pseudocoelomate. The most prominent are the Rotifera or rotifers, which are common in aqueous environments. They also include the Acanthocephala or spiny-headed worms, the Gnathostomulida, Micrognathozoa, and possibly the Cycliophora. These groups share the presence of complex jaws, from which they are called the Gnathifera.
The Chaetognatha or arrow worms have been traditionally classified as deuterostomes, though recent molecular studies have identified this group as a basal protostome lineage.
Number of extant species
Animals can be divided into two broad groups: vertebrates (animals with a backbone) and invertebrates (animals without a backbone). Half of all described vertebrate species are fishes and three-quarters of all described invertebrate species are insects. The following table lists the number of described extant species for each major animal subgroup as estimated for the IUCN Red List of Threatened Species, 2014.3.The World Conservation Union. 2014. IUCN Red List of Threatened Species, 2014.3. Summary Statistics for Globally Threatened Species. Table 1: Numbers of threatened species by major groups of organisms (1996–2014).
thumb|301px|right|alt=pie chart showing arthropoda with 90 percent of phylum|The relative number of species contributed to the total by each phylum of animals
Group          Subgroup          Estimated number of described species
Vertebrates    Fishes            32,900
               Amphibians        7,302
               Reptiles          10,038
               Birds             10,425
               Mammals           5,513
Invertebrates  Insects           1,000,000
               Molluscs          85,000
               Crustaceans       47,000
               Corals            2,000
               Arachnids         102,248
               Velvet worms      165
               Horseshoe crabs   4
               Others            68,658
Over 95% of the described animal species in the world are invertebrates.
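As a rough cross-check of that figure, the counts in the table above can simply be summed; the short Python sketch below does this. It is only an illustrative calculation using the numbers already listed, and it assumes the "Others" line is counted with the invertebrates, as the table's layout implies; the variable names are mine, not part of the IUCN data.

    # Cross-check: share of described animal species that are invertebrates,
    # using the IUCN-derived counts from the table above.
    vertebrates = {
        "Fishes": 32900, "Amphibians": 7302, "Reptiles": 10038,
        "Birds": 10425, "Mammals": 5513,
    }
    invertebrates = {
        "Insects": 1000000, "Molluscs": 85000, "Crustaceans": 47000,
        "Corals": 2000, "Arachnids": 102248, "Velvet worms": 165,
        "Horseshoe crabs": 4, "Others": 68658,  # "Others" assumed to be invertebrate phyla
    }
    v = sum(vertebrates.values())
    i = sum(invertebrates.values())
    print(f"Vertebrates: {v:,}  Invertebrates: {i:,}")
    print(f"Invertebrate share: {100 * i / (v + i):.1f}%")  # about 95.2%

Run as an ordinary Python 3 script, this yields an invertebrate share of roughly 95%, consistent with the statement above.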
Model organisms
Because of the great diversity found in animals, it is more economical for scientists to study a small number of chosen species so that connections can be drawn from their work and conclusions extrapolated about how animals function in general. Because they are easy to keep and breed, the fruit fly Drosophila melanogaster and the nematode Caenorhabditis elegans have long been the most intensively studied metazoan model organisms, and were among the first life-forms to be genetically sequenced. This was facilitated by the severely reduced state of their genomes, but with many genes, introns, and linkages lost, these ecdysozoans can teach us little about the origins of animals in general. The extent of this type of evolution within the superphylum will be revealed by the crustacean, annelid, and molluscan genome projects currently in progress. Analysis of the starlet sea anemone genome has emphasized the importance of sponges, placozoans, and choanoflagellates, also being sequenced, in explaining the arrival of 1500 ancestral genes unique to the Eumetazoa.
An analysis of the homoscleromorph sponge Oscarella carmela also suggests that the last common ancestor of sponges and the eumetazoan animals was more complex than previously assumed.
Other model organisms belonging to the animal kingdom include the house mouse (Mus musculus) and zebrafish (Danio rerio).
See also
Animal attacks
Animal coloration
Biological classification
Ethology
Fauna
List of animal names
Lists of animals
Lists of organisms by population
References
Bibliography
External links
Tree of Life Project
[http://animaldiversity.org/ Animal Diversity Web] – University of Michigan's database of animals, showing taxonomic classification, images, and other information.
ARKive – multimedia database of worldwide endangered/protected species and common species of UK.
The Animal Kingdom
Getting a Leg Up on Land Scientific American Magazine (December 2005 Issue) – About the evolution of four-limbed animals from fish.
Animals
Category:Cryogenian first appearances
Geological history of Earth | thumb|right|325px|Geologic time represented in a diagram called a geological clock, showing the relative lengths of the eons of Earth's history and noting major events
The geological history of Earth follows the major events in Earth's past based on the geologic time scale, a system of chronological measurement based on the study of the planet's rock layers (stratigraphy). Earth formed about 4.54 billion years ago by accretion from the solar nebula, a disk-shaped mass of dust and gas left over from the formation of the Sun, which also created the rest of the Solar System.
Earth was initially molten due to extreme volcanism and frequent collisions with other bodies. Eventually, the outer layer of the planet cooled to form a solid crust when water began accumulating in the atmosphere. The Moon formed soon afterwards, possibly as a result of the impact of a planetoid with the Earth. Outgassing and volcanic activity produced the primordial atmosphere. Condensing water vapor, augmented by ice delivered from comets, produced the oceans.
As the surface continually reshaped itself over hundreds of millions of years, continents formed and broke apart. They migrated across the surface, occasionally combining to form a supercontinent. Roughly 750 million years ago, the earliest-known supercontinent, Rodinia, began to break apart. The continents later recombined to form Pannotia, about 600 to 540 million years ago, then finally Pangaea, which broke apart about 180 million years ago.
The present pattern of ice ages began about 40 million years ago, then intensified at the end of the Pliocene. The polar regions have since undergone repeated cycles of glaciation and thaw, repeating every 40,000–100,000 years. The last glacial period of the current ice age ended about 10,000 years ago.
Precambrian
The Precambrian includes approximately 90% of geologic time. It extends from 4.6 billion years ago to the beginning of the Cambrian Period (about 541 Ma). It includes three eons, the Hadean, Archean, and Proterozoic.
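That rough percentage follows directly from the boundary dates just given (Earth forming about 4.6 billion years ago and the Cambrian beginning about 541 Ma); the small Python sketch below shows the arithmetic. It is only an illustrative back-of-the-envelope check, not part of the formal time scale, and the variable names are mine.

    # Fraction of geologic time occupied by the Precambrian,
    # from ~4600 Ma (formation of Earth) to 541 Ma (start of the Cambrian).
    earth_age_ma = 4600
    cambrian_start_ma = 541
    fraction = (earth_age_ma - cambrian_start_ma) / earth_age_ma
    print(f"Precambrian: {100 * fraction:.0f}% of Earth's history")  # ~88%, i.e. roughly 90%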
Hadean Eon
During Hadean time (4.6–4 Ga), the Solar System was forming, probably within a large cloud of gas and dust around the Sun called an accretion disc, from which Earth formed about 4.54 billion years ago.
thumb|left|150px|Artist's conception of a protoplanetary disc
The Hadean Eon is not formally recognized, but it essentially marks the era before we have an adequate record of significant solid rocks. The oldest dated zircons date from about 4.4 billion years ago.Wilde, S. A.; Valley, J.W.; Peck, W.H. and Graham, C.M. (2001) "Evidence from detrital zircons for the existence of continental crust and oceans on the Earth 4.4 Gyr ago" Nature 409: pp. 175-178 Abstract
Earth was initially molten due to extreme volcanism and frequent collisions with other bodies. Eventually, the outer layer of the planet cooled to form a solid crust when water began accumulating in the atmosphere. The Moon formed soon afterwards, possibly as a result of the impact of a large planetoid with the Earth. Some of this object's mass merged with the Earth, significantly altering its internal composition, and a portion was ejected into space. Some of the material survived to form an orbiting moon. More recent potassium isotopic studies suggest that the Moon was formed by a smaller, high-energy, high-angular-momentum giant impact cleaving off a significant portion of the Earth. Outgassing and volcanic activity produced the primordial atmosphere. Condensing water vapor, augmented by ice delivered from comets, produced the oceans.
During the Hadean the Late Heavy Bombardment occurred (approximately 4.1 to 3.8 Ga), during which a large number of impact craters are believed to have formed on the Moon, and by inference on Earth, Mercury, Venus and Mars as well.
Archean Eon
The Earth of the early Archean (4–2.5 Ga) may have had a different tectonic style. During this time, the Earth's crust cooled enough that rocks and continental plates began to form. Some scientists think that because the Earth was hotter, plate tectonic activity was more vigorous than it is today, resulting in a much greater rate of recycling of crustal material. This may have prevented cratonisation and continent formation until the mantle cooled and convection slowed down. Others argue that the subcontinental lithospheric mantle is too buoyant to subduct and that the lack of Archean rocks is a function of erosion and subsequent tectonic events.
In contrast to the Proterozoic, Archean rocks are often heavily metamorphized deep-water sediments, such as graywackes, mudstones, volcanic sediments and banded iron formations. Greenstone belts are typical Archean formations, consisting of alternating high- and low-grade metamorphic rocks. The high-grade rocks were derived from volcanic island arcs, while the low-grade metamorphic rocks represent deep-sea sediments eroded from the neighboring island arcs and deposited in a forearc basin. In short, greenstone belts represent sutured protocontinents.
The Earth's magnetic field was established 3.5 billion years ago. The solar wind flux was about 100 times the value of the modern Sun, so the presence of the magnetic field helped prevent the planet's atmosphere from being stripped away, which is what likely happened to the atmosphere of Mars. However, the field strength was lower than at present and the magnetosphere was about half the modern radius.
Proterozoic Eon
The geologic record of the Proterozoic () is more complete than that for the preceding Archean. In contrast to the deep-water deposits of the Archean, the Proterozoic features many strata that were laid down in extensive shallow epicontinental seas; furthermore, many of these rocks are less metamorphosed than Archean-age ones, and plenty are unaltered. Study of these rocks shows that the eon featured massive, rapid continental accretion (unique to the Proterozoic), supercontinent cycles, and wholly modern orogenic activity. Roughly (International Stratigraphic Chart 2008, International Commission on Stratigraphy), the earliest-known supercontinent, Rodinia, began to break apart. The continents later recombined to form Pannotia, 600–540 Ma.
The first known glaciations occurred during the Proterozoic; one began shortly after the beginning of the eon, while there were at least four during the Neoproterozoic, climaxing with the Snowball Earth of the Varangian glaciation.
Phanerozoic Eon
The Phanerozoic Eon is the current eon in the geologic timescale. It covers roughly 541 million years. During this period, continents drifted about, eventually collecting into a single landmass known as Pangaea and then splitting up into the current continental landmasses.
The Phanerozoic is divided into three eras — the Paleozoic, the Mesozoic and the Cenozoic.
Paleozoic Era
The Paleozoic spanned from roughly (Ma) and is subdivided into six geologic periods; from oldest to youngest they are the Cambrian, Ordovician, Silurian, Devonian, Carboniferous and Permian. Geologically, the Paleozoic starts shortly after the breakup of a supercontinent called Pannotia and at the end of a global ice age. Throughout the early Paleozoic, the Earth's landmass was broken up into a substantial number of relatively small continents. Toward the end of the era the continents gathered together into a supercontinent called Pangaea, which included most of the Earth's land area.
Cambrian Period
The Cambrian is a major division of the geologic timescale that begins about 541.0 ± 1.0 Ma. Cambrian continents are thought to have resulted from the breakup of a Neoproterozoic supercontinent called Pannotia. The waters of the Cambrian period appear to have been widespread and shallow. Continental drift rates may have been anomalously high. Laurentia, Baltica and Siberia remained independent continents following the break-up of the supercontinent of Pannotia. Gondwana started to drift toward the South Pole. Panthalassa covered most of the southern hemisphere, and minor oceans included the Proto-Tethys Ocean, Iapetus Ocean and Khanty Ocean.
Ordovician Period
The Ordovician Period started at a major extinction event called the Cambrian-Ordovician extinction events some time about 485.4 ± 1.9 Ma. During the Ordovician the southern continents were collected into a single continent called Gondwana. Gondwana started the period in the equatorial latitudes and, as the period progressed, drifted toward the South Pole. Early in the Ordovician the continents Laurentia, Siberia and Baltica were still independent continents (since the break-up of the supercontinent Pannotia earlier), but Baltica began to move toward Laurentia later in the period, causing the Iapetus Ocean to shrink between them. Also, Avalonia broke free from Gondwana and began to head north toward Laurentia. The Rheic Ocean was formed as a result of this. By the end of the period, Gondwana had neared the pole and was largely glaciated.
The Ordovician came to a close in a series of extinction events that, taken together, comprise the second-largest of the five major extinction events in Earth's history in terms of percentage of genera that became extinct. The only larger one was the Permian-Triassic extinction event. The extinctions occurred approximately and mark the boundary between the Ordovician and the following Silurian Period.
The most commonly accepted theory is that these events were triggered by the onset of an ice age, in the Hirnantian faunal stage that ended the long, stable greenhouse conditions typical of the Ordovician. The ice age was probably not as long-lasting as once thought; study of oxygen isotopes in fossil brachiopods shows that it was probably no longer than 0.5 to 1.5 million years. The event was preceded by a fall in atmospheric carbon dioxide (from 7000 ppm to 4400 ppm) which selectively affected the shallow seas where most organisms lived. As the southern supercontinent Gondwana drifted over the South Pole, ice caps formed on it. Evidence of these ice caps has been detected in Upper Ordovician rock strata of North Africa and then-adjacent northeastern South America, which were south-polar locations at the time.
Silurian Period
The Silurian is a major division of the geologic timescale that started about 443.8 ± 1.5 Ma. During the Silurian, Gondwana continued a slow southward drift to high southern latitudes, but there is evidence that the Silurian ice caps were less extensive than those of the late Ordovician glaciation. The melting of ice caps and glaciers contributed to a rise in sea levels, recognizable from the fact that Silurian sediments overlie eroded Ordovician sediments, forming an unconformity. Other cratons and continent fragments drifted together near the equator, starting the formation of a second supercontinent known as Euramerica. The vast ocean of Panthalassa covered most of the northern hemisphere. Other minor oceans include Proto-Tethys, Paleo-Tethys, Rheic Ocean, a seaway of Iapetus Ocean (now in between Avalonia and Laurentia), and newly formed Ural Ocean.
Devonian Period
The Devonian spanned roughly from 419 to 359 Ma. The period was a time of great tectonic activity, as Laurasia and Gondwana drew closer together. The continent Euramerica (or Laurussia) was created in the early Devonian by the collision of Laurentia and Baltica, which rotated into the natural dry zone along the Tropic of Capricorn. In these near-deserts, the Old Red Sandstone sedimentary beds formed, made red by the oxidized iron (hematite) characteristic of drought conditions. Near the equator Pangaea began to consolidate from the plates containing North America and Europe, further raising the northern Appalachian Mountains and forming the Caledonian Mountains in Great Britain and Scandinavia. The southern continents remained tied together in the supercontinent of Gondwana. The remainder of modern Eurasia lay in the Northern Hemisphere. Sea levels were high worldwide, and much of the land lay submerged under shallow seas. The deep, enormous Panthalassa (the "universal ocean") covered the rest of the planet. Other minor oceans were Paleo-Tethys, Proto-Tethys, Rheic Ocean and Ural Ocean (which was closed during the collision with Siberia and Baltica).
Carboniferous Period
The Carboniferous extends from about 358.9 ± 0.4 to about 298.9 ± 0.15 Ma.
A global drop in sea level at the end of the Devonian reversed early in the Carboniferous; this created the widespread epicontinental seas and carbonate deposition of the Mississippian. There was also a drop in south polar temperatures; southern Gondwana was glaciated throughout the period, though it is uncertain if the ice sheets were a holdover from the Devonian or not. These conditions apparently had little effect in the deep tropics, where lush coal swamps flourished within 30 degrees of the northernmost glaciers. A mid-Carboniferous drop in sea-level precipitated a major marine extinction, one that hit crinoids and ammonites especially hard. This sea-level drop and the associated unconformity in North America separate the Mississippian Period from the Pennsylvanian period.
The Carboniferous was a time of active mountain building, as the supercontinent Pangaea came together. The southern continents remained tied together in the supercontinent Gondwana, which collided with North America-Europe (Laurussia) along the present line of eastern North America. This continental collision resulted in the Hercynian orogeny in Europe, and the Alleghenian orogeny in North America; it also extended the newly uplifted Appalachians southwestward as the Ouachita Mountains. In the same time frame, much of the present eastern Eurasian plate welded itself to Europe along the line of the Ural mountains. There were two major oceans in the Carboniferous: Panthalassa and Paleo-Tethys. Other minor oceans were shrinking and eventually closed: the Rheic Ocean (closed by the assembly of South and North America), the small, shallow Ural Ocean (closed by the collision of the Baltica and Siberia continents, creating the Ural Mountains) and the Proto-Tethys Ocean.
thumb|Pangaea separation animation
Permian Period
The Permian extends from about 298.9 ± 0.15 to 252.17 ± 0.06 Ma.
During the Permian all the Earth's major land masses, except portions of East Asia, were collected into a single supercontinent known as Pangaea. Pangaea straddled the equator and extended toward the poles, with a corresponding effect on ocean currents in the single great ocean (Panthalassa, the universal sea), and the Paleo-Tethys Ocean, a large ocean that was between Asia and Gondwana. The Cimmeria continent rifted away from Gondwana and drifted north to Laurasia, causing the Paleo-Tethys to shrink. A new ocean was growing on its southern end, the Tethys Ocean, an ocean that would dominate much of the Mesozoic Era. Large continental landmasses create climates with extreme variations of heat and cold ("continental climate") and monsoon conditions with highly seasonal rainfall patterns. Deserts seem to have been widespread on Pangaea.
Mesozoic Era
thumb|Plate tectonics-
The Mesozoic extended roughly from .
After the vigorous convergent plate mountain-building of the late Paleozoic, Mesozoic tectonic deformation was comparatively mild. Nevertheless, the era featured the dramatic rifting of the supercontinent Pangaea. Pangaea gradually split into a northern continent, Laurasia, and a southern continent, Gondwana. This created the passive continental margin that characterizes most of the Atlantic coastline (such as along the U.S. East Coast) today.
Triassic Period
The Triassic Period extends from about 252.17 ± 0.06 to 201.3 ± 0.2 Ma. During the Triassic, almost all the Earth's land mass was concentrated into a single supercontinent centered more or less on the equator, called Pangaea ("all the land"). This took the form of a giant "Pac-Man" with an east-facing "mouth" constituting the Tethys sea, a vast gulf that opened farther westward in the mid-Triassic, at the expense of the shrinking Paleo-Tethys Ocean, an ocean that existed during the Paleozoic.
The remainder was the world-ocean known as Panthalassa ("all the sea"). All the deep-ocean sediments laid down during the Triassic have disappeared through subduction of oceanic plates; thus, very little is known of the Triassic open ocean. The supercontinent Pangaea was rifting during the Triassic—especially late in the period—but had not yet separated. The first nonmarine sediments in the rift that marks the initial break-up of Pangea—which separated New Jersey from Morocco—are of Late Triassic age; in the U.S., these thick sediments comprise the Newark Supergroup.
Because of the limited shoreline of one super-continental mass, Triassic marine deposits are globally relatively rare, despite their prominence in Western Europe, where the Triassic was first studied. In North America, for example, marine deposits are limited to a few exposures in the west. Thus Triassic stratigraphy is mostly based on organisms living in lagoons and hypersaline environments, such as Estheria crustaceans and terrestrial vertebrates.
Jurassic Period
The Jurassic Period extends from about 201.3 ± 0.2 to 145.0 Ma.
During the early Jurassic, the supercontinent Pangaea broke up into the northern supercontinent Laurasia and the southern supercontinent Gondwana; the Gulf of Mexico opened in the new rift between North America and what is now Mexico's Yucatan Peninsula. The Jurassic North Atlantic Ocean was relatively narrow, while the South Atlantic did not open until the following Cretaceous Period, when Gondwana itself rifted apart.
The Tethys Sea closed, and the Neotethys basin appeared. Climates were warm, with no evidence of glaciation. As in the Triassic, there was apparently no land near either pole, and no extensive ice caps existed. The Jurassic geological record is good in western Europe, where extensive marine sequences indicate a time when much of the continent was submerged under shallow tropical seas; famous locales include the Jurassic Coast World Heritage Site and the renowned late Jurassic lagerstätten of Holzmaden and Solnhofen.
In contrast, the North American Jurassic record is the poorest of the Mesozoic, with few outcrops at the surface. Though the epicontinental Sundance Sea left marine deposits in parts of the northern plains of the United States and Canada during the late Jurassic, most exposed sediments from this period are continental, such as the alluvial deposits of the Morrison Formation. The first of several massive batholiths were emplaced in the northern Cordillera beginning in the mid-Jurassic, marking the Nevadan orogeny.Monroe and Wicander, 607. Important Jurassic exposures are also found in Russia, India, South America, Japan, Australasia and the United Kingdom.
Cretaceous Period
thumb|Plate tectonics- 100 Ma, Cretaceous period
The Cretaceous Period extends from circa to .
During the Cretaceous, the late Paleozoic-early Mesozoic supercontinent of Pangaea completed its breakup into present day continents, although their positions were substantially different at the time. As the Atlantic Ocean widened, the convergent-margin orogenies that had begun during the Jurassic continued in the North American Cordillera, as the Nevadan orogeny was followed by the Sevier and Laramide orogenies. Though Gondwana was still intact in the beginning of the Cretaceous, Gondwana itself broke up as South America, Antarctica and Australia rifted away from Africa (though India and Madagascar remained attached to each other); thus, the South Atlantic and Indian Oceans were newly formed. Such active rifting lifted great undersea mountain chains along the welts, raising eustatic sea levels worldwide.
To the north of Africa the Tethys Sea continued to narrow. Broad shallow seas advanced across central North America (the Western Interior Seaway) and Europe, then receded late in the period, leaving thick marine deposits sandwiched between coal beds. At the peak of the Cretaceous transgression, one-third of Earth's present land area was submerged.Dougal Dixon et al., Atlas of Life on Earth, (New York: Barnes & Noble Books, 2001), p. 215. The Cretaceous is justly famous for its chalk; indeed, more chalk formed in the Cretaceous than in any other period in the Phanerozoic. Mid-ocean ridge activity—or rather, the circulation of seawater through the enlarged ridges—enriched the oceans in calcium; this made the oceans more saturated, as well as increased the bioavailability of the element for calcareous nanoplankton. These widespread carbonates and other sedimentary deposits make the Cretaceous rock record especially fine. Famous formations from North America include the rich marine fossils of Kansas's Smoky Hill Chalk Member and the terrestrial fauna of the late Cretaceous Hell Creek Formation. Other important Cretaceous exposures occur in Europe and China. In the area that is now India, massive lava beds called the Deccan Traps were laid down in the very late Cretaceous and early Paleocene.
Cenozoic Era
The Cenozoic Era covers the million years since the Cretaceous–Paleogene extinction event up to and including the present day. By the end of the Mesozoic era, the continents had rifted into nearly their present form. Laurasia became North America and Eurasia, while Gondwana split into South America, Africa, Australia, Antarctica and the Indian subcontinent, which collided with the Asian plate. This impact gave rise to the Himalayas. The Tethys Sea, which had separated the northern continents from Africa and India, began to close up, forming the Mediterranean sea.
Paleogene Period
The Paleogene (alternatively Palaeogene) Period is a unit of geologic time that began and ended 23.03 Ma and comprises the first part of the Cenozoic Era. This period consists of the Paleocene, Eocene and Oligocene Epochs.
Paleocene Epoch
The Paleocene, lasted from to .
In many ways, the Paleocene continued processes that had begun during the late Cretaceous Period. During the Paleocene, the continents continued to drift toward their present positions. Supercontinent Laurasia had not yet separated into three continents. Europe and Greenland were still connected. North America and Asia were still intermittently joined by a land bridge, while Greenland and North America were beginning to separate.Hooker, J.J., "Tertiary to Present: Paleocene", pp. 459-465, Vol. 5. of Selley, Richard C., L. Robin McCocks, and Ian R. Plimer, Encyclopedia of Geology, Oxford: Elsevier Limited, 2005. ISBN 0-12-636380-3 The Laramide orogeny of the late Cretaceous continued to uplift the Rocky Mountains in the American west, which ended in the succeeding epoch. South and North America remained separated by equatorial seas (they joined during the Neogene); the components of the former southern supercontinent Gondwana continued to split apart, with Africa, South America, Antarctica and Australia pulling away from each other. Africa was heading north toward Europe, slowly closing the Tethys Ocean, and India began its migration to Asia that would lead to a tectonic collision and the formation of the Himalayas.
Eocene Epoch
During the Eocene ( - ), the continents continued to drift toward their present positions. At the beginning of the period, Australia and Antarctica remained connected, and warm equatorial currents mixed with colder Antarctic waters, distributing the heat around the world and keeping global temperatures high. But when Australia split from the southern continent around 45 Ma, the warm equatorial currents were deflected away from Antarctica, and an isolated cold water channel developed between the two continents. The Antarctic region cooled down, and the ocean surrounding Antarctica began to freeze, sending cold water and ice floes north, reinforcing the cooling. The present pattern of ice ages began about .
The northern supercontinent of Laurasia began to break up, as Europe, Greenland and North America drifted apart. In western North America, mountain building started in the Eocene, and huge lakes formed in the high flat basins among uplifts. In Europe, the Tethys Sea finally vanished, while the uplift of the Alps isolated its final remnant, the Mediterranean, and created another shallow sea with island archipelagos to the north. Though the North Atlantic was opening, a land connection appears to have remained between North America and Europe since the faunas of the two regions are very similar. India continued its journey away from Africa and began its collision with Asia, creating the Himalayan orogeny.
Oligocene Epoch
The Oligocene Epoch extends from about to . During the Oligocene the continents continued to drift toward their present positions.
Antarctica continued to become more isolated and finally developed a permanent ice cap. Mountain building in western North America continued, and the Alps started to rise in Europe as the African plate continued to push north into the Eurasian plate, isolating the remnants of the Tethys Sea. A brief marine incursion marks the early Oligocene in Europe. There appears to have been a land bridge in the early Oligocene between North America and Europe since the faunas of the two regions are very similar. During the Oligocene, South America finally detached from Antarctica and drifted north toward North America. This separation also allowed the Antarctic Circumpolar Current to flow, rapidly cooling the continent.
Neogene Period
The Neogene Period is a unit of geologic time starting 23.03 Ma and ending 2.588 Ma. It follows the Paleogene Period, consists of the Miocene and Pliocene epochs, and is followed by the Quaternary Period.
Miocene Epoch
The Miocene extends from about 23.03 to 5.333 Ma.
During the Miocene continents continued to drift toward their present positions. Of the modern geologic features, only the land bridge between South America and North America was absent; the subduction zone along the Pacific Ocean margin of South America caused the rise of the Andes and the southward extension of the Meso-American peninsula. India continued to collide with Asia. The Tethys Seaway continued to shrink and then disappeared as Africa collided with Eurasia in the Turkish-Arabian region between 19 and 12 Ma (ICS 2004). Subsequent uplift of mountains in the western Mediterranean region and a global fall in sea levels combined to cause a temporary drying up of the Mediterranean Sea, resulting in the Messinian salinity crisis near the end of the Miocene.
Pliocene Epoch
The Pliocene extends from to . During the Pliocene continents continued to drift toward their present positions, moving from positions possibly as far as from their present locations to positions only 70 km from their current locations.
South America became linked to North America through the Isthmus of Panama during the Pliocene, bringing a nearly complete end to South America's distinctive marsupial faunas. The formation of the Isthmus had major consequences on global temperatures, since warm equatorial ocean currents were cut off and an Atlantic cooling cycle began, with cold Arctic and Antarctic waters dropping temperatures in the now-isolated Atlantic Ocean. Africa's collision with Europe formed the Mediterranean Sea, cutting off the remnants of the Tethys Ocean. Sea level changes exposed the land bridge between Alaska and Asia. Near the end of the Pliocene, about (the start of the Quaternary Period), the current ice age began. The polar regions have since undergone repeated cycles of glaciation and thaw, recurring every 40,000–100,000 years.
Quaternary Period
Pleistocene Epoch
The Pleistocene extends from to 11,700 years before present. The modern continents were essentially at their present positions during the Pleistocene, the plates upon which they sit probably having moved no more than relative to each other since the beginning of the period.
Holocene Epoch
The Holocene Epoch began approximately 11,700 calendar years before present and continues to the present. During the Holocene, continental motions have been less than a kilometer.
The last glacial period of the current ice age ended about 10,000 years ago. Ice melt caused world sea levels to rise about in the early part of the Holocene. In addition, many areas above about 40 degrees north latitude had been depressed by the weight of the Pleistocene glaciers and rose as much as over the late Pleistocene and Holocene, and are still rising today. The sea level rise and temporary land depression allowed temporary marine incursions into areas that are now far from the sea. Holocene marine fossils are known from Vermont, Quebec, Ontario and Michigan. Other than higher latitude temporary marine incursions associated with glacial depression, Holocene fossils are found primarily in lakebed, floodplain and cave deposits. Holocene marine deposits along low-latitude coastlines are rare because the rise in sea levels during the period exceeds any likely upthrusting of non-glacial origin. Post-glacial rebound in Scandinavia resulted in the emergence of coastal areas around the Baltic Sea, including much of Finland. The region continues to rise, still causing weak earthquakes across Northern Europe. The equivalent event in North America was the rebound of Hudson Bay, as it shrank from its larger, immediate post-glacial Tyrrell Sea phase, to near its present boundaries.
See also
Future of the Earth
Plate reconstruction
Plate tectonics
Timeline of natural history
References
Further reading
External links
Cosmic Evolution — a detailed look at events from the origin of the universe to the present
Valley, John W. "A Cool Early Earth?" Scientific American. 2005 Oct:58–65. – discusses the timing of the formation of the oceans and other major events in Earth's early history.
Davies, Paul. "Quantum leap of life". The Guardian. 2005 Dec 20. – discusses speculation into the role of quantum systems in the origin of life
Evolution timeline (uses Shockwave). Animated story of life since about 13,700,000,000 years ago, showing everything from the Big Bang to the formation of the Earth and the development of bacteria and other organisms to the ascent of man.
Theory of the Earth & Abstract of the Theory of the Earth
Paleomaps Since 600 Ma (Mollweide Projection, Longitude 0)
Paleomaps Since 600 Ma (Mollweide Projection, Longitude 180)
Category:Geochronology
Category:Earth
Miami
Miami (; ) is a seaport city at the southeastern corner of the U.S. state of Florida, on its Atlantic coast. As the seat of Miami-Dade County, the municipality is the principal, central, and most populous city of its metropolitan area and part of the second-most populous metropolis in the southeastern United States. According to the U.S. Census Bureau, Miami's metro area is the eighth-most populous and fourth-largest urban area in the U.S., with a population of around 5.5 million.Demographia: World Urban Areas.
Miami is a major center and leader in finance, commerce, culture, media, entertainment, the arts, and international trade. In 2012, Miami was classified as an Alpha−World City in the World Cities Study Group's inventory. In 2010, Miami ranked seventh in the United States in terms of finance, commerce, culture, entertainment, fashion, education, and other sectors. It ranked 33rd among global cities. In 2008, Forbes magazine ranked Miami "America's Cleanest City", for its year-round good air quality, vast green spaces, clean drinking water, clean streets, and citywide recycling programs. According to a 2009 UBS study of 73 world cities, Miami was ranked as the richest city in the United States, and the world's fifth-richest city in terms of purchasing power. Miami is nicknamed the "Capital of Latin America" and is the largest city with a Cuban-American plurality.U.S. Census, 2010 (Ethnicity) and Census American Community Survey 2008 (language).
Miami has the third tallest skyline in the U.S. with over 300 high-rises. Downtown Miami is home to the largest concentration of international banks in the United States, and many large national and international companies.Nest Seekers International. Nestseekers.com. Retrieved on September 5, 2015.Brickell – Downtown Miami, Florida. Madduxco.com. Retrieved on October 8, 2012. The Civic Center is a major center for hospitals, research institutes, medical centers, and biotechnology industries. For more than two decades, the Port of Miami, known as the "Cruise Capital of the World", has been the number one cruise passenger port in the world. It accommodates some of the world's largest cruise ships and operations, and is the busiest port in both passenger traffic and cruise lines.Miami-Dade.gov Port of Miami. Miamidade.gov. Retrieved on October 8, 2012. Cruise lines departing from the Port of Miami. Gomiami.about.com (April 10, 2012). Retrieved on October 8, 2012. Metropolitan Miami is the major tourism hub in the American South, number two in the U.S. after New York City and number 13 in the world, including the popular destination of Miami Beach.
History
thumb|left|Approximately 400 men voted for Miami's incorporation in 1896 in the building to the left.
The Miami area was inhabited for thousands of years by indigenous cultures. The Tequestas occupied the area for a thousand years before encountering Europeans. An Indian village of hundreds of people dating to 500–600 B.C. was located at the mouth of the Miami River.
In 1566 the explorer Pedro Menéndez de Avilés claimed it for Spain. A Spanish mission was constructed one year later, in 1567. Spain and Great Britain successively "controlled" Florida, and Spain ceded it to the United States in 1821. In 1836, the US built Fort Dallas as part of its development of the Florida Territory and attempt to suppress and remove the Seminole. The Miami area subsequently became a site of fighting during the Second Seminole War.
Miami is noted as "the only major city in the United States conceived by a woman, Julia Tuttle", a local citrus grower and a wealthy Cleveland native. The Miami area was better known as "Biscayne Bay Country" in the early years of its growth. In the late 19th century, reports described the area as a promising wilderness. The area was also characterized as "one of the finest building sites in Florida." The Great Freeze of 1894–95 hastened Miami's growth, as the crops of the Miami area were the only ones in Florida that survived. Julia Tuttle subsequently convinced Henry Flagler, a railroad tycoon, to expand his Florida East Coast Railway to the region, for which she became known as "the mother of Miami." Miami was officially incorporated as a city on July 28, 1896 with a population of just over 300. It was named for the nearby Miami River, derived from Mayaimi, the historic name of Lake Okeechobee.
thumb|upright=0.9|The Freedom Tower, built in 1925, is Miami's historical landmark.
Black labor played a crucial role in Miami's early development. During the beginning of the 20th century, migrants from the Bahamas and African-Americans constituted 40 percent of the city's population. Whatever their role in the city's growth, their community's growth was limited to a small space. When landlords began to rent homes to African-Americans in neighborhoods close to Avenue J (what would later become NW Fifth Avenue), a gang of white men with torches visited the renting families and warned them to move or be bombed.
During the early 20th century, northerners were attracted to the city, and Miami prospered during the 1920s with an increase in population and infrastructure. The legacy of Jim Crow was embedded in these developments. Miami's chief of police, H. Leslie Quigg, did not hide the fact that he, like many other white Miami police officers, was a member of the Ku Klux Klan. Unsurprisingly, these officers enforced social codes far beyond the written law. Quigg, for example, "personally and publicly beat a colored bellboy to death for speaking directly to a white woman."
The collapse of the Florida land boom of the 1920s, the 1926 Miami Hurricane, and the Great Depression in the 1930s slowed development. When World War II began, Miami, well-situated on the southern coast of Florida, became a base for US defense against German submarines. The war brought an increase in Miami's population; by 1940, 172,172 people lived in the city.
After Fidel Castro rose to power in Cuba in 1959, many wealthy Cubans sought refuge in Miami, further increasing the population. The city developed businesses and cultural amenities as part of the New South. In the 1980s and 1990s, South Florida weathered social problems related to drug wars, immigration from Haiti and Latin America, and the widespread destruction of Hurricane Andrew. Racial and cultural tensions were sometimes sparked, but the city developed in the latter half of the 20th century as a major international, financial, and cultural center. It is the second-largest US city (after El Paso, Texas) with a Spanish-speaking majority, and the largest city with a Cuban-American plurality.
Miami and its metropolitan area grew from just over 1,000 residents to nearly 5.5 million residents in just 110 years (1896–2006). The city's nickname, The Magic City, comes from this rapid growth. Winter visitors remarked that the city grew so much from one year to the next that it was like magic.
Geography
thumb|right|The mouth of the Miami River at Brickell Key
Miami and its suburbs are located on a broad plain between the Florida Everglades to the west and Biscayne Bay to the east, which also extends from Florida Bay north to Lake Okeechobee. The elevation of the area never rises above and averages around above mean sea level in most neighborhoods, especially near the coast. The highest undulations are found along the coastal Miami Rock Ridge, whose substrate underlies most of the eastern Miami metropolitan region. The main portion of the city lies on the shores of Biscayne Bay, which contains several hundred natural and artificially created barrier islands, the largest of which contains Miami Beach and South Beach. The Gulf Stream, a warm ocean current, runs northward just off the coast, allowing the city's climate to stay warm and mild all year.
Geology
thumb|left|View from one of the higher points in Miami, west of downtown. The highest natural point in the city of Miami is in Coconut Grove, near the bay, along the Miami Rock Ridge at above sea level.
The surface bedrock under the Miami area is called Miami oolite or Miami limestone. This bedrock is covered by a thin layer of soil, and is no more than thick. Miami limestone formed as the result of the drastic changes in sea level associated with recent glaciations or ice ages. Beginning some 130,000 years ago the Sangamonian Stage raised sea levels to approximately above the current level. All of southern Florida was covered by a shallow sea. Several parallel lines of reef formed along the edge of the submerged Florida plateau, stretching from the present Miami area to what is now the Dry Tortugas. The area behind this reef line was in effect a large lagoon, and the Miami limestone formed throughout the area from the deposition of oolites and the shells of bryozoans. Starting about 100,000 years ago the Wisconsin glaciation began lowering sea levels, exposing the floor of the lagoon. By 15,000 years ago, the sea level had dropped to below the contemporary level. The sea level rose quickly after that, stabilizing at the current level about 4000 years ago, leaving the mainland of South Florida just above sea level.
Beneath the plain lies the Biscayne Aquifer, a natural underground source of fresh water that extends from southern Palm Beach County to Florida Bay, with its highest point peaking around the cities of Miami Springs and Hialeah. Most of the Miami metropolitan area obtains its drinking water from this aquifer. As a result of the aquifer, it is not possible to dig more than beneath the city without hitting water, which impedes underground construction, though some underground parking garages exist. For this reason, the mass transit systems in and around Miami are elevated or at-grade.
Most of the western fringes of the city extend into the Everglades, a subtropical marshland located in the southern portion of the U.S. state of Florida. Alligators have ventured into Miami communities and on major highways.
In terms of land area, Miami is one of the smallest major cities in the United States. According to the US Census Bureau, the city encompasses a total area of . Of that area, is land and is water. That means Miami comprises over 400,000 people in , making it one of the most densely populated cities in the United States, along with New York City, San Francisco, Boston, Chicago, and Philadelphia.
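The density claim above can be illustrated with simple arithmetic. In the sketch below, the land-area value (missing from the paragraph above) is backed out from the 2010 Census population and density quoted in the demographics table later in this section, so treat it as an approximation rather than an official figure:

```python
# Illustrative density arithmetic for Miami. The population and density come
# from the 2010 Census figures quoted in the demographics table below
# (399,457 residents; 11,135.9 people per square mile); the land area is
# derived from them here, not taken from an official source.
population = 399_457
density_per_sq_mi = 11_135.9

land_area_sq_mi = population / density_per_sq_mi
print(f"Implied land area: {land_area_sq_mi:.1f} sq mi")              # ~35.9 sq mi
print(f"Density check: {population / land_area_sq_mi:,.0f} per sq mi")
```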
Cityscape
Neighborhoods
Miami is partitioned into many different sections, roughly into North, South, West and Downtown. The heart of the city is Downtown Miami, which is technically on the eastern side of the city. This area includes Brickell, Virginia Key, Watson Island, and PortMiami. Downtown is South Florida's central business district, and Florida's largest and most influential central business district. Downtown has the largest concentration of international banks in the U.S. along Brickell Avenue. Downtown is home to many major banks, courthouses, financial headquarters, cultural and tourist attractions, schools, parks and a large residential population. East of Downtown, across Biscayne Bay, is South Beach. Just northwest of Downtown is the Civic Center, which is Miami's center for hospitals, research institutes and biotechnology, with hospitals such as Jackson Memorial Hospital, Miami VA Hospital, and the University of Miami's Leonard M. Miller School of Medicine.
The southern side of Miami includes Coral Way, The Roads and Coconut Grove. Coral Way is a historic residential neighborhood built in 1922 connecting Downtown with Coral Gables, and is home to many old homes and tree-lined streets. Coconut Grove was established in 1825 and is the location of Miami's City Hall in Dinner Key, the Coconut Grove Playhouse, CocoWalk, many nightclubs, bars, restaurants and bohemian shops, and as such, is very popular with local college students. It is a historic neighborhood with narrow, winding roads, and a heavy tree canopy. Coconut Grove has many parks and gardens such as Villa Vizcaya, The Kampong, The Barnacle Historic State Park, and is the home of the Coconut Grove Convention Center and numerous historic homes and estates.
The western side of Miami includes Little Havana, West Flagler, and Flagami, and is home to many of the city's traditionally immigrant neighborhoods. Although at one time a mostly Jewish neighborhood, today western Miami is home to immigrants from mostly Central America and Cuba, while the west central neighborhood of Allapattah is a multicultural community of many ethnicities.
The northern side of Miami includes Midtown, a diverse district home to many West Indians, Hispanics, European Americans, bohemians, and artists. Edgewater and Wynwood are neighborhoods of Midtown made up mostly of high-rise residential towers and are home to the Adrienne Arsht Center for the Performing Arts. The wealthier residents usually live in the northeastern part, in Midtown, the Design District, and the Upper East Side, with many sought-after 1920s homes and the MiMo Historic District, a style of architecture that originated in Miami in the 1950s. The northern side of Miami also has notable African American and Caribbean immigrant communities such as Little Haiti, Overtown (home of the Lyric Theater), and Liberty City.
Climate
Miami has a tropical monsoon climate (Köppen climate classification Am) with a markedly drier season in the winter. Its sea-level elevation, coastal location, position just above the Tropic of Cancer, and proximity to the Gulf Stream shape its climate. With January averaging , winter features highs generally ranging between . Cool air usually settles in after the passage of a cold front, which produces much of the season's small amount of rainfall. Lows fall below on an average of 10–15 nights during the winter season following the passage of cold fronts.
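The Köppen "Am" label cited above comes down to a simple threshold test on monthly temperature and rainfall normals. The sketch below uses the standard Köppen tropical-climate criteria; the monthly values are hypothetical placeholders (the article's own climate figures did not survive extraction), so substitute real station normals to reproduce the classification:

```python
# Sketch of the Köppen tropical-climate test behind the "Am" (tropical monsoon)
# label. Thresholds are the standard Köppen criteria; the sample monthly values
# below are hypothetical, Miami-like placeholders, not official normals.
def koppen_tropical_subtype(monthly_temp_c, monthly_precip_mm):
    if min(monthly_temp_c) < 18:          # tropical (A) climates: coolest month >= 18 C
        return "not a tropical (A) climate"
    driest = min(monthly_precip_mm)
    annual = sum(monthly_precip_mm)
    if driest >= 60:
        return "Af (tropical rainforest)"
    if driest >= 100 - annual / 25:       # dry month offset by a large annual total
        return "Am (tropical monsoon)"
    return "Aw/As (tropical savanna)"

# Hypothetical monthly normals: warm all year, a dry winter month under 60 mm,
# and a wet-season total large enough to satisfy the Am rule.
temps_c = [20, 21, 22, 24, 26, 28, 29, 29, 28, 26, 24, 21]
precip_mm = [40, 50, 60, 80, 140, 240, 160, 220, 250, 160, 80, 50]
print(koppen_tropical_subtype(temps_c, precip_mm))   # -> Am (tropical monsoon)
```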
thumb|left|Typical summer afternoon thunderstorm rolling in from the Everglades.
The wet season begins some time in May, ending in mid-October. During this period, temperatures are in the mid 80s to low 90s (29–35 °C), accompanied by high humidity, though the heat is often relieved by afternoon thunderstorms or a sea breeze that develops off the Atlantic Ocean, which then allow lower temperatures, but conditions still remain very muggy. Much of the year's of rainfall occurs during this period. Dew points in the warm months range from in June to in August.
Extremes range from on February 3, 1917 to on July 21, 1940. While Miami has never officially recorded snowfall at any official weather station since records have been kept, snow flurries fell in some parts of Miami on January 19, 1977."Maine shivers at -29: Snow falls in Florida". Associated Press. The Baltimore Sun. January 20, 1977. p. A1. "Temperatures dipped into the 30's in southern Florida, with snow flurries reported even in Miami Beach."Lardner Jr., George; Meyers, Robert. "Miami Is Hit by First Recorded Snow: State of Emergency Is Eyed for Virginia Thousands Idled as Cold Closes Factories, Businesses". The Washington Post. January 20, 1977. p. A1. The meandering jet stream in the upper atmosphere sent flurries of genuine snow onto Miami's palm trees. ... It was the farthest south that snow has been reported in the United States since the record books were started in the 19th century. ... The snow flurries in Miami will be only an asterisk in the record books since they didn't fall on any of the National Weather Service's recording stations in the area, but they were genuine."Khiss, Peter. "New York High is 26 as the South Shivers: Florida Snow Causes Emergency Gas Shortage Widespread". The New York Times. January 20, 1977. p. 1. "Florida officially recorded snow for the first time yesterday in Palm Beach County, 65 miles north of Miami, and even that city had flurries, although not at the official stations at its airport or nearby Coral Gables."
Hurricane season officially runs from June 1 through November 30, although hurricanes can develop beyond those dates. The most likely time for Miami to be hit is during the peak of the Cape Verde season, which is mid-August through the end of September. Although tornadoes are uncommon in the area, one struck in 1925 and again in 1997.
Miami falls under the USDA 10b/11a Plant Hardiness zone.
Demographics
The city proper is home to less than one-thirteenth of the population of South Florida. Miami is the 42nd-most populous city in the United States. The Miami metropolitan area, which includes Miami-Dade, Broward and Palm Beach counties, had a combined population of more than 5.5 million people, ranked seventh largest in the United States, and is the largest metropolitan area in the Southeastern United States. , the United Nations estimates that the Miami Urban Agglomeration is the 44th-largest in the world.
The 2010 US Census file for Hispanic or Latino origin reports that 34.4% of the population were of Cuban origin, 15.8% shared a Central American background (7.2% Nicaraguan, 5.8% Honduran, 1.2% Salvadoran, and 1.0% Guatemalan), 8.7% were of South American descent (3.2% Colombian, 1.4% Venezuelan, 1.2% Peruvian, 1.2% Argentinean, and 0.7% Ecuadorian), 4.0% had other Hispanic or Latino origins (0.5% Spaniard), 3.2% descended from Puerto Ricans, 2.4% were Dominican, and 1.5% had Mexican ancestry.
, those of African ancestry accounted for 19.2% of Miami's population, which includes African Americans. Out of the 19.2%, 5.6% were West Indian or Afro-Caribbean American (4.4% Haitian, 0.4% Jamaican, 0.4% Bahamian, 0.1% British West Indian, and 0.1% Trinidadian and Tobagonian, 0.1% Other or Unspecified West Indian), 3.0% were Black Hispanics, and 0.4% were Subsaharan African.
, those of (non-Hispanic white) European ancestry accounted for 11.9% of Miami's population. Out of the 11.9%, 1.7% were German, 1.6% Italian, 1.4% Irish, 1.0% English, 0.8% French, 0.6% Russian, and 0.5% were Polish.
, those of Asian ancestry accounted for 1.0% of Miami's population. Out of the 1.0%, 0.3% were Indian people/Indo-Caribbean American (1,206 people), 0.3% Chinese (1,804 people), 0.2% Filipino (647 people), 0.1% were other Asian (433 people), 0.1% Japanese (245 people), 0.1% Korean (213 people), and 0.0% were Vietnamese (125 people).
In 2010, 1.9% of the population considered themselves to be of only American ancestry (regardless of race or ethnicity), and 0.5% were of Arab ancestry.
, there were 158,317 households of which 14.0% were vacant. 22.7% had children under the age of 18 living with them, 31.3% were married couples living together, 18.1% have a female head of household with no husband present, and 43.1% were non-families. 33.3% of all households were made up of individuals and 11.3% had someone living alone who was 65 years of age or older (4.0% male and 7.3% female.) The average household size was 2.47 and the average family size was 3.15.
In 2010, the city population was spread out with 18.8% under the age of 18, 9.4% from 18 to 24, 33.1% from 25 to 44, 25.0% from 45 to 64, and 13.6% who were 65 years of age or older. The median age was 38.8 years. For every 100 females there were 99.2 males. For every 100 females age 18 and over, there were 98.1 males.
, the median income for a household in the city was $29,621, and the median income for a family was $33,379. Males had a median income of $27,849 versus $24,518 for females. The per capita income for the city was $19,745. About 22.2% of families and 27.3% of the population were below the poverty line, including 37.1% of those under age 18 and 32.8% of those aged 65 or over.
In 2010, 58.1% of the county's population was foreign born, with 41.1% being naturalized American citizens. Of foreign-born residents, 95.4% were born in Latin America, 2.4% were born in Europe, 1.4% born in Asia, 0.5% born in Africa, 0.2% in North America, and 0.1% were born in Oceania.
thumb|right|Plymouth Congregational Church in Coconut Grove.
In 2004, the United Nations Development Program (UNDP) reported that Miami had the highest proportion of foreign-born residents of any major city worldwide (59%), followed by Toronto (50%).
In 1960, non-Hispanic whites represented 80% of Miami-Dade county's population."Demographic Profile, Miami-Dade County, Florida 1960–2000 " (PDF). Miamidade.gov. In 1970, the Census Bureau reported Miami's population as 45.3% Hispanic, 32.9% non-Hispanic White, and 22.7% Black. Miami's explosive population growth has been driven by internal migration from other parts of the country, primarily up until the 1980s, as well as by immigration, primarily from the 1960s to the 1990s. Today, immigration to Miami has slowed significantly and the city's growth is attributed largely to its fast urbanization and high-rise construction, which has increased its inner-city neighborhood population densities, such as in Downtown, Brickell, and Edgewater, where one area in Downtown alone saw a 2,069% increase in population in the 2010 Census. Miami is regarded as more of a multicultural mosaic than a melting pot, with residents still maintaining much or some of their cultural traits. The overall culture of Miami is heavily influenced by its large population of Hispanics and blacks mainly from the Caribbean islands.
Miami Demographics (2010 Census)
                                               Miami            Miami-Dade County   Florida
Total population                               399,457          2,496,435           18,801,310
Population, percent change, 2000 to 2010       +10.2%           +10.8%              +17.6%
Population density                             11,135.9/sq mi   1,315.5/sq mi       350.6/sq mi
White or Caucasian (including White Hispanic)  72.6%            73.8%               75.0%
Non-Hispanic White or Caucasian                11.9%            15.4%               57.9%
Black or African-American                      19.2%            18.9%               16.0%
Hispanic or Latino (of any race)               70.0%            65.0%               22.5%
Asian                                          1.0%             1.5%                2.4%
Native American or Native Alaskan              0.3%             0.2%                0.4%
Pacific Islander or Native Hawaiian            0.0%             0.0%                0.1%
Two or more races (Multiracial)                2.7%             2.4%                2.5%
Some Other Race                                4.2%             3.2%                3.6%
Historic Ethnic Makeup of Miami (source: Shaping Florida: The Effects of Immigration, 1970–2020 | Center for Immigration Studies. Cis.org. Retrieved on October 8, 2012.)
Year   White (includes White Hispanics)   Non-Hispanic White   Black   Asian   Other   Hispanic (of any race)
1910   58.7%                              –                    41.3%   0.1%    –       –
1920   68.5%                              –                    31.3%   0.1%    –       –
1930   77.3%                              –                    22.7%   0.1%    –       –
1940   78.5%                              –                    21.4%   0.1%    –       –
1950   83.7%                              –                    16.2%   0.1%    –       –
1960   77.4%                              –                    22.4%   0.1%    –       17.6%
1970   76.6%                              41.7%                22.7%   0.3%    0.4%    44.6%
1980   66.6%                              19.4%                25.1%   0.5%    7.8%    55.9%
1990   65.6%                              12.2%                27.4%   0.6%    6.4%    62.5%
2000   66.6%                              11.8%                22.3%   0.7%    5.6%    65.8%
2010   72.6%                              11.9%                19.2%   1.0%    4.2%    70.0%
Languages
, 70.2% of Miami's population age five and over spoke only Spanish at home while 22.7% of the population spoke English at home. About 6.3% spoke other Indo-European languages at home. About 0.4% spoke Asian languages or Pacific Islander languages/Oceanic languages at home. The remaining 0.3% of the population spoke other languages at home. In total, 77.3% spoke a language other than English at home.
As of 2000, 66.75% of residents spoke Spanish at home, while those who only spoke English made up 25.45%. Speakers of Haitian Creole (French-based) were 5.20%, French speakers comprised 0.76% of the population, and Portuguese at 0.41%. Among U.S. cities, Miami has one of the highest proportions of residents who speak languages other than English at home (74.55% in 2000).
Due to English-speakers moving away from the area, the percentage of residents who speak only English is expected to continue to decline."In Miami, Spanish becoming primary language." Associated Press at MSNBC. May 29, 2008. Retrieved March 27, 2010.
Religion
Christianity is the most prevalently practiced religion in Miami (68%), according to a 2014 study by the Pew Research Center, with 39% professing attendance at a variety of churches that could be considered Protestant and 27% professing Roman Catholic beliefs.Major U.S. metropolitan areas differ in their religious profiles, Pew Research Center It is followed by Judaism (8%); Islam, Hinduism, Buddhism, and a variety of other religions have smaller followings, while 24% reported atheism or no organized religious affiliation.
There has been a Norwegian Seamen's church in Miami since the early 1980s. In November 2011, Crown Princess Mette-Marit opened a new building for the church. The church was built as a center for the 10,000 Scandinavians that live in Florida. Around 4,000 of them are Norwegian. The church is also an important place for the 150 Norwegians that work at Disney World.Crown Princess Opens Seamen's Church in Miami. Norwaypost.no (November 21, 2011). Retrieved on August 3, 2013.
Civic engagement
Organizations such as the Miami-Dade Salvation Army and its iconic Red Kettle Christmas Campaign, Hands On Miami, City Year Miami, Human Services Coalition of South Florida, and Citizens for a Better South Florida, among many other organizations have been working to engage Miamians in volunteerism.
Economy
thumb|right|Downtown is South Florida's main hub for finance, commerce and international business. Brickell Avenue has the largest concentration of international banks in the U.S.
thumb|right|As seen in 2006, the high-rise construction in Miami has inspired popular opinion of "Miami manhattanization"
thumb|Brickell Avenue in Downtown Miami's Brickell Financial District
thumb|right|PortMiami is the world's largest cruise ship port, and is the headquarters of many of the world's largest cruise companies
Miami is a major center of commerce and finance, and boasts a strong international business community. According to the ranking of world cities undertaken by the Globalization and World Cities Study Group & Network (GaWC) in 2010, based on the level of presence of global corporate service organizations, Miami is considered an "Alpha minus" world city."The World According to GaWC 2012". Lboro.ac.uk (September 14, 2011). Retrieved on October 8, 2012. Miami has a Gross Metropolitan Product of $257 billion, ranked 20th worldwide and 11th in the United States.https://www.ukmediacentre.pwc.com/imagelibrary/downloadMedia.ashx?MediaDetailsID=1562
Several large companies are headquartered in or around Miami, including but not limited to: Akerman Senterfitt, Alienware, Arquitectonica, Arrow Air, Bacardi, Benihana, Brightstar Corporation, Burger King, Celebrity Cruises, Carnival Corporation, Carnival Cruise Lines, Crispin Porter + Bogusky, Duany Plater-Zyberk & Company, Espírito Santo Financial Group, Fizber.com, Greenberg Traurig, Holland & Knight, Inktel Direct, Interval International, Lennar, Navarro Discount Pharmacies, Norwegian Cruise Lines, Oceania Cruises, Perry Ellis International, RCTV International, Royal Caribbean Cruise Lines, Ryder Systems, Seabourn Cruise Line, Sedano's, Telefónica USA, UniMÁS, Telemundo, Univision, U.S. Century Bank, Vector Group, and World Fuel Services. Because of its proximity to Latin America, Miami serves as the headquarters of Latin American operations for more than 1400 multinational corporations, including AIG, American Airlines, Cisco, Disney, Exxon, FedEx, Kraft Foods, LEO Pharma Americas, Microsoft, Yahoo, Oracle, SBC Communications, Sony, Symantec, Visa International, and Wal-Mart.
Miami is a major television production center, and the most important city in the U.S. for Spanish language media. Univisión, Telemundo and UniMÁS have their headquarters in Miami, along with their production studios. The Telemundo Television Studios produces much of the original programming for Telemundo, such as their telenovelas and talk shows. In 2011, 85% of Telemundo's original programming was filmed in Miami.Telemundo plans to tape 1,100 hours of telenovelas in Miami. Miamitodaynews.com (June 23, 2011). Retrieved on October 8, 2012. Miami is also a major music recording center, with the Sony Music Latin and Universal Music Latin Entertainment headquarters in the city, along with many other smaller record labels. The city also attracts many artists for music video and film shootings.
Since 2001, Miami has been undergoing a large building boom with more than 50 skyscrapers rising over built or currently under construction in the city. Miami's skyline is ranked third-most impressive in the U.S., behind New York City and Chicago, and 19th in the world according to the Almanac of Architecture and Design. The city currently has the eight tallest (as well as thirteen of the fourteen tallest) skyscrapers in the state of Florida, with the tallest being the Four Seasons Hotel & Tower.
During the mid-2000s, the city witnessed its largest real estate boom since the Florida land boom of the 1920s. During this period, the city had well over a hundred approved high-rise construction projects, of which 50 were actually built.Miami: High rise buildings–All. Emporis. Retrieved August 25, 2007. In 2007, however, the housing market crashed, causing many foreclosures. This rapid high-rise construction has led to fast population growth in the city's inner neighborhoods, primarily in Downtown, Brickell and Edgewater, with these neighborhoods becoming the fastest-growing areas in the city. The Miami area ranks 8th in the nation in foreclosures. In 2011, Forbes magazine named Miami the second-most miserable city in the United States due to its high foreclosure rate and past decade of corruption among public officials. In 2012, Forbes magazine named Miami the most miserable city in the United States because of a crippling housing crisis that has cost multitudes of residents their homes and jobs. The metro area has one of the highest violent crime rates in the country and workers face lengthy daily commutes. Like other metro areas in the United States, crime in Miami is localized to specific neighborhoods.
Miami International Airport and PortMiami are among the nation's busiest ports of entry, especially for cargo from South America and the Caribbean. The Port of Miami is the world's busiest cruise port, and MIA is the busiest airport in Florida, and the largest gateway between the United States and Latin America. Additionally, the city has the largest concentration of international banks in the country, primarily along Brickell Avenue in Brickell, Miami's financial district. Due to its strength in international business, finance and trade, many international banks have offices in Downtown such as Espírito Santo Financial Group, which has its U.S. headquarters in Miami. Miami was also the host city of the 2003 Free Trade Area of the Americas negotiations, and is one of the leading candidates to become the trading bloc's headquarters.
, PortMiami accounts for 176,000 jobs and has an annual economic impact in Miami of $18 billion. It is the 11th-largest cargo container port in the United States. In 2010, a record 4.33 million passengers traveled through PortMiami. One in seven of all the world's cruise passengers start from Miami.
right|thumb|The Civic Center has the country's second-largest concentration of medical and research facilities. It is the center of Miami's growing biotechnology sectors.
Tourism is also an important industry in Miami. Along with finance and business, the beaches, conventions, festivals and events draw over 38 million visitors annually into the city, from across the country and around the world, spending $17.1 billion. The Art Deco District in South Beach is reputed to be one of the most glamorous in the world for its nightclubs, beaches, historical buildings, and shopping. Annual events such as the Sony Ericsson Open, Art Basel, Winter Music Conference, South Beach Wine & Food Festival, and Mercedes-Benz Fashion Week Miami attract millions to the metropolis every year.
Miami is the home to the National Hurricane Center and the headquarters of the United States Southern Command, responsible for military operations in Central and South America. In addition to these roles, Miami is also an industrial center, especially for stone quarrying and warehousing. These industries are centered largely on the western fringes of the city near Doral and Hialeah.
According to the U.S. Census Bureau, in 2004, Miami had the third highest incidence of family incomes below the federal poverty line in the United States, making it the third poorest city in the USA, behind only Detroit, Michigan (ranked #1) and El Paso, Texas (ranked #2). Miami is also one of the very few cities whose local government has gone bankrupt, which occurred in 2001. However, since that time, Miami has experienced a revival: in 2008, Miami was ranked as "America's Cleanest City" according to Forbes for its year-round good air quality, vast green spaces, clean drinking water, clean streets and citywide recycling programs. In a 2009 UBS study of 73 world cities, Miami was ranked as the richest city in the United States (of four U.S. cities included in the survey) and the world's fifth-richest city in terms of purchasing power.
Largest employers in Miami (employer – employees):
Miami-Dade County Public Schools – 48,571
Miami-Dade County – 29,000
United States Government – 19,500
Florida Government – 17,100
University of Miami – 16,100
Baptist Health South Florida – 13,376
Jackson Health – 12,576
Publix – 10,800
American Airlines – 9,000
Florida International University – 8,000
Miami Dade College – 6,200
Precision Response Corporation – 5,000
City of Miami – 4,309
Florida Power and Light Company – 3,840
Carnival Cruise Lines – 3,500
Culture
Entertainment and performing arts
thumb|Adrienne Arsht Center for the Performing Arts, the second-largest performing arts center in the United States.
In addition to annual festivals such as the Calle Ocho Festival and Carnaval Miami, Miami is home to many entertainment venues, theaters, museums, parks and performing arts centers. The newest addition to the Miami arts scene is the Adrienne Arsht Center for the Performing Arts, the second-largest performing arts center in the United States after the Lincoln Center in New York City, and the home of the Florida Grand Opera. Within it are the Ziff Ballet Opera House, the center's largest venue, the Knight Concert Hall, the Carnival Studio Theater and the Peacock Rehearsal Studio. The center attracts many large-scale operas, ballets, concerts, and musicals from around the world and is Florida's grandest performing arts center. Other performing arts venues in Miami include the Gusman Center for the Performing Arts, Coconut Grove Playhouse, Colony Theatre, Lincoln Theatre, New World Center, Actor's Playhouse at the Miracle Theatre, Jackie Gleason Theatre, Manuel Artime Theater, Ring Theatre, Playground Theatre, Wertheim Performing Arts Center, the Fair Expo Center and the Bayfront Park Amphitheater for outdoor music events.
The city attracts a large number of musicians, singers, actors, dancers, and orchestral players. Miami has numerous orchestras, symphonies and performing art conservatories. Some of these include the Florida Grand Opera, FIU School of Music, Frost School of Music, Miami City Ballet, Miami Conservatory, Miami Wind Symphony, New World School of the Arts, New World Symphony Orchestra, as well as the music, theater and art schools of the city's many universities and schools.
Miami is also a major fashion center, home to models and some of the top modeling agencies in the world. Miami is also host to many fashion shows and events, including the annual Miami Fashion Week and the Mercedes-Benz Fashion Week Miami held in the Wynwood Art District.
Museums and art
The city is home to numerous museums as well, many of which are in Downtown. These include the Frost Art Museum, HistoryMiami, Miami Art Museum, Miami Children's Museum, Miami Science Museum, Vizcaya Museum and Gardens, and the Miami-Dade Cultural Center, home of the Miami Main Library. Miami is also the home of the world's largest art exhibition, dubbed the "Olympics of Art", Art Basel Miami. The event is held annually in December, and attracts thousands of visitors from around the world.
Music
thumb|right|The city is a major music production city and attracts many annual music festivals, such as Ultra Music Festival
Miami music is varied. Cubans brought the conga and rumba, while Haitians and others from the French West Indies brought kompa and zouk from their homelands, quickly popularizing them in American culture. Dominicans brought bachata and merengue, Colombians brought vallenato and cumbia, and Brazilians brought samba. West Indians and other Caribbean people have brought reggae, soca, calypso, and steel pan to the area as well.
In the early 1970s, the Miami disco sound came to life with TK Records, featuring the music of KC and the Sunshine Band, with such hits as "Get Down Tonight", "(Shake, Shake, Shake) Shake Your Booty" and "That's the Way (I Like It)", and the Latin-American disco group Foxy, with their hit singles "Get Off" and "Hot Number". Miami-area natives George McCrae and Teri DeSario were also popular music artists during the 1970s disco era. The Bee Gees moved to Miami in 1975 and have lived there ever since. The Miami-influenced Gloria Estefan and the Miami Sound Machine hit the popular music scene with their Cuban-oriented sound and had hits in the 1980s with "Conga" and "Bad Boys".
Miami is also considered a "hot spot" for dance music. Freestyle, a style of dance music popular in the 1980s and '90s, was heavily influenced by electro, hip-hop, and disco, and many popular freestyle acts, such as Pretty Tony, Debbie Deb, Stevie B, and Exposé, originated in Miami. Indie/folk acts Cat Power and Iron & Wine are based in the city,Interview: Cat Power. Pitchfork Media (November 13, 2006). Retrieved August 25, 2007. while alternative hip hop artist Sage Francis, electro artist Uffie, and the electroclash duo Avenue D were born in Miami, but are musically based elsewhere. The ska punk band Against All Authority is from Miami, and rock/metal bands Nonpoint and Marilyn Manson each formed in neighboring Fort Lauderdale. Cuban-American recording artist Ana Cristina was born in Miami in 1985.
The 1980s and '90s also brought the genre of high energy Miami Bass to dance floors and car subwoofers throughout the country. Miami Bass spawned artists like 2 Live Crew (featuring Uncle Luke), 95 South, Tag Team, 69 Boyz, Quad City DJ's, and Freak Nasty. Examples of these songs are "Whoomp! (There It Is)" by Tag Team in 1993, "Tootsee Roll" by 69 Boyz in 1994, and "C'mon N' Ride It (The Train)" by the Quad City DJ's in 1996.
This was also a period of alternatives to nightclubs: the warehouse party, acid house, rave and outdoor festival scenes of the late 1980s and early 1990s were havens for the latest trends in electronic dance music, especially house and its ever-more hypnotic, synthetic offspring, techno and trance, in clubs like the infamous Warsaw Ballroom (better known as Warsaw) and The Mix, where David Padilla was the resident DJ for both, as well as on radio. The new sound fed back into mainstream clubs across the country. The scene in SoBe, along with a bustling secondhand market for electronic instruments and turntables, had a strong democratizing effect, offering amateur "bedroom" DJs the opportunity to become proficient and popular as both music players and producers, regardless of the whims of the professional music and club industries. Notable DJs from this scene include John Benitez (better known as Jellybean Benitez), Danny Tenaglia, and David Padilla.
Miami is also home to a vibrant techno and dance scene and hosts the Winter Music Conference, the largest dance-music event in the world, as well as the Ultra Music Festival and many other electronic music celebrations and festivals.
There are also several rap and hip hop artists out of Miami. They include Trick Daddy, Trina, Pitbull, Pretty Ricky, and the Miami Bass group 2 Live Crew.
Cuisine
thumb|250px|right|A cortadito is a popular espresso beverage found in cafeterias around Miami. It is particularly popular for breakfast or in the afternoon with a pastelito.
The cuisine of Miami is a reflection of its diverse population, with a heavy influence especially from Caribbean cuisine and from Latin American cuisine. By combining the two with American cuisine, it has spawned a unique South Florida style of cooking known as Floribbean cuisine. Floribbean cuisine is widely available throughout Miami and South Florida, and can be found in restaurant chains such as Pollo Tropical.
Cuban immigrants in the 1960s brought the Cuban sandwich, medianoche, Cuban espresso, and croquetas, all of which have grown in popularity among Miamians of all backgrounds and have become symbols of the city's varied cuisine. Today, these are part of the local culture, and can be found throughout the city in window cafés, particularly outside supermarkets and restaurants.Cuban Sandwich, History of Cuban Sandwich, History of Cubano Sandwich. Whatscookingamerica.net. Retrieved on October 8, 2012.Local Cuisine in Miami at Frommer's. Frommers.com. Retrieved on October 8, 2012. Restaurants such as Versailles in Little Havana are landmark Miami eateries. Located on the Atlantic Ocean, and with a long history as a seaport, Miami is also known for its seafood, with many seafood restaurants located along the Miami River and in and around Biscayne Bay.Miami Cuisine: Seafood Restaurants Guide – Miami Dining Guide. Miaminewtimes.com. Retrieved on October 8, 2012. Miami is also the home of restaurant chains such as Burger King, Tony Roma's and Benihana.
Dialect
The Miami area has a unique dialect (commonly called the "Miami accent") which is widely spoken. The dialect developed among second- or third-generation Hispanics, including Cuban-Americans, whose first language was English, though some non-Hispanic white, black, and other residents born and raised in the Miami area tend to adopt it as well. It is based on a fairly standard American accent but with some changes very similar to dialects in the Mid-Atlantic (especially the New York area dialect, Northern New Jersey English, and New York Latino English). Unlike Virginia Piedmont, Coastal Southern American, and Northeast American dialects and the Florida Cracker dialect (see section below), the "Miami accent" is rhotic; it also incorporates a rhythm and pronunciation heavily influenced by Spanish (wherein rhythm is syllable-timed).
However, this is a native dialect of English, not learner English or interlanguage; it is possible to differentiate this variety from an interlanguage spoken by second-language speakers in that the "Miami accent" does not generally display the following features: there is no addition of an epenthetic vowel before initial consonant clusters beginning with /s/, speakers do not confuse /dʒ/ with /j/ (e.g., Yale with jail), and /r/ and /rr/ are pronounced as an alveolar approximant [ɹ] rather than the alveolar tap [ɾ] or alveolar trill [r] of Spanish.English in the 305 has its own distinct Miami sound – Lifestyle – MiamiHerald.com
In popular culture
thumb|upright=2|View of the "Moon over Miami", a famous phrase that has inspired many pop culture items, including a movie, TV series, and song.
The video game Scarface: The World Is Yours takes place in Miami. The game is based on and is a quasi-sequel to the 1983 motion picture Scarface starring Al Pacino reprising his role as Tony Montana, with André Sogliuzzo providing Montana's voice. The game begins in the film's final scene, with Tony Montana's mansion being raided by Alejandro Sosa's (Robert Davi) assassins.
Sports
right|thumb|American Airlines Arena, home of the Miami Heat
thumb|Miami Jai Alai fronton, known as "The Yankee Stadium of Jai Alai"
Miami's four main sports teams are the Miami Dolphins of the National Football League, the Miami Heat of the National Basketball Association, the Miami Marlins of Major League Baseball, and the Florida Panthers of the National Hockey League. As well as having all four major professional teams, Miami is also home to the Major League Soccer expansion team led by David Beckham, the Sony Ericsson Open for professional tennis, numerous greyhound racing tracks, marinas, jai alai venues, and golf courses. The city's streets have hosted professional auto races, the Miami Indy Challenge and later the Grand Prix Americas. The Homestead-Miami Speedway oval hosts NASCAR national races.
The Heat and the Marlins play within Miami's city limits. The Heat play at the American Airlines Arena in Downtown Miami. The Miami Marlins home ballpark is Marlins Park, located in Little Havana on the site of the old Orange Bowl stadium.
The Miami Dolphins play at Hard Rock Stadium in suburban Miami Gardens. The Florida Panthers play in nearby Sunrise at the BB&T Center. Miami FC of the North American Soccer League, the second tier of the American soccer pyramid, play at FIU Stadium, and the Fort Lauderdale Strikers play at Lockhart Stadium in nearby Fort Lauderdale, also in the North American Soccer League. Miami is also home to Paso Fino horse competitions, which are held at Tropical Park Equestrian Center.
The Orange Bowl, a member of the Bowl Championship Series, hosts its college football championship games at Hard Rock Stadium. The stadium has also hosted the Super Bowl; the Miami metro area has hosted the game a total of ten times (five Super Bowls at the current Hard Rock Stadium, including Super Bowl XLI, and five at the Miami Orange Bowl), tying New Orleans for the most games.
Miami is also the home of many college sports teams. The two largest are the University of Miami Hurricanes, whose football team plays at Hard Rock Stadium, and Florida International University Panthers whose football team plays at FIU Stadium.
The following table shows the Miami area major professional teams and Division I teams with an average attendance of more than 10,000:
Major professional and D-I college teams (average attendance > 10,000); club – sport – league – venue (capacity) – attendance – league championships:
Miami Dolphins – Football – National Football League – Hard Rock Stadium (80,120) – 70,035 – Super Bowl (2): 1972, 1973
Miami Hurricanes – Football – NCAA D-I (ACC) – Hard Rock Stadium (80,120) – 53,837 – National titles (5): 1983, 1987, 1989, 1991, 2001
Miami Marlins – Baseball – Major League Baseball – Marlins Park (36,742) – 21,386 – World Series (2): 1997, 2003
Miami Heat – Basketball – National Basketball Association – American Airlines Arena (19,600) – 19,710 – NBA Finals (3): 2006, 2012, 2013
FIU Panthers – Football – NCAA D-I (Conference USA) – FIU Stadium (23,500) – 15,453 – none
Florida Panthers – Hockey – National Hockey League – BB&T Center (19,250) – 10,250 – none
Miami MLS team – Soccer – Major League Soccer – Miami MLS Stadium – n/a – none
Parks
thumb|left|The Barnacle Historic State Park, built in 1891 in Miami's Coconut Grove neighborhood.
Miami's tropical weather allows for year-round outdoor activities. The city has numerous marinas, rivers, bays, canals, and the Atlantic Ocean, which make boating, sailing, and fishing popular outdoor activities. Biscayne Bay has numerous coral reefs, which make snorkeling and scuba diving popular. There are over 80 parks and gardens in the city. The largest and most popular parks are Bayfront Park and Bicentennial Park (located in the heart of Downtown and the location of the American Airlines Arena and Bayside Marketplace), Tropical Park, Peacock Park, Morningside Park, Virginia Key, and Watson Island.
Other popular cultural destinations in or near Miami include Zoo Miami, Jungle Island, Miami Seaquarium, Monkey Jungle, Coral Castle, St. Bernard de Clairvaux Church, Charles Deering Estate, Fairchild Botanical Gardens, and Key Biscayne.
Government
thumb|Miami City Hall at Dinner Key in Coconut Grove. The city's primary administrative offices are located here.
thumb|Miami-Dade County Courthouse
The government of the City of Miami (proper) uses the mayor-commissioner type of system. The city commission consists of five commissioners who are elected from single-member districts. The city commission constitutes the governing body with powers to pass ordinances, adopt regulations, and exercise all powers conferred upon the city in the city charter. The mayor is elected at large and appoints a city manager. The City of Miami is governed by Mayor Tomás Regalado and five city commissioners who oversee the five districts in the city. The commission's regular meetings are held at Miami City Hall, which is located at 3500 Pan American Drive on Dinner Key in the neighborhood of Coconut Grove.
City Commission
Tomás Regalado – Mayor of the City of Miami
Wifredo "Willy" Gort – Miami Commissioner, District 1
Allapattah and Grapeland Heights
Ken Russell – Miami Commissioner, District 2 (Vice-Chairman)
Brickell, Coconut Grove, Coral Way, Downtown Miami, Edgewater, Midtown Miami, Omni, Park West and the Upper Eastside
Frank Carollo – Miami Commissioner, District 3
Coral Way, Little Havana and The Roads
Francis Suárez – Miami Commissioner, District 4
Coral Way, Flagami and West Flagler
Keon Hardemon – Miami Commissioner, District 5 (Chairman)
Buena Vista, Design District, Liberty City, Little Haiti, Little River, Lummus Park, Overtown, Spring Garden and Wynwood
Daniel J. Alfonso – City Manager
Victoria Méndez – City Attorney
Todd B. Hannon – City Clerk
Education
Public schools
right|thumb|Miami Senior High School, Miami's oldest continuously used high school structure
thumb|Florida International University has the largest enrollment of any university in South Florida, and is one of the state's primary research universities.
Public schools in Miami are governed by Miami-Dade County Public Schools, which is the largest school district in Florida and the fourth-largest in the United States. As of September 2008 it has a student enrollment of 385,655 and over 392 schools and centers. The district is also the largest minority public school system in the country, with 60% of its students being of Hispanic origin, 28% Black or West Indian American, 10% White (non-Hispanic) and 2% non-white of other minorities.
Miami is home to some of the nation's best high schools, such as Design and Architecture High School, ranked the nation's best magnet school, MAST Academy, Coral Reef High School, ranked 20th-best public high school in the U.S., Miami Palmetto High School, and the New World School of the Arts. M-DCPS is also one of a few public school districts in the United States to offer optional bilingual education in Spanish, French, German, Haitian Creole, and Mandarin Chinese.
Private schools
Miami is home to several well-known Roman Catholic, Jewish and non-denominational private schools. The Archdiocese of Miami operates the city's Catholic private schools, which include: St. Hugh Catholic School, St. Agatha Catholic School, St. Theresa School, Immaculata-Lasalle High School, Monsignor Edward Pace High School, Archbishop Curley-Notre Dame High School, St. Brendan High School, amongst numerous other Catholic elementary and high schools.
Catholic preparatory schools operated by religious orders are Christopher Columbus High School and Belen Jesuit Preparatory School for boys and Carrollton School of the Sacred Heart and Our Lady of Lourdes Academy for girls.
Non-denominational private schools in Miami are Ransom Everglades, Gulliver Preparatory School, and Miami Country Day School. Other schools in the area include Samuel Scheck Hillel Community Day School, Dade Christian School, Palmer Trinity School, and Westminster Christian School.
Colleges and universities
thumb|right|Founded in 1925, the University of Miami is the oldest college in Florida south of Winter Park.
Miami has over 200,000 students enrolled in local colleges and universities, placing it seventh in the nation in per capita university enrollment. In 2010, the city's four largest colleges and universities (MDC, FIU, UM, and Barry) graduated 28,000 students.
Colleges and universities in and around Miami:
Barry University (private)
Carlos Albizu University (private)
Florida International University (FIU) (public)
Florida Memorial University (private)
Johnson and Wales University (private)
Keiser University (private)
Manchester Business School (satellite location, UK public)
Miami Culinary Institute (public)
Miami Dade College (public)
Miami International University of Art & Design (private)
Nova Southeastern University (private)
St. Thomas University (private)
Talmudic University (private)
University of Miami (private)
Overall, amongst Miamians 25 years and older, 67% had a high school diploma, and 22% had a bachelor's degree or higher.U.S. Census Bureau American FactFinder. Factfinder.census.gov. Retrieved on October 8, 2012.
In 2011, Miami was ranked as the sixth-most-read city in the U.S. with high book sales.Amazon Media Room: Press Releases. Phx.corporate-ir.net. Retrieved on October 8, 2012.
Professional training programs
Miami is also home to both for-profit and nonprofit organizations that offer a range of professional training and other, related educational programs. Per Scholas, for example is a nonprofit organization that offers free professional certification training directed towards successfully passing CompTIA A+ and Network+ certification exams as a route to securing jobs and building careers.
Media
thumb|Former headquarters of The Miami Herald
Miami has one of the largest television markets in the nation and the second largest in the state of Florida. The city also has several major newspapers, the main and largest being The Miami Herald; El Nuevo Herald is the largest Spanish-language newspaper. Together they are South Florida's principal papers. The papers left their longtime home in downtown Miami in 2013 and are now headquartered at the former home of U.S. Southern Command in Doral.
Other major newspapers include Miami Today, headquartered in Brickell, Miami New Times, headquartered in Midtown, Miami Sun Post, South Florida Business Journal, Miami Times, and Biscayne Boulevard Times. An additional Spanish-language newspaper, Diario Las Americas, also serves Miami. The Miami Herald is Miami's primary newspaper, with over a million readers, and was long headquartered at Herald Plaza in Downtown. Several student newspapers are also published by the local universities, including the oldest, the University of Miami's The Miami Hurricane, as well as Florida International University's The Beacon, Miami-Dade College's The Metropolis, and Barry University's The Buccaneer, amongst others. Many neighborhoods and neighboring areas also have their own local newspapers, such as the Aventura News, Coral Gables Tribune, Biscayne Bay Tribune, and the Palmetto Bay News.
A number of magazines circulate throughout the greater Miami area, including Miami Monthly, Southeast Florida's only city/regional; Ocean Drive, a hot-spot social scene glossy, and South Florida Business Leader.
Miami is also the headquarters and main production city of many of the world's largest television networks, record label companies, broadcasting companies and production facilities, such as Telemundo, TeleFutura, Galavisión, Mega TV, Univisión, Univision Communications, Inc., Universal Music Latin Entertainment, RCTV International and Sunbeam Television. In 2009, Univisión announced plans to build a new production studio in Miami, dubbed 'Univisión Studios'. Univisión Studios is currently headquartered in Miami, and will produce programming for all of Univisión Communications' television networks.
Miami is the twelfth largest radio market and the seventeenth largest television market in the United States. Television stations serving the Miami area include: WAMI (Telefutura), WBFS (My Network TV), WSFL (The CW), WFOR (CBS), WHFT (TBN), WLTV (Univision), WPLG (ABC), WPXM (Ion), WSCV (Telemundo), WSVN (Fox), WTVJ (NBC), WPBT (PBS), and WLRN (also PBS).
Transportation
Airports
Miami International Airport serves as the primary international airport of the Greater Miami Area. One of the busiest international airports in the world, Miami International Airport caters to over 35 million passengers a year. The airport is a major hub and the single largest international gateway for American Airlines. Miami International is the busiest airport in Florida, and is the United States' second-largest international port of entry for foreign air passengers behind New York's John F. Kennedy International Airport, and is the seventh-largest such gateway in the world. The airport's extensive international route network includes non-stop flights to over seventy international cities in North and South America, Europe, Asia, and the Middle East.
Alternatively, nearby Fort Lauderdale-Hollywood International Airport also serves commercial traffic in the Miami area."Southwest Airlines Cities." Southwest Airlines. Retrieved October 30, 2008. Opa-locka Airport in Opa-locka and Kendall-Tamiami Airport in an unincorporated area serve general aviation traffic in the Miami area.
PortMiami
thumb|right|The Royal Caribbean International headquarters at the Port of Miami.
Miami is home to one of the largest ports in the United States, PortMiami. It is the largest cruise ship port in the world, often called the "Cruise Capital of the World" and the "Cargo Gateway of the Americas". It has retained its status as the number-one cruise/passenger port in the world for well over a decade, accommodating the largest cruise ships and the major cruise lines. In 2007, the port served 3,787,410 passengers. Additionally, the port is one of the nation's busiest cargo ports, importing 7.8 million tons of cargo in 2007. Among North American ports, it ranks second only to the Port of South Louisiana in New Orleans in terms of cargo tonnage imported/exported from Latin America. The port has seven passenger terminals. China is the port's number-one import country, and Honduras is the number-one export country. Miami has the world's largest number of cruise line headquarters, home to Carnival Cruise Lines, Celebrity Cruises, Norwegian Cruise Line, Oceania Cruises, and Royal Caribbean International. In 2014, the Port of Miami Tunnel was completed and now serves PortMiami.
Public transportation
thumb|right|The Miami Metrorail is the city's rapid transit system and connects the city's central core with its outlying suburbs
thumb|right|Tri-Rail is Miami's commuter rail line, which runs north–south from Miami International Airport to the suburbs as far north as West Palm Beach.
Public transportation in Miami is operated by Miami-Dade Transit and SFRTA, and includes commuter rail (Tri-Rail), heavy-rail rapid transit (Metrorail), an elevated people mover (Metromover), and buses (Metrobus). Miami has Florida's highest transit ridership as about 17% of Miamians use transit on a daily basis.
Miami's heavy-rail rapid transit system, Metrorail, is an elevated system comprising two lines and 23 stations. Metrorail connects the urban western suburbs of Hialeah, Medley, and inner-city Miami with suburban The Roads, Coconut Grove, Coral Gables, South Miami and urban Kendall via the central business districts of Miami International Airport, the Civic Center, and Downtown. A free, elevated people mover, Metromover, operates 21 stations on three different lines in greater Downtown Miami, with a station at roughly every two blocks of Downtown and Brickell. Several expansion projects are being funded by a transit development sales tax surcharge throughout Miami-Dade County.
Tri-Rail, a commuter rail system operated by the South Florida Regional Transportation Authority (SFRTA), runs from Miami International Airport northward to West Palm Beach, making eighteen stops throughout Miami-Dade, Broward, and Palm Beach counties.
Construction is currently underway on the Miami Intermodal Center and Miami Central Station, a massive transportation hub servicing Metrorail, Amtrak, Tri-Rail, Metrobus, Greyhound Lines, taxis, rental cars, MIA Mover, private automobiles, bicycles and pedestrians adjacent to Miami International Airport. The Miami Intermodal Center is expected to be completed by winter 2011, and will serve over 150,000 commuters and travelers in the Miami area. Phase I of Miami Central Station is scheduled to begin service in the spring of 2012, and Phase II in 2013.
Two new light rail systems, Baylink and the Miami Streetcar, have been proposed and are currently in the planning stage. BayLink would connect Downtown with South Beach, and the Miami Streetcar would connect Downtown with Midtown.
Rail
Miami is the southern terminus of Amtrak's Atlantic Coast services, running two lines, the Silver Meteor and the Silver Star, both terminating in New York City. The Miami Amtrak Station is located in the suburb of Hialeah near the Tri-Rail/Metrorail Station on NW 79 St and NW 38 Ave. Current construction of the Miami Central Station will move all Amtrak operations from its current out-of-the-way location to a centralized location with Metrorail, MIA Mover, Tri-Rail, Miami International Airport, and the Miami Intermodal Center all within the same station closer to Downtown. The station was expected to be completed by 2012, but experienced several delays and was later expected to be completed in late 2014, again pushed back to early 2015.
Florida High Speed Rail was a proposed government-backed high-speed rail system that would have connected Miami, Orlando, and Tampa. The first phase was planned to connect Orlando and Tampa and was offered federal funding, but it was turned down by Governor Rick Scott in 2011. The second phase of the line was envisioned to connect to Miami. By 2014, a private project known as All Aboard Florida, by a company affiliated with the historic Florida East Coast Railway, began construction of a higher-speed rail line in South Florida that is planned to eventually terminate at Orlando International Airport.
Road
upright=2|thumb|The Venetian Causeway (left) and MacArthur Causeway (right) connect Downtown and South Beach, Miami Beach.
thumb|State Road 886 (Port Boulevard) connects downtown and PortMiami by bridge over Biscayne Bay.
Miami's road system is based along the numerical "Miami Grid", where Flagler Street forms the east–west baseline and Miami Avenue forms the north–south meridian. The corner of Flagler Street and Miami Avenue is in the middle of Downtown in front of the Downtown Macy's (formerly the Burdine's headquarters). The Miami grid is primarily numerical, so that, for example, all street addresses north of Flagler Street and west of Miami Avenue have "NW" in their address. Because its point of origin is in Downtown, which is close to the coast, the "NW" and "SW" quadrants are much larger than the "SE" and "NE" quadrants. Many roads, especially major ones, are also named (e.g., Tamiami Trail/SW 8th St), although, with exceptions, the number is in more common usage among locals.
With few exceptions, within this grid north–south roads are designated as Courts, Roads, Avenues or Places (often remembered by their acronym), while east–west roads are Streets, Terraces, Drives or occasionally Ways. Major roads in each direction are located at one-mile intervals. There are 16 blocks to each mile on north–south avenues, and 10 blocks to each mile on east–west streets.

Major north–south avenues generally end in "7" – e.g., 17th, 27th, 37th/Douglas Aves., 57th/Red Rd., 67th/Ludlam, 87th/Galloway, etc., all the way west beyond 177th/Krome Avenue. (One prominent exception is 42nd Avenue, LeJeune Road, located at the half-mile point instead.) Major east–west streets to the south of downtown are multiples of 16, though the beginning point of this system is at SW 8th St, one half mile south of Flagler ("zeroth") Street. Thus, major streets are at 8th St. + 16 = 24th St./Coral Way, + 16 = 40th St./Bird, + 16 = 56th/Miller, + 16 = 72nd/Sunset, + 16 = 88th/N. Kendall, + 16 = 104th (originally S. Kendall), + 16 = 120th/Montgomery, + 16 = 136th/Howard, + 16 = 152nd/Coral Reef, + 16 = 168th/Richmond, + 16 = 184th/Eureka, + 16 = 200th/Quail Roost, + 16 = 216th/Hainlin Mill, + 16 = 232nd/Silver Palm, + 16 = 248th/Coconut Palm, etc., well into the 300s.

Within the grid, odd-numbered addresses are generally on the north or east side, and even-numbered addresses are on the south or west side. This makes even unfamiliar addresses and distances easy to estimate: if one must travel from, say, 1709 SW 8th St to 24832 SW 157th Avenue, one knows it will be 140 blocks (157 − 17), or 14 miles, to the west and 240 blocks (248 − 8), or 15 miles, to the south, and that the even-numbered destination will be on the west side of the avenue. Remarkably, even Miami natives are often unaware of this pattern.
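The grid arithmetic described above can be expressed compactly in code. The following is a minimal, hypothetical sketch (in Python, not an official city tool) that assumes only the simplified rules given in this section: building numbers encode the nearest cross street or avenue in their leading digits, with 10 avenue-blocks per mile east–west and 16 street-blocks per mile north–south; the helper names are invented for illustration.

```python
# Hypothetical illustration of the simplified Miami Grid arithmetic described above.
# Assumptions (from the text): 10 avenue-blocks per mile east-west, 16 street-blocks
# per mile north-south, and building numbers whose leading digits give the nearest
# cross street/avenue (e.g., 1709 -> 17th, 24832 -> 248th).

def cross_number(building_number: int) -> int:
    """Return the cross street/avenue implied by a building number (drop the last two digits)."""
    return building_number // 100

def grid_distance(street_a: int, avenue_a: int, street_b: int, avenue_b: int):
    """Blocks and approximate miles between two grid positions given as (street, avenue)."""
    blocks_ns = abs(street_b - street_a)   # north-south blocks (street numbers)
    blocks_ew = abs(avenue_b - avenue_a)   # east-west blocks (avenue numbers)
    return blocks_ns, blocks_ns / 16, blocks_ew, blocks_ew / 10

# Example from the text: 1709 SW 8th St (near 17th Ave) to 24832 SW 157th Ave (near 248th St)
origin = (8, cross_number(1709))           # (street, avenue) = (8, 17)
destination = (cross_number(24832), 157)   # (street, avenue) = (248, 157)
ns_blocks, ns_miles, ew_blocks, ew_miles = grid_distance(*origin, *destination)
print(f"{ew_blocks} blocks / {ew_miles:.0f} miles west, {ns_blocks} blocks / {ns_miles:.0f} miles south")
# -> 140 blocks / 14 miles west, 240 blocks / 15 miles south
```

Running the sketch reproduces the hand calculation above: 140 blocks (14 miles) west and 240 blocks (15 miles) south.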
All streets and avenues in Miami-Dade County follow the Miami Grid, with a few exceptions, most notably in Coral Gables, Hialeah, Coconut Grove and Miami Beach. One neighborhood, The Roads, is so named because its streets run off the Miami Grid at a 45-degree angle, and therefore are all named roads.
Miami-Dade County is served by four Interstate Highways (I-75, I-95, I-195, I-395) and several U.S. Highways including U.S. Route 1, U.S. Route 27, U.S. Route 41, and U.S. Route 441.
Some of the major Florida State Roads (and their common names) serving Miami are:
SR 112 (Airport Expressway): Interstate 95 to MIA
Homestead Extension of Florida's Turnpike (SR 821): Florida's Turnpike mainline (SR 91)/Miami Gardens to U.S. Route 1/Florida City
SR 826 (Palmetto Expressway): Golden Glades Interchange to U.S. Route 1/Pinecrest
SR 836 (Dolphin Expressway): Downtown to SW 137th Ave via MIA
SR 874 (Don Shula Expressway): 826/Bird Road to Homestead Extension of Florida's Turnpike/Kendall
SR 878 (Snapper Creek Expressway): SR 874/Kendall to U.S. Route 1/Pinecrest & South Miami
SR 924 (Gratigny Parkway) Miami Lakes to Opa-locka
Miami causeways (name – termini – year built):
Rickenbacker Causeway – Brickell and Key Biscayne – 1947
Venetian Causeway – Downtown and South Beach – 1912–1925
MacArthur Causeway – Downtown and South Beach – 1920
Julia Tuttle Causeway – Wynwood/Edgewater and Miami Beach – 1959
79th Street Causeway – Upper East Side and North Beach – 1929
Broad Causeway – North Miami and Bal Harbour – 1951
Miami has six major causeways that span over Biscayne Bay connecting the western mainland, with the eastern barrier islands along the Atlantic Ocean. The Rickenbacker Causeway is the southernmost causeway and connects Brickell to Virginia Key and Key Biscayne. The Venetian Causeway and MacArthur Causeway connect Downtown with South Beach. The Julia Tuttle Causeway connects Midtown and Miami Beach. The 79th Street Causeway connects the Upper East Side with North Beach. The northernmost causeway, the Broad Causeway, is the smallest of Miami's six causeways, and connects North Miami with Bal Harbour.
In 2007, Miami was identified as having the rudest drivers in the United States, the second year in a row to have been cited, in a poll commissioned by automobile club AutoVantage. Miami is also consistently ranked as one of the most dangerous cities in the United States for pedestrians.
Bicycling
In recent years the city government, under Mayor Manny Diaz, has taken an ambitious stance in support of bicycling in Miami for both recreation and commuting. Every month, the city hosts "Bike Miami", where major streets in Downtown and Brickell are closed to automobiles, but left open for pedestrians and bicyclists. The event began in November 2008, and has doubled in popularity from 1,500 participants to about 3,000 in the October 2009 Bike Miami. This is the longest-running such event in the US. In October 2009, the city also approved an extensive 20-year plan for bike routes and paths around the city. The city has begun construction of bike routes as of late 2009, and ordinances requiring bike parking in all future construction in the city became mandatory as of October 2009.
In 2010, Miami was ranked as the 44th-most bike-friendly city in the US according to Bicycling Magazine.
Walkability
A 2011 study by Walk Score ranked Miami the eighth-most walkable of the fifty largest cities in the United States, but a 2013 survey by Travel + Leisure ranked Miami 34th for "public transportation and pedestrian friendliness."
Notable people
International relations
Twin and sister cities
Bogotá, Colombia (since 1971)
Buenos Aires, Argentina (since 1979)
Kagoshima, Japan (since 1990)
Lima, Peru (since 1977)
Madrid, Spain (since 2014)
Port-au-Prince, Haiti (since 1991)
Qingdao, China (since 2005)
Salvador do Bahia, Brazil (since 2006)
Santiago, Chile (since 1986)
Santo Domingo, Dominican Republic (since 1987)
Cooperation agreements
Lisbon, Portugal
See also
Miami Fire Department
Miami Police Department
Miami port tunnel
National Register of Historic Places listings in Miami, Florida
Notes
References
Further reading
Elizabeth M. Aranda, Sallie Hughes, and Elena Sabogal, Making a Life in Multiethnic Miami: Immigration and the Rise of a Global City. Boulder, Colorado: Renner, 2014.
External links
City of Miami – Official Site
City of Miami Government
Greater Miami Convention and Visitors Bureau
U.S. Census Bureau – Census 2000 Demographic Profile Highlights for City of Miami
Miami-Dade Municipalities
Category:1825 establishments in Florida Territory
Category:Bermuda Triangle
Category:Cities in Florida
Category:Cities in Miami-Dade County, Florida
Category:Cities in Miami metropolitan area
Category:County seats in Florida
Category:Populated coastal places in Florida on the Atlantic Ocean
Category:Populated places established in 1825
Category:Port cities and towns of the United States Atlantic coast
Category:Port cities in Florida
Category:Seaside resorts in Florida
University of Kansas

The University of Kansas, often referred to as KU or Kansas, is a public research university in the U.S. state of Kansas. The main campus in Lawrence, one of the largest college towns in Kansas, is on Mount Oread, the highest elevation in Lawrence. Two branch campuses are in the Kansas City metropolitan area: the Edwards Campus in Overland Park, and the university's medical school and hospital in Kansas City. There are also educational and research sites in Parsons, Topeka, Garden City, Hays, and Leavenworth, and branches of the medical school in Wichita and Salina. The university is one of the 62 members of the Association of American Universities.
Founded March 21, 1865, the university was opened in 1866, under a charter granted by the Kansas State Legislature in 1864 following enabling legislation passed in 1863 under the Kansas State Constitution, adopted two years after the 1861 admission of the former Kansas Territory as the 34th state into the Union following an internal civil war known as "Bleeding Kansas" during the 1850s.
Enrollment at the Lawrence and Edwards campuses was 24,708 students in fall 2015; an additional 3,383 students were enrolled at the KU Medical Centerhttp://oirp.ku.edu/sites/oirp.ku.edu/files/files/Profiles/2016/4-001.pdf for an enrollment of 28,091 students across the three campuses. The university overall employed 2,814 faculty members in fall 2015.
History
On February 20, 1863, Kansas Governor Thomas Carney signed into law a bill creating the state university in Lawrence. The law was conditioned upon a gift from Lawrence of a $15,000 endowment fund and a site for the university, in or near the town, of not less than forty acres (16 ha) of land. If Lawrence failed to meet these conditions, Emporia instead of Lawrence would get the university.
The site selected for the university was a hill known as Mount Oread, which was owned by former Kansas Governor Charles L. Robinson. Robinson and his wife Sara bestowed the site to the State of Kansas in exchange for land elsewhere. The philanthropist Amos Adams Lawrence donated $10,000 of the necessary endowment fund, and the citizens of Lawrence raised the remaining cash by issuing notes backed by Governor Carney. On November 2, 1863, Governor Carney announced that Lawrence had met the conditions to get the state university, and the following year the university was officially organized.
The school's Board of Regents held its first meeting in March 1865, which is the event that KU dates its founding from. Work on the first college building began later that year. The university opened for classes on September 12, 1866, and the first class graduated in 1873.
During World War II, Kansas was one of 131 colleges and universities nationally that took part in the V-12 Navy College Training Program which offered students a path to a Navy commission.
Famous landmarks and structures
KU is home to the Robert J. Dole Institute of Politics, the Beach Center on Disability, Lied Center of Kansas and radio stations KJHK, 90.7 FM, and KANU, 91.5 FM. The university is host to several museums including the University of Kansas Natural History Museum and the Spencer Museum of Art. The libraries of the University include Watson Library, Kenneth Spencer Research Library, the Murphy Art and Architecture Library, Thomas Gorton Music & Dance Library and Anschutz Library, which commemorates the businessman Philip Anschutz, an alumnus of the University.
thumb|Watson Library - Main Branch
Academics
The University of Kansas is a large, state-sponsored university, with five campuses. KU features the College of Liberal Arts & Sciences, which includes the School of the Arts and the School of Public Affairs & Administration; and the schools of Architecture, Design & Planning; Business; Education; Engineering; Health Professions; Journalism & Mass Communications; Law; Medicine; Music; Nursing; Pharmacy; and Social Welfare. The university offers more than 345 degree programs.
In its 2017 list, U.S. News & World Report ranked KU as tied for 118th place among National Universities and 56th place among public universities.
thumb|right|World War II Memorial Campanile
The city management and urban policy program was ranked first in the nation, and the special education program second, by U.S. News & World Report's 2016 rankings. USN&WR also ranked several programs in the top 25 among U.S. universities.
School of Architecture, Design, and Planning (S.A.D.P.)
The University of Kansas School of Architecture, Design, and Planning (SADP), with its main building being Marvin Hall, traces its architectural roots to the creation of the architectural engineering degree program in KU's School of Engineering in 1912. The Bachelor of Architecture degree was added in 1920. In 1969, the School of Architecture and Urban Design (SAUD) was formed with three programs: architecture, architectural engineering, and urban planning. In 2001 architectural engineering merged with civil and environmental engineering. The design programs from the discontinued School of Fine Arts were merged into the school in 2009 forming the current School of Architecture, Design, and Planning.
According to the journal DesignIntelligence, which annually publishes "America's Best Architecture and Design Schools," the School of Architecture and Urban Design at the University of Kansas was named the best in the Midwest and ranked 11th among all undergraduate architecture programs in the U.S. in 2012.
thumb|200px|right|Chi Omega Fountain
School of Business
The University of Kansas School of Business is a public business school on the main campus of the University of Kansas in Lawrence, Kansas. The KU School of Business was founded in 1924 and has more than 80 faculty members and approximately 1500 students.
Named one of the best business schools in the Midwest by Princeton Review, the KU School of Business has been continually accredited by the Association to Advance Collegiate Schools of Business (AACSB) for its undergraduate and graduate programs in business and accounting.
thumb|Lippincott Hall - Offices of Study Abroad & The Wilcox Museum
School of Law
The University of Kansas School of Law, founded in 1878, was the top law school in the state of Kansas, and tied for 65th nationally, according to the 2016 U.S. News & World Report "U.S. News Best Colleges Rankings." Classes are held in Green Hall at W 15th St and Burdick Dr, which is named after former dean James Green.
School of Engineering
The KU School of Engineering is an ABET accredited, public engineering school located on the main campus. The School of Engineering was officially founded in 1891, although engineering degrees were awarded as early as 1873.
In the U.S. News & World Report's "America’s Best Colleges" 2016 issue, KU’s School of Engineering was ranked tied for 90th among national universities.
Notable alumni include: Alan Mulally (BS/MS), former President and CEO of Ford Motor Company, Lou Montulli, co-founder of Netscape and author of the Lynx web browser, Brian McClendon (BSEE 1986), VP of Engineering at Google, Charles E. Spahr (1934), former CEO of Standard Oil of Ohio.
School of Journalism and Mass Communications
The William Allen White School of Journalism and Mass Communications is recognized for its ability to prepare students to work in a variety of media. The school offers two tracks of study: 1) News and Information, and 2) Strategic Communication. This professional school teaches students reporting for print, online and broadcast, strategic campaigning for PR and advertising, photojournalism and video reporting and editing. The J-School's students maintain various publications on campus, including The University Daily Kansan, Jayplay magazine, and KUJH TV. In 2008, the Fiske Guide to Colleges praised the KU J-School for its strength. In 2010, the School of Journalism and Mass Communications placed second at the prestigious Hearst Foundation national writing competition.
thumb|The Natural History Museum
Medical Center
The University of Kansas Medical Center features three schools: the School of Medicine, School of Nursing, and School of Health Professions that each has its own programs of graduate study. As of the Fall 2013 semester, there were 3,349 students enrolled at KU Med. The Medical Center also offers four year instruction at the Wichita campus, and features a medical school campus in Salina, Kansas devoted to rural health care.
The university-affiliated independent University of Kansas Hospital is co-located at the University of Kansas Medical Center.
The Edwards Campus, Overland Park
KU's Edwards Campus is in Overland Park, Kansas. Established in 1993, its goal is to provide adults with the opportunity to complete undergraduate, graduate and certificate programs. About 2,000 students attend the Edwards Campus, with an average age of 31. Programs available at the Edwards Campus include business administration, education, engineering, social work and more.
Tuition
Tuition at KU is 13 percent below the national average, according to the College Board, and the University remains a best buy in the region.
Beginning in the 2007–2008 academic year, first-time freshmen at KU pay a fixed tuition rate for 48 months according to the Four-Year Tuition Compact passed by the Kansas Board of Regents. For the 2014–15 academic year, tuition was $318 per credit hour for in-state freshmen and $828 for out-of-state freshmen. For transfer students, who do not take part in the compact, 2014–15 per-credit-hour tuition was $295 for in-state undergraduates and $785 for out-of-state undergraduates, subject to annual increases. Students enrolled in 6 or more credit hours also paid an annual required campus fee of $888. The schools of architecture, music, arts, business, education, engineering, journalism, law, pharmacy, and social welfare charge additional fees.
Computing innovations
KU's School of Business launched interdisciplinary management science graduate studies in operations research during Fall Semester 1965. The program provided the foundation for decision science applications supporting NASA Project Apollo Command Capsule Recovery Operations.
KU's academic computing department was an active participant in setting up the Internet and is the developer of the early Lynx text-based web browser. Lynx provided hypertext browsing and navigation prior to Tim Berners-Lee's invention of HTTP and HTML.
Student activities
Athletics
200px|thumb|left|Kansas Jayhawks Athletics wordmark
The school's sports teams, wearing crimson and royal blue, are called the Kansas Jayhawks. They participate in the NCAA's Division I and in the Big 12 Conference. KU has won thirteen National Championships: five in men's basketball (two Helms Foundation championships and three NCAA championships), three in men's indoor track and field, three in men's outdoor track and field, one in men's cross country and one in women's outdoor track and field. The home course for KU Cross Country is Rim Rock Farm. Their most recent championship came on June 8, 2013, when the KU women's track and field team won the NCAA outdoor title in Eugene, Oregon, becoming the first University of Kansas women's team to win a national title.Women's Track and Field team Championship is 1st KU women's championship
thumb|Memorial Stadium
KU football dates from 1890, and has played in the Orange Bowl three times: 1948, 1968, and 2008. They are currently coached by David Beaty, who was hired in 2014. In 2008, under the leadership of Mark Mangino, the #7 Jayhawks emerged victorious in their first BCS bowl game, the FedEx Orange Bowl, with a 24–21 victory over the #3 Virginia Tech Hokies. This capstone victory marked the end of the most successful season in school history, in which the Jayhawks went 12–1 (.923). The team plays at Memorial Stadium, which recently underwent a $31 million renovation that added the Anderson Family Football Complex, a football practice facility adjacent to the stadium complete with an indoor partial practice field, weight room, and new locker room.
The KU men's basketball team has fielded a team every year since 1898. The Jayhawks are a perennial national contender, coached by Bill Self. The team has won five national titles, including three NCAA tournament championships in 1952, 1988, and 2008. The basketball program is currently the second winningest program in college basketball history with an overall record of 2,070–806 through the 2011–12 season. The team plays at Allen Fieldhouse. Perhaps its best recognized player was Wilt Chamberlain, who played in the 1950s.
Kansas has counted among its coaches Dr. James Naismith (the inventor of basketball and only coach in Kansas history to have a losing record), Basketball Hall of Fame inductee Phog Allen ("the Father of basketball coaching"), Basketball Hall of Fame inductee Roy Williams of the University of North Carolina at Chapel Hill, and Basketball Hall of Fame inductee and former NBA Champion Detroit Pistons coach Larry Brown. In addition, legendary University of Kentucky coach and Basketball Hall of Fame inductee Adolph Rupp played for KU's 1922 and 1923 Helms National Championship teams, and NCAA Hall of Fame inductee and University of North Carolina Coach Dean Smith played for KU's 1952 NCAA Championship team. Both Rupp and Smith played under Phog Allen. Allen also coached Hall of Fame coaches Dutch Lonborg and Ralph Miller. Allen founded the National Association of Basketball Coaches (NABC), which started what is now the NCAA Tournament. The Tournament began in 1939 under the NABC and the next year was handed off to the newly formed NCAA.
Notable non-varsity sports include rugby. The rugby team owns its private facility and internationally tours every two years.
Sheahon Zenger was introduced as KU's new athletic director in January 2011. Under former athletic director Lew Perkins, the department's budget increased from $27.2 million in 2003 (10th in the conference) to currently over $50 million thanks in large part to money raised from a new priority seating policy at Allen Fieldhouse, a new $26.67 million eight-year contract with Adidas replacing an existing contract with Nike, and a new $40.2 million seven-year contract with ESPN Regional Television. The additional funds brought improvements to the university, including:King, Jason. "Hawk Market", The Kansas City Star (June 11, 2006), pp. C1, C14.
The Booth Family Hall of Athletics addition to Allen Fieldhouse
Brand new offices and lounges for the women's basketball program
Brand new scoreboard and batting facility for the baseball field
A new $35 million football facility adjacent to Memorial Stadium
The $8 million Anderson Family Strength Center
thumb|Fraser Hall - KU's Landmark Academic Building
Debate teams
The University of Kansas has had more teams (70) compete in the National Debate Tournament than any other university. Kansas has won the tournament 5 times (1954, 1970, 1976, 1983, and 2009) and had 14 teams make it to the final four. Kansas trails only Northwestern (13), Dartmouth (6), and Harvard (6) for most tournaments won. Kansas also won the 1981–82 Copeland Award.
Anthems
Songs commonly played and sung at events such as commencement, convocation, and athletic games include "I’m a Jayhawk", "Fighting Jayhawk", "Kansas Song", "Sunflower Song", "Crimson and the Blue", "Red and Blue", the "Rock Chalk, Jayhawk" chant, "Home on the Range" and "Stand Up and Cheer."
Media
The university's newspaper is University Daily Kansan, which placed first in the Intercollegiate Writing Competition of the prestigious William Randolph Hearst Writing Foundation competition, often called "The Pulitzers of College Journalism" in 2007. In Winter 2008, a group of students created KUpedia, a wiki about all things KU. They received student funding for operations in 2008–09. The KU Department of English publishes the Coal City Review, an annual literary journal of prose, poetry, reviews and illustrations. The Review typically features the work of many writers, but periodically spotlights one author, as in the case of 2006 Nelson Poetry Book Award-winner Voyeur Poems by Matthew Porubsky."Poet well-versed in voyeurism" ~ Lawrence.com, December 2, 2006
The University Daily Kansan operates outside of the university's William Allen White School of Journalism and reaches at least 30,000 daily readers through its print and online publications.University Daily Kansan
thumb|The William Allen White School of Journalism
The university houses the following public broadcasting stations: KJHK, a student-run campus radio station, KUJH-LP, an independent station that primarily broadcasts public affairs programs, and KANU, the NPR-affiliated radio station. Kansas Public Radio station KANU was one of the nation's first public radio stations. KJHK, the campus radio station, has roots dating back to 1952 and is completely run by students.
Housing
thumb|Potter Lake, with Joseph R. Pearson Hall in the background
KU Student Housing (residence – year opened – students – accommodations):
Marie S. McCarthy Hall – 2015 – 38 – men only: upperclassmen/non-traditional students
Oswald Hall – 2015 – 350 – freshmen only
Self Hall – 2015 – 350 – freshmen only
Battenfeld Hall – 1940 – 50 – men only
Corbin Hall – 1923 – 900 – women only
Douthart Hall – 1954 – 50 – women only
Ellsworth Hall – 1963 – 580 – all students
Gertrude Sellards Pearson Hall – 1955 – 380 – all students
Grace Pearson Hall – 1955 – 50 – men only
Guest House – n/a – 2 – visiting guests
Hashinger Hall – 1962 – 370 – all students
Jayhawker Towers – n/a – 200 – non-traditional, upperclassmen, transfer students
K.K. Amini Hall – 1992 – 50 – men only
Krehbiel Hall – 2008 – 50 – men only
Lewis Hall – 1962 – 260 – all students
Margret Amini Hall – 2000 – 50 – women only
McCollum Hall – 1965 – closed 2015; razed November 25, 2015
Miller Hall – 1937 – 50 – women only
Oliver Hall – 1966 – 660 – all students
Pearson Hall – 1952 – 47 – men only
Rieger Hall – 2005 – 50 – women only
Sellards Hall – 1952 – 47 – women only
Stephenson Hall – 1952 – 50 – men only
Stouffer Place – n/a – 283 – graduate students, couples, non-traditional
Templin Hall – 1959 – 280 – all students
Transition Housing – n/a – 19 – KU faculty and staff (temporary)
Watkins Hall – 1925 – 50 – women only
Total – 4,534 students
Foundations
University of Kansas Memorial Corporation
The first union was built on campus in 1926 as a campus community center. The unions are still the "living rooms" of campus, and include three locations – the Kansas Union and Burge Union at the Lawrence Campus and Jayhawk Central at the Edwards Campus. The KU Memorial Unions Corporation manages the KU Bookstore (with seven locations). The KU Bookstore is the official bookstore of KU. The Corporation also includes KU Dining Services, with more than 20 campus locations, including The Market (inside the Kansas Union) and The Underground (located in Wescoe Hall). The KU Bookstore and KU Dining Services are not-for-profit, with proceeds supporting student programs, such as Student Union Activities.
KU Endowment
KU Endowment was established in 1891 as America’s first foundation for a public university. Its mission is to partner with donors in providing philanthropic support to build a greater University of Kansas.
Notable alumni and faculty
See also
Bailey Hall (University of Kansas)
Budig Hall
Kansas Audio-Reader Network
Kansas Crew (University Rowing Club)
University of Kansas Marching Jayhawks
References
Further reading
University of Kansas Traditions: The Jayhawk
Kirke Mechem, "The Mythical Jayhawk", Kansas Historical Quarterly XIII: 1 (February 1944), pp. 3–15. A tongue-in-cheek history and description of the Mythical Jayhawk.
Kansas: A Cyclopedia of State History, Embracing Events, Institutions, Industries, Counties, Cities, Towns, Prominent Persons, Etc.; 3 volumes; Frank W. Blackmar; Standard Publishing Co.; 944 / 955 / 824 pages; 1912. (Volume 1 - 54MB PDF), (Volume 2 - 53MB PDF), (Volume 3 - 33MB PDF)
External links
Kansas Athletics website
University of Kansas
Category:Educational institutions established in 1865
Category:Universities and colleges in Kansas
Category:Education in Douglas County, Kansas
Category:Tourist attractions in Lawrence, Kansas
Category:Education in Wichita, Kansas
Category:Education in Wyandotte County, Kansas
Category:1865 establishments in Kansas
Category:Flagship universities in the United States
Category:Public universities
Category:V-12 Navy College Training Program
Daylight saving time
upright=1.67|thumb|alt=World map. Europe, most of North America, parts of southern South America and southeastern Australia, and a few other places use DST. Most of equatorial Africa and a few other places near the equator have never used DST. The rest of the landmass is marked as formerly using DST.|Daylight saving time regions
Daylight saving time (DST) or summer time is the practice of advancing clocks during summer months by one hour so that evening daylight lasts an hour longer, while sacrificing normal sunrise times. Typically, regions with summer time adjust clocks forward one hour close to the start of spring and adjust them backward in the autumn to standard time.
New Zealander George Hudson proposed the idea of daylight saving in 1895. The German Empire and Austria-Hungary organized the first nationwide implementation, starting on April 30, 1916. Many countries have used it at various times since then, particularly since the energy crisis of the 1970s.
The practice has both advocates and critics. Putting clocks forward benefits retailing, sports, and other activities that exploit sunlight after working hours, but can cause problems for outdoor entertainment and other activities tied to sunlight, such as farming. Though some early proponents of DST aimed to reduce evening use of incandescent lighting—once a primary use of electricity—today's heating and cooling usage patterns differ greatly, and research about how DST affects energy use is limited and contradictory.
DST clock shifts sometimes complicate timekeeping, and can disrupt travel, billing, record keeping, medical devices, heavy equipment, and sleep patterns. Computer software often adjusts clocks automatically, but policy changes by various jurisdictions of DST dates and timings may be confusing.
Rationale
Industrialized societies generally follow a clock-based schedule for daily activities that do not change throughout the course of the year. The time of day that individuals begin and end work or school, and the coordination of mass transit, for example, usually remain constant year-round. In contrast, an agrarian society's daily routines for work and personal conduct are more likely governed by the length of daylight hours and by solar time, which change seasonally because of the Earth's axial tilt. North and south of the tropics daylight lasts longer in summer and shorter in winter, with the effect becoming greater as one moves away from the tropics.
By synchronously resetting all clocks in a region to one hour ahead of standard time (one hour "fast"), individuals who follow such a year-round schedule will wake an hour earlier than they would have otherwise; they will begin and complete daily work routines an hour earlier, and they will have available to them an extra hour of daylight after their workday activities. However, they will have one less hour of daylight at the start of each day, making the policy less practical during winter.
While the times of sunrise and sunset change at roughly equal rates as the seasons change, proponents of Daylight Saving Time argue that most people prefer a greater increase in daylight hours after the typical "nine to five" workday. Supporters have also argued that DST decreases energy consumption by reducing the need for lighting and heating, but the actual effect on overall energy use is heavily disputed.
The manipulation of time at higher latitudes (for example Iceland, Nunavut or Alaska) has little impact on daily life, because the length of day and night changes more extremely throughout the seasons (in comparison to other latitudes), and thus sunrise and sunset times are significantly out of phase with standard working hours regardless of manipulations of the clock. DST is also of little use for locations near the equator, because these regions see only a small variation in daylight in the course of the year. The effect also varies according to how far east or west the location is within its time zone, with locations farther east inside the time zone benefiting more from DST than locations farther west in the same time zone.
History
thumb|upright|alt=A water clock. A small human figurine holds a pointer to a cylinder marked by the hours. The cylinder is connected by gears to a water wheel driven by water that also floats, a part that supports the figurine.|
Ancient water clock that lets hour lengths vary with season.
Although they did not fix their schedules to the clock in the modern sense, ancient civilizations adjusted daily schedules to the sun more flexibly than DST does, often dividing daylight into twelve hours regardless of day length, so that each daylight hour was longer during summer. For example, Roman water clocks had different scales for different months of the year: at Rome's latitude the third hour from sunrise, hora tertia, started by modern standards at 09:02 solar time and lasted 44 minutes at the winter solstice, but at the summer solstice it started at 06:58 and lasted 75 minutes (see also: Roman timekeeping). After ancient times, equal-length civil hours eventually supplanted unequal ones, so civil time no longer varies by season. Unequal hours are still used in a few traditional settings, such as some monasteries of Mount Athos and all Jewish ceremonies.
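As a small arithmetic sketch (Python; the daylight lengths used are simply the approximate values implied by the figures above, not independent data), dividing the day's sunlight into twelve equal parts reproduces the 44- and 75-minute hours:
    # Approximate daylight at Rome's latitude implied by the figures above:
    # about 8.8 hours at the winter solstice, about 15 hours at the summer solstice.
    for season, daylight_hours in [("winter solstice", 8.8), ("summer solstice", 15.0)]:
        print(season, round(daylight_hours * 60 / 12), "minutes per daytime hour")
    # winter solstice 44 minutes per daytime hour
    # summer solstice 75 minutes per daytime hour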
During his time as an American envoy to France, Benjamin Franklin, publisher of the old English proverb "Early to bed, and early to rise, makes a man healthy, wealthy and wise", anonymously published a letter suggesting that Parisians economize on candles by rising earlier to use morning sunlight. This 1784 satire proposed taxing window shutters, rationing candles, and waking the public by ringing church bells and firing cannons at sunrise. It first appeared, in a French translation of the English original, in the "Économie" section of the Journal de Paris. The revised English version is commonly called "An Economical Project", a title that is not Franklin's. Despite common misconception, Franklin did not actually propose DST; 18th-century Europe did not even keep precise schedules. However, this soon changed as rail transport and communication networks came to require a standardization of time unknown in Franklin's day.
thumb|upright|left|alt=Fuzzy head-and-shoulders photo of a 40-year-old man in a cloth cap and mustache.|George Hudson invented modern DST, proposing it first in 1895.
Modern DST was first proposed by the New Zealand entomologist George Hudson, whose shift-work job gave him leisure time to collect insects and led him to value after-hours daylight. In 1895 he presented a paper to the Wellington Philosophical Society proposing a two-hour daylight-saving shift, and after considerable interest was expressed in Christchurch, he followed up in an 1898 paper. Many publications credit DST's proposal to the prominent English builder and outdoorsman William Willett, who independently conceived DST in 1905 during a pre-breakfast ride, when he observed with dismay how many Londoners slept through a large part of a summer's day. An avid golfer, he also disliked cutting short his round at dusk. His solution was to advance the clock during the summer months, a proposal he published two years later in a pamphlet. The proposal was taken up by the Liberal Member of Parliament (MP) Robert Pearce, who introduced the first Daylight Saving Bill to the House of Commons on February 12, 1908. A select committee was set up to examine the issue, but Pearce's bill did not become law, and several other bills failed in the following years. Willett lobbied for the proposal in the UK until his death in 1915.
William Sword Frost, mayor of Orillia, Ontario, introduced daylight saving time in the municipality during his tenure from 1911 to 1912.
Starting on April 30, 1916, the German Empire and its World War I ally Austria-Hungary were the first to use DST as a way to conserve coal during wartime. Britain, most of its allies, and many European neutrals soon followed suit. Russia and a few other countries waited until the next year, and the United States adopted it in 1918.
Broadly speaking, Daylight Saving Time was abandoned in the years after the war (with some notable exceptions including Canada, the UK, France, and Ireland). However, it was brought back for periods of time in many different places during the following decades, and commonly during World War II. It became widely adopted, particularly in North America and Europe, starting in the 1970s as a result of the 1970s energy crisis.
Since then, the world has seen many enactments, adjustments, and repeals. For specific details, an overview is available at daylight saving time by country.
Procedure
In the case of the United States, where a one-hour shift occurs at 02:00 local time, in spring the clock jumps forward from the last instant of 01:59 standard time to 03:00 DST and that day has 23 hours, whereas in autumn the clock jumps backward from the last instant of 01:59 DST to 01:00 standard time, repeating that hour, so that day has 25 hours. A digital display of local time does not read 02:00 exactly at the shift to summer time, but instead jumps from 01:59:59.9 forward to 03:00:00.0.
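A minimal sketch of this behaviour, using Python's standard zoneinfo module (3.9+); the IANA zone America/New_York and the 2021 transition dates are illustrative assumptions:
    from datetime import datetime, timedelta, timezone
    from zoneinfo import ZoneInfo

    ny = ZoneInfo("America/New_York")

    # Spring transition (March 14, 2021): one minute after 01:59 EST the local
    # clock reads 03:00 EDT -- the 02:00 hour is skipped.
    before = datetime(2021, 3, 14, 6, 59, tzinfo=timezone.utc)   # 01:59 EST
    print(before.astimezone(ny))                                 # 2021-03-14 01:59:00-05:00
    print((before + timedelta(minutes=1)).astimezone(ny))        # 2021-03-14 03:00:00-04:00

    # Autumn transition (November 7, 2021): the 01:00-01:59 hour occurs twice,
    # first at UTC-4 (EDT) and then at UTC-5 (EST).
    first = datetime(2021, 11, 7, 5, 30, tzinfo=timezone.utc)    # 01:30 EDT
    print(first.astimezone(ny))                                  # 2021-11-07 01:30:00-04:00
    print((first + timedelta(hours=1)).astimezone(ny))           # 2021-11-07 01:30:00-05:00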
Clock shifts are usually scheduled near a weekend midnight to lessen disruption to weekday schedules. A one-hour shift is customary. Twenty-minute and two-hour shifts have been used in the past.
Coordination strategies differ when adjacent time zones shift clocks. The European Union shifts all zones at the same instant, at 01:00 Greenwich Mean Time, which is 02:00 CET or 03:00 EET. The result of this procedure is that Eastern European Time is always one hour ahead of Central European Time, at the cost of the shift happening at different local times. In contrast, most of North America shifts at 02:00 local time, so its zones do not shift at the same instant; for example, Mountain Time is temporarily (for one hour) zero hours ahead of Pacific Time, instead of one hour ahead, in the autumn, and two hours ahead, instead of one, in the spring. In the past, Australian districts went even further and did not always agree on start and end dates; for example, in 2008 most DST-observing areas shifted clocks forward on October 5 but Western Australia shifted on October 26. In some cases only part of a country shifts; for example, in the U.S., Hawaii and most of Arizona do not observe DST.
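A minimal sketch of the North American behaviour just described, again with Python's zoneinfo; the zone names (America/Denver for Mountain Time, America/Los_Angeles for Pacific Time) and the 2021 autumn date are illustrative assumptions:
    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    denver = ZoneInfo("America/Denver")        # Mountain Time
    la = ZoneInfo("America/Los_Angeles")       # Pacific Time

    # 08:30 UTC on November 7, 2021: Denver has already fallen back (01:30 MST)
    # while Los Angeles has not yet (01:30 PDT), so the two clocks briefly agree.
    t = datetime(2021, 11, 7, 8, 30, tzinfo=timezone.utc)
    print(t.astimezone(denver))   # 2021-11-07 01:30:00-07:00
    print(t.astimezone(la))       # 2021-11-07 01:30:00-07:00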
Start and end dates vary with location and year. Since 1996, European Summer Time has been observed from the last Sunday in March to the last Sunday in October; previously the rules were not uniform across the European Union. Starting in 2007, most of the United States and Canada observe DST from the second Sunday in March to the first Sunday in November, almost two-thirds of the year. The 2007 U.S. change was part of the Energy Policy Act of 2005; previously, from 1987 through 2006, the start and end dates were the first Sunday in April and the last Sunday in October, and Congress retains the right to go back to the previous dates now that an energy-consumption study has been done. Proponents for permanently retaining November as the month for ending DST point to Halloween as a reason to delay the change—to provide extra daylight on October 31.
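The current U.S. rule (second Sunday in March to first Sunday in November) is simple to compute; a minimal sketch in Python, where nth_sunday is a hypothetical helper introduced here for illustration:
    from datetime import date, timedelta

    def nth_sunday(year, month, n):
        """Return the date of the n-th Sunday of the given month."""
        d = date(year, month, 1)
        d += timedelta(days=(6 - d.weekday()) % 7)   # advance to the first Sunday
        return d + timedelta(weeks=n - 1)

    print(nth_sunday(2016, 3, 2))    # 2016-03-13, U.S. DST begins
    print(nth_sunday(2016, 11, 1))   # 2016-11-06, U.S. DST ends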
thumb|left|alt=Time graph. The horizontal axis shows dates in 2008. The vertical axis shows the UTC offsets of eastern Brazil and eastern U.S. The difference between the two starts at 3 hours, then goes to 2 hours on February 17 at 24:00 Brazil eastern time, then goes to 1 hour on March 9 at 02:00 U.S. eastern time.|In early 2008 central Brazil was one, two, or three hours ahead of eastern U.S., depending on the date.
Beginning and ending dates are roughly the reverse in the southern hemisphere. For example, mainland Chile observed DST from the second Saturday in October to the second Saturday in March, with transitions at 24:00 local time. The time difference between the United Kingdom and mainland Chile could therefore be five hours during the northern summer, three hours during the southern summer and four hours a few weeks per year because of mismatch of changing dates.
DST is generally not observed near the equator, where sunrise times do not vary enough to justify it. Some countries observe it only in some regions; for example, southern Brazil observes it while equatorial Brazil does not. Only a minority of the world's population uses DST because Asia and Africa generally do not observe it.
Politics
Daylight saving has caused controversy since it began. Winston Churchill argued that it enlarges "the opportunities for the pursuit of health and happiness among the millions of people who live in this country", while pundits have dubbed it "Daylight Slaving Time". Historically, retailing, sports, and tourism interests have favored daylight saving, while agricultural and evening entertainment interests have opposed it, and its initial adoption was prompted by energy crises and war.
The fate of Willett's 1907 proposal illustrates several political issues involved. The proposal attracted many supporters, including Arthur Balfour, Churchill, David Lloyd George, Ramsay MacDonald, Edward VII (who used half-hour DST at Sandringham or "Sandringham time"), the managing director of Harrods, and the manager of the National Bank. However, the opposition was stronger: it included Prime Minister H. H. Asquith, Christie (the Astronomer Royal), George Darwin, Napier Shaw (director of the Meteorological Office), many agricultural organizations, and theatre owners. After many hearings the proposal was narrowly defeated in a parliamentary committee vote in 1909. Willett's allies introduced similar bills every year from 1911 through 1914, to no avail. The U.S. was even more skeptical: Andrew Peters introduced a DST bill to the United States House of Representatives in May 1909, but it soon died in committee.
thumb|upright|alt=Poster titled "VICTORY! CONGRESS PASSES DAYLIGHT SAVING BILL" showing Uncle Sam turning a clock to daylight saving time as a clock-headed figure throws his hat in the air. The clock face of the figure reads "ONE HOUR OF EXTRA DAYLIGHT". The bottom caption says "Get Your Hoe Ready!"|Retailers generally favor DST. United Cigar Stores hailed a 1918 DST bill.
After Germany led the way by starting DST during World War I on April 30, 1916, together with its allies, to alleviate hardships from wartime coal shortages and air raid blackouts, the political equation changed in other countries; the United Kingdom first used DST on May 21, 1916. U.S. retailing and manufacturing interests led by Pittsburgh industrialist Robert Garland soon began lobbying for DST, but were opposed by railroads. The U.S.'s 1917 entry into the war overcame objections, and DST was established in 1918.
The war's end swung the pendulum back. Farmers continued to dislike DST, and many countries repealed it after the war. Britain was an exception: it retained DST nationwide but over the years adjusted transition dates for several reasons, including special rules during the 1920s and 1930s to avoid clock shifts on Easter mornings. Now under a European Community directive summer time begins annually on the last Sunday in March, which may be Easter Sunday (as in 2016). The U.S. was more typical: Congress repealed DST after 1919. President Woodrow Wilson, like Willett an avid golfer, vetoed the repeal twice but his second veto was overridden. Only a few U.S. cities retained DST locally thereafter, including New York so that its financial exchanges could maintain an hour of arbitrage trading with London, and Chicago and Cleveland to keep pace with New York. Wilson's successor Warren G. Harding opposed DST as a "deception". Reasoning that people should instead get up and go to work earlier in the summer, he ordered District of Columbia federal employees to start work at 08:00 rather than 09:00 during summer 1922. Some businesses followed suit though many others did not; the experiment was not repeated.
Since Germany's adoption in 1916, the world has seen many enactments, adjustments, and repeals of DST, with similar politics involved.
The history of time in the United States includes DST during both world wars, but no standardization of peacetime DST until 1966. In May 1965, for two weeks, St. Paul, Minnesota and Minneapolis, Minnesota were on different times, when the capital city decided to join most of the nation by starting daylight saving time while Minneapolis opted to follow the later date set by state law (May 1965, Minnesota Mayhem). In the mid-1980s, Clorox (parent of Kingsford Charcoal) and 7-Eleven provided the primary funding for the Daylight Saving Time Coalition behind the 1987 extension to U.S. DST, and both Idaho senators voted for it based on the premise that during DST fast-food restaurants sell more French fries, which are made from Idaho potatoes.
In 1992, after a three-year trial of daylight saving in Queensland, Australia, a referendum on daylight saving was held and defeated with a 54.5% 'no' vote – with regional and rural areas strongly opposed, while those in the metropolitan south-east were in favor. In 2005, the Sporting Goods Manufacturers Association and the National Association of Convenience Stores successfully lobbied for the 2007 extension to U.S. DST. In December 2008, the Daylight Saving for South East Queensland (DS4SEQ) political party was officially registered in Queensland, advocating the implementation of a dual-time zone arrangement for daylight saving in South East Queensland while the rest of the state maintains standard time. DS4SEQ contested the March 2009 Queensland state election with 32 candidates and received one percent of the statewide primary vote, equating to around 2.5% across the 32 electorates contested. After a three-year trial, more than 55% of Western Australians voted against DST in 2009, with rural areas strongly opposed. On April 14, 2010, after being approached by the DS4SEQ political party, Queensland Independent member Peter Wellington, introduced the Daylight Saving for South East Queensland Referendum Bill 2010 into the Queensland parliament, calling for a referendum at the next state election on the introduction of daylight saving into South East Queensland under a dual-time zone arrangement. The Bill was defeated in the Queensland parliament on June 15, 2011.
In the UK the Royal Society for the Prevention of Accidents supports a proposal to observe SDST's additional hour year-round, but is opposed in some industries, such as postal workers and farmers, and particularly by those living in the northern regions of the UK.
In some Muslim countries, DST is temporarily abandoned during Ramadan (the month when no food should be eaten between sunrise and sunset), since the DST would delay the evening dinner. Ramadan took place in July and August in 2012. This concerns at least Morocco and Palestine, although Iran keeps DST during Ramadan. Most Muslim countries do not use DST, partially for this reason.
The 2011 declaration by Russia that it would stay in DST all year long was subsequently followed by a similar declaration from Belarus. Russia's plan generated widespread complaints due to the darkness of winter mornings and was abandoned in 2014. The country changed its clocks to standard time on October 26, 2014 and intends to stay there permanently.
Dispute over benefits and drawbacks
thumb|right|upright|alt=A standing man in three-piece suit, facing camera. He is about 60 and is bald with a mustache. His left hand is in his pants pocket, and his right hand is in front of his chest, holding his pocket watch.|William Willett independently proposed DST in 1907 and advocated it tirelessly.
Proponents of DST generally argue that it saves energy, promotes outdoor leisure activity in the evening (in summer), and is therefore good for physical and psychological health, reduces traffic accidents, reduces crime, or is good for business. Groups that tend to support DST are urban workers, retail businesses, outdoor sports enthusiasts and businesses, tourism operators, and others who benefit from increased light during the evening in summer.
Opponents argue that actual energy savings are inconclusive, that DST increases health risks such as heart attack, that DST can disrupt morning activities, and that the act of changing clocks twice a year is economically and socially disruptive and cancels out any benefit. Farmers have tended to oppose DST.
Common agreement about the day's layout or schedule confers so many advantages that a standard DST schedule has generally been chosen over ad hoc efforts to get up earlier. The advantages of coordination are so great that many people ignore whether DST is in effect by altering their nominal work schedules to coordinate with television broadcasts or daylight. DST is commonly not observed during most of winter, because its mornings are darker; workers may have no sunlit leisure time, and children may need to leave for school in the dark. Since DST is applied to many varying communities, its effects may be very different depending on their culture, light levels, geography, and climate; that is why it is hard to make generalized conclusions about the absolute effects of the practice. Some areas may adopt DST simply as a matter of coordination with others rather than for any direct benefits.
Energy use
DST's potential to save energy comes primarily from its effects on residential lighting, which consumes about 3.5% of electricity in the United States and Canada. Delaying the nominal time of sunset and sunrise reduces the use of artificial light in the evening and increases it in the morning. As Franklin's 1784 satire pointed out, lighting costs are reduced if the evening reduction outweighs the morning increase, as in high-latitude summer when most people wake up well after sunrise. An early goal of DST was to reduce evening usage of incandescent lighting, once a primary use of electricity. Although energy conservation remains an important goal, energy usage patterns have greatly changed since then, and recent research is limited and reports contradictory results. Electricity use is greatly affected by geography, climate, and economics, making it hard to generalize from single studies.
The United States Department of Transportation (DOT) concluded in 1975 that DST might reduce the country's electricity usage by 1% during March and April, but the National Bureau of Standards (NBS) reviewed the DOT study in 1976 and found no significant savings.
In 2000, when parts of Australia began DST in late winter, overall electricity consumption did not decrease, but the morning peak load and prices increased.
In Western Australia during summer 2006–2007, DST increased electricity consumption during hotter days and decreased it during cooler days, with consumption rising 0.6% overall.
Although a 2007 study estimated that introducing DST to Japan would reduce household lighting energy consumption, a 2007 simulation estimated that DST would increase overall energy use in Osaka residences by 0.13%, with a 0.02% decrease due to less lighting more than outweighed by a 0.15% increase due to extra cooling; neither study examined non-residential energy use. This is probably because DST's effect on lighting energy use is mainly noticeable in residences.
A 2007 study found that the earlier start to DST that year had little or no effect on electricity consumption in California.
A 2007 study estimated that winter daylight saving would prevent a 2% increase in average daily electricity consumption in Great Britain. This paper was revised in October 2009.
A 2008 study examined billing data in Indiana before and after it adopted DST in 2006, and concluded that DST increased overall residential electricity consumption by 1% to 4%, due mostly to extra afternoon cooling and extra morning heating; the main increases came in the fall. A study estimated the overall annual cost of DST to Indiana households at $9 million, with an additional $1.7–5.5 million for social costs due to increased pollution. Lay summary: Wall Street Journal, February 27, 2008.
The United States Department of Energy (DOE) concluded in a 2008 report that the 2007 United States extension of DST saved 0.5% of electricity usage during the extended period. This report analyzed only the extension, not the full eight months of DST, and did not examine the use of heating fuels.
Several studies have suggested that DST increases motor fuel consumption. The 2008 DOE report found no significant increase in motor gasoline consumption due to the 2007 United States extension of DST.
Economic effects
Retailers, sporting goods makers, and other businesses benefit from extra afternoon sunlight, as it induces customers to shop and to participate in outdoor afternoon sports. In 1984, Fortune magazine estimated that a seven-week extension of DST would yield an additional $30 million for 7-Eleven stores, and the National Golf Foundation estimated the extension would increase golf industry revenues $200 million to $300 million. A 1999 study estimated that DST increases the revenue of the European Union's leisure sector by about 3%.
Conversely, DST can adversely affect farmers, parents of young children, and others whose hours are set by the sun, and such groups have traditionally opposed the practice, although some farmers are neutral. One reason farmers oppose DST is that grain is best harvested after dew evaporates, so when field hands arrive and leave earlier in summer their labor is less valuable. Dairy farmers are another group who complain of the change: their cows are sensitive to the timing of milking, so delivering milk earlier disrupts their systems. Today some farmers' groups are in favor of DST.
DST also hurts prime-time television broadcast ratings, drive-ins and other theaters.
Changing clocks and DST rules has a direct economic cost, entailing extra work to support remote meetings, computer applications and the like. For example, a 2007 North American rule change cost an estimated $500 million to $1 billion, and Utah State University economist William F. Shughart II has estimated the lost opportunity cost at around $1.7 billion USD. Although it has been argued that clock shifts correlate with decreased economic efficiency, and that in 2000 the daylight-saving effect implied an estimated one-day loss of $31 billion on U.S. stock exchanges, the estimated numbers depend on the methodology. The results have been disputed, and the original authors have refuted the points raised by disputers.
Public safety
In 1975 the U.S. DOT conservatively identified a 0.7% reduction in traffic fatalities during DST, and estimated the real reduction at 1.5% to 2%, but the 1976 NBS review of the DOT study found no differences in traffic fatalities. In 1995 the Insurance Institute for Highway Safety estimated a reduction of 1.2%, including a 5% reduction in crashes fatal to pedestrians. Others have found similar reductions. Single/Double Summer Time (SDST), a variant where clocks are one hour ahead of the sun in winter and two in summer, has been projected to reduce traffic fatalities by 3% to 4% in the UK, compared to ordinary DST. However, accidents do increase by as much as 11% during the two weeks that follow the end of British Summer Time. It is not clear whether sleep disruption contributes to fatal accidents immediately after the spring clock shifts. A correlation between clock shifts and traffic accidents has been observed in North America and the UK but not in Finland or Sweden; if this effect exists, it is far smaller than the overall reduction in traffic fatalities.
A 2009 U.S. study found that on Mondays after the switch to DST, workers sleep an average of 40 minutes less, and are injured at work more often and more severely.
In the 1970s the U.S. Law Enforcement Assistance Administration (LEAA) found a reduction of 10% to 13% in Washington, D.C.'s violent crime rate during DST. However, the LEAA did not filter out other factors, and it examined only two cities and found crime reductions only in one and only in some crime categories; the DOT decided it was "impossible to conclude with any confidence that comparable benefits would be found nationwide". Outdoor lighting has a marginal and sometimes even contradictory influence on crime and fear of crime.
In several countries, fire safety officials encourage citizens to use the two annual clock shifts as reminders to replace batteries in smoke and carbon monoxide detectors, particularly in autumn, just before the heating and candle season causes an increase in home fires. Similar twice-yearly tasks include reviewing and practicing fire escape and family disaster plans, inspecting vehicle lights, checking storage areas for hazardous materials, reprogramming thermostats, and seasonal vaccinations.
Locations without DST can instead use the first days of spring and autumn as reminders.
Health
upright=1.25|thumb|
alt=Graph of sunrise and sunset times for 2007. The horizontal axis is the date; the vertical axis is the times of sunset and sunrise. There is a bulge in the centre during summer, when sunrise is early and sunset late. There are step functions in spring and fall, when DST starts and stops.|
Clock shifts affecting apparent sunrise and sunset times at Greenwich in 2007.
thumb|upright=1.25|right|A justification offered for daylight saving time: that it is a more natural adjustment for people rising with the sun.
DST has mixed effects on health. In societies with fixed work schedules it provides more afternoon sunlight for outdoor exercise. It alters sunlight exposure; whether this is beneficial depends on one's location and daily schedule, as sunlight triggers vitamin D synthesis in the skin, but overexposure can lead to skin cancer. DST may help in depression by causing individuals to rise earlier, but some argue the reverse. The Retinitis Pigmentosa Foundation Fighting Blindness, chaired by blind sports magnate Gordon Gund, successfully lobbied in 1985 and 2005 for U.S. DST extensions.
Clock shifts were found to increase the risk of heart attack by 10 percent, and to disrupt sleep and reduce its efficiency. Effects on seasonal adaptation of the circadian rhythm can be severe and last for weeks.
A 2008 study found that although male suicide rates rise in the weeks after the spring transition, the relationship weakened greatly after adjusting for season. A 2008 Swedish study found that heart attacks were significantly more common the first three weekdays after the spring transition, and significantly less common the first weekday after the autumn transition. The government of Kazakhstan cited health complications due to clock shifts as a reason for abolishing DST in 2005. In March 2011, Dmitri Medvedev, president of Russia, claimed that "stress of changing clocks" was the motivation for Russia to stay in DST all year long. Officials at the time talked about an annual increase in suicides.
An unexpected adverse effect of daylight saving time may lie in the fact that an extra part of morning rush hour traffic occurs before dawn and traffic emissions then cause higher air pollution than during daylight hours.
Complexity
DST's clock shifts have the obvious disadvantage of complexity. People must remember to change their clocks; this can be time-consuming, particularly for mechanical clocks that cannot be moved backward safely. People who work across time zone boundaries need to keep track of multiple DST rules, as not all locations observe DST or observe it the same way. The length of the calendar day becomes variable; it is no longer always 24 hours. Disruption to meetings, travel, broadcasts, billing systems, and records management is common, and can be expensive. During an autumn transition from 02:00 to 01:00, a clock reads times from 01:00:00 through 01:59:59 twice, possibly leading to confusion.
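A minimal sketch of the autumn ambiguity just described, using Python's zoneinfo and the PEP 495 "fold" flag; the zone name and the 2021 date are illustrative assumptions:
    from datetime import datetime
    from zoneinfo import ZoneInfo

    ny = ZoneInfo("America/New_York")
    # On the fall-back day the wall time 01:30 occurs twice; "fold" selects which one.
    first = datetime(2021, 11, 7, 1, 30, fold=0, tzinfo=ny)    # earlier occurrence
    second = datetime(2021, 11, 7, 1, 30, fold=1, tzinfo=ny)   # later occurrence
    print(first.tzname(), second.tzname())                     # EDT EST
    print(second.timestamp() - first.timestamp())              # 3600.0 -- one real hour apart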
thumb|upright|alt=A standing stone in a grassy field surrounded by trees. The stone contains a vertical sundial centered on 1 o'clock, and is inscribed "HORAS NON NUMERO NISI ÆSTIVAS" and "SUMMER TIME ACT 1925".|The William Willett Memorial Sundial is always on DST.
Damage to a German steel facility occurred during a DST transition in 1993, when a computer timing system linked to a radio time synchronization signal allowed molten steel to cool for one hour less than the required duration, resulting in spattering of molten steel when it was poured. Medical devices may generate adverse events that could harm patients, without being obvious to clinicians responsible for care. These problems are compounded when the DST rules themselves change; software developers must test and perhaps modify many programs, and users must install updates and restart applications. Consumers must update devices such as programmable thermostats with the correct DST rules, or manually adjust the devices' clocks. A common strategy to resolve these problems in computer systems is to express time using the Coordinated Universal Time (UTC) rather than the local time zone. For example, Unix-based computer systems use the UTC-based Unix time internally.
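A minimal sketch of that strategy: store and compute in UTC (here, Unix epoch seconds) and convert to a local zone only for display; the zone name and epoch value are illustrative assumptions:
    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    epoch_seconds = 1_600_000_000                        # an arbitrary stored instant
    utc_time = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
    local_view = utc_time.astimezone(ZoneInfo("Europe/Berlin"))
    print(utc_time)      # unambiguous value for storage and comparison
    print(local_view)    # DST-aware rendering, used only at display time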
Some clock-shift problems could be avoided by adjusting clocks continuously or at least more gradually—for example, Willett at first suggested weekly 20-minute transitions—but this would add complexity and has never been implemented.
DST inherits and can magnify the disadvantages of standard time. For example, when reading a sundial, one must compensate for it along with time zone and natural discrepancies. Also, sun-exposure guidelines such as avoiding the sun within two hours of noon become less accurate when DST is in effect.
Terminology
As explained by Richard Meade in the English Journal of the (American) National Council of Teachers of English, the form daylight savings time (with an "s") was already in 1978 much more common than the older form daylight saving time in American English ("the change has been virtually accomplished"). Nevertheless, even dictionaries such as Merriam-Webster's, American Heritage, and Oxford, which describe actual usage instead of prescribing outdated usage (and therefore also list the newer form), still list the older form first. This is because the older form is still very common in print and preferred by many editors ("Although daylight saving time is considered correct, daylight savings time (with an "s") is commonly used"). The first two words are sometimes hyphenated (daylight-saving[s] time). Merriam-Webster's also lists the forms daylight saving (without "time"), daylight savings (without "time"), and daylight time, and Oxford Dictionaries also lists daylight savings time.
In Britain, Willett's 1907 proposal used the term daylight saving, but by 1911 the term summer time replaced daylight saving time in draft legislation. Continental Europe uses similar phrases, all of which can be translated as "summer time": Sommerzeit in Germany, zomertijd in Dutch-speaking regions, kesäaika in Finland, horario de verano or hora de verano in Spain, heure d'été in France, and hora de verão in Portugal. In Italy the term is ora legale, that is, legal (legally enforced) time, as opposed to ora solare, solar time, in winter.
The name of local time typically changes when DST is observed. American English replaces standard with daylight: for example, Pacific Standard Time (PST) becomes Pacific Daylight Time (PDT). In the United Kingdom, the standard term for UK time when advanced by one hour is British Summer Time (BST), and British English typically inserts summer into other time zone names, e.g. Central European Time (CET) becomes Central European Summer Time (CEST).
The North American English mnemonic "spring forward, fall back" (also "spring ahead ...", "spring up ...", and "... fall behind") helps people remember which direction to shift clocks.
Computing
thumb|upright|
alt=Strong man in sandals and with shaggy hair, facing away from audience/artist, grabbing a hand of a clock bigger than he is and attempting to force it backwards. The clock uses Roman numerals and the man is dressed in stripped-down Roman gladiator style. The text says "You can't stop time... But you can turn it back one hour at 2 a.m. on Oct. 28 when daylight-saving time ends and standard time begins."|
A 2001 US public service advertisement reminded people to adjust clocks.
Changes to DST rules cause problems in existing computer installations. For example, the 2007 change to DST rules in North America required that many computer systems be upgraded, with the greatest impact on e-mail and calendar programs. The upgrades required a significant effort by corporate information technologists.
Some applications standardize on UTC to avoid problems with clock shifts and time zone differences.
Likewise, most modern operating systems internally handle and store all times as UTC and only convert to local time for display.
However, even if UTC is used internally, the systems still require information about time zones to correctly calculate local time where it is needed. Many systems in use today base their date/time calculations on data derived from the IANA time zone database, also known as zoneinfo.
IANA time zone database
The IANA time zone database maps a name to the named location's historical and predicted clock shifts. This database is used by many computer software systems, including most Unix-like operating systems, Java, and the Oracle RDBMS; HP's "tztab" database is similar but incompatible. When temporal authorities change DST rules, zoneinfo updates are installed as part of ordinary system maintenance. In Unix-like systems the TZ environment variable specifies the location name, as in TZ=':America/New_York'. In many of those systems there is also a system-wide setting that is applied if the TZ environment variable is not set: this setting is controlled by the contents of the /etc/localtime file, which is usually a symbolic link or hard link to one of the zoneinfo files. Internal time is stored in timezone-independent epoch time; the TZ is used by each of potentially many simultaneous users and processes to independently localize time display.
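A minimal sketch (Python 3.9+, standard zoneinfo module) of the name-to-rules lookup described above; America/Indiana/Indianapolis is chosen because its rules changed when Indiana adopted DST in 2006, so the same July instant resolves differently in different years:
    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    indy = ZoneInfo("America/Indiana/Indianapolis")
    for year in (2000, 2021):
        t = datetime(year, 7, 1, 16, 0, tzinfo=timezone.utc)
        # The database supplies the zone's full rule history: 2000 resolves to EST
        # (no DST observed in Indiana then), 2021 resolves to EDT.
        print(t.astimezone(indy))
    # 2000-07-01 11:00:00-05:00
    # 2021-07-01 12:00:00-04:00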
Older or stripped-down systems may support only the TZ values required by POSIX, which specify at most one start and end rule explicitly in the value. For example, TZ='EST5EDT,M3.2.0/02:00,M11.1.0/02:00' specifies time for the eastern United States starting in 2007. Such a TZ value must be changed whenever DST rules change, and the new value applies to all years, mishandling some older timestamps.
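A minimal sketch of that limitation on a Unix-like system (time.tzset is POSIX-only), using the TZ value quoted above; because the single embedded rule is applied to every year, a pre-2007 timestamp is mishandled:
    import os
    import time

    os.environ["TZ"] = "EST5EDT,M3.2.0/02:00,M11.1.0/02:00"   # post-2007 rule only
    time.tzset()

    # March 20, 2006 was actually still standard time (2006 DST began in April),
    # but the embedded rule reports it as daylight time.
    march_2006 = time.mktime((2006, 3, 20, 12, 0, 0, 0, 0, -1))
    print(time.strftime("%Z", time.localtime(march_2006)))     # prints EDT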
Microsoft Windows
As with zoneinfo, a user of Microsoft Windows configures DST by specifying the name of a location, and the operating system then consults a table of rule sets that must be updated when DST rules change. Procedures for specifying the name and updating the table vary with release. Updates are not issued for older versions of Microsoft Windows. Windows Vista supports at most two start and end rules per time zone setting. In a Canadian location observing DST, a single Vista setting supports both 1987–2006 and post-2006 time stamps, but mishandles some older time stamps. Older Microsoft Windows systems usually store only a single start and end rule for each zone, so that the same Canadian setting reliably supports only post-2006 time stamps.
These limitations have caused problems. For example, before 2005, DST in Israel varied each year and was skipped some years. Windows 95 used rules correct for 1995 only, causing problems in later years. In Windows 98, Microsoft marked Israel as not having DST, forcing Israeli users to shift their computer clocks manually twice a year. The 2005 Israeli Daylight Saving Law established predictable rules using the Jewish calendar but Windows zone files could not represent the rules' dates in a year-independent way. Partial workarounds, which mishandled older time stamps, included manually switching zone files every year and a Microsoft tool that switches zones automatically. In 2013, Israel standardized its daylight saving time according to the Gregorian calendar.
Microsoft Windows keeps the system real-time clock in local time. This causes several problems, including compatibility when multi booting with operating systems that set the clock to UTC, and double-adjusting the clock when multi booting different Windows versions, such as with a rescue boot disk. This approach is a problem even in Windows-only systems: there is no support for per-user timezone settings, only a single system-wide setting. In 2008 Microsoft hinted that future versions of Windows will partially support a Windows registry entry RealTimeIsUniversal that had been introduced many years earlier, when Windows NT supported RISC machines with UTC clocks, but had not been maintained. Since then at least two fixes related to this feature have been published by Microsoft.
The NTFS file system used by recent versions of Windows stores file time stamps in UTC but displays them corrected to local (or seasonal) time. However, the FAT filesystem commonly used on removable devices stores only the local time. Consequently, when a file is copied from the hard disk onto separate media, its time will be set to the current local time. If the time adjustment is changed, the timestamps of the original file and the copy will be different. The same effect can be observed when compressing and uncompressing files with some file archivers. It is the NTFS file whose displayed time changes. This effect should be kept in mind when trying to determine if a file is a duplicate of another, although there are other methods of comparing files for equality (such as using a checksum algorithm).
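A minimal sketch of the timestamp-independent comparison mentioned above: hash file contents instead of trusting time stamps that may differ only because of a DST adjustment (the file paths are hypothetical):
    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        # Return the SHA-256 digest of the file's contents, read in chunks.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # print(sha256_of("original.dat") == sha256_of("copy_on_usb.dat"))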
Permanent daylight saving time
A move to "permanent daylight saving time" (staying on summer hours all year with no time shifts) is sometimes advocated, and has in fact been implemented in some jurisdictions such as Argentina, Chile, Iceland, Singapore, Uzbekistan, Belarus and Turkey. Advocates cite the same advantages as normal DST without the problems associated with the twice yearly time shifts. However, many remain unconvinced of the benefits, citing the same problems and the relatively late sunrises, particularly in winter, that year-round DST entails. Russia switched to permanent DST from 2011 to 2014, but the move proved unpopular because of the late sunrises in winter, so the country switched permanently back to "standard" or "winter" time in 2014. Many European countries have also implemented permanent summer time during World War II.
By country and region
Daylight saving time by country
Africa
Asia
Brazil
Europe
North and South America
Oceania
United States
References
Further reading
External links
"Legal Time 2015", Telecommunications Standardization Bureau of the ITU
Information about the Current Daylight Saving Time (DST) Rules, U.S. National Institute of Standards and Technology
Sources for time zone and daylight saving time data
Identity (social science)
In psychology, identity is the qualities, beliefs, personality, looks and/or expressions that make a person (self-identity) or group (a particular social category or social group). The process of identity formation can be creative or destructive.
A psychological identity relates to self-image (one's mental model of oneself), self-esteem, and individuality. Consequently, Weinreich gives the definition "A person's identity is defined as the totality of one's self-construal, in which how one construes oneself in the present expresses the continuity between how one construes oneself as one was in the past and how one construes oneself as one aspires to be in the future"; this allows for definitions of aspects of identity, such as: "One's ethnic identity is defined as that part of the totality of one's self-construal made up of those dimensions that express the continuity between one's construal of past ancestry and one's future aspirations in relation to ethnicity" (Weinreich, 1986a).
Gender identity forms an important part of identity in psychology, as it dictates to a significant degree how one views oneself both as a person and in relation to other people, ideas and nature. Other aspects of identity, such as racial, religious, ethnic, or occupational identity, may also be more or less significant, or significant in some situations but not in others (Weinreich & Saunderson 2003, pp. 26–34). In cognitive psychology, the term "identity" refers to the capacity for self-reflection and the awareness of self.
Sociology places some explanatory weight on the concept of role-behavior. The notion of identity negotiation may arise from the learning of social roles through personal experience. Identity negotiation is a process in which a person negotiates with society at large regarding the meaning of his or her identity.
Psychologists most commonly use the term "identity" to describe personal identity, or the idiosyncratic things that make a person unique. Meanwhile, sociologists often use the term to describe social identity, or the collection of group memberships that define the individual. However, these uses are not proprietary, and each discipline may use either concept and each discipline may combine both concepts when considering a person's identity.
The description or representation of individual and group identity is a central task for psychologists, sociologists and anthropologists and those of other disciplines where "identity" needs to be mapped and defined. How should one describe the identity of another, in ways which encompass both their idiosyncratic qualities and their group memberships or identifications, both of which can shift according to circumstance? Following on from the work of Kelly, Erikson, Tajfel and others Weinreich's Identity Structure Analysis (ISA), is "a structural representation of the individual's existential experience, in which the relationships between self and other agents are organised in relatively stable structures over time … with the emphasis on the socio-cultural milieu in which self relates to other agents and institutions" (Weinreich and Saunderson, (eds) 2003, p1). Using constructs drawn from the salient discourses of the individual, the group and cultural norms, the practical operationalisation of ISA provides a methodology that maps how these are used by the individual, applied across time and milieus by the "situated self" to appraise self and other agents and institutions (for example, resulting in the individual's evaluation of self and significant others and institutions).
In psychology
Erik Erikson (1902–1994) became one of the earliest psychologists to take an explicit interest in identity. The Eriksonian framework rests upon a distinction among the psychological sense of continuity, known as the ego identity (sometimes identified simply as "the self"); the personal idiosyncrasies that separate one person from the next, known as the personal identity; and the collection of social roles that a person might play, known as either the social identity or the cultural identity. Erikson's work, in the psychodynamic tradition, aimed to investigate the process of identity formation across a lifespan. Progressive strength in the ego identity, for example, can be charted in terms of a series of stages in which identity is formed in response to increasingly sophisticated challenges. The process of forming a viable sense of identity for the culture is conceptualized as an adolescent task, and those who do not manage a resynthesis of childhood identifications are seen as being in a state of 'identity diffusion' whereas those who retain their initially given identities unquestioned have 'foreclosed' identities (Weinreich & Saunderson 2003, pp. 7–8). On some readings of Erikson, the development of a strong ego identity, along with the proper integration into a stable society and culture, leads to a stronger sense of identity in general. Accordingly, a deficiency in either of these factors may increase the chance of an identity crisis or confusion.
Although the self is distinct from identity, the literature of self-psychology can offer some insight into how identity is maintained. From the vantage point of self-psychology, there are two areas of interest: the processes by which a self is formed (the "I"), and the actual content of the schemata which compose the self-concept (the "Me"). In the latter field, theorists have shown interest in relating the self-concept to self-esteem, the differences between complex and simple ways of organizing self-knowledge, and the links between those organizing principles and the processing of information.
The "Neo-Eriksonian" identity status paradigm emerged in later years, driven largely by the work of James Marcia. This paradigm focuses upon the twin concepts of exploration and commitment. The central idea is that any individual's sense of identity is determined in large part by the explorations and commitments that he or she makes regarding certain personal and social traits. It follows that the core of the research in this paradigm investigates the degrees to which a person has made certain explorations, and the degree to which he or she displays a commitment to those explorations.
A person may display either relative weakness or relative strength in terms of both exploration and commitments. When assigned categories, four possible permutations result: identity diffusion, identity foreclosure, identity moratorium, and identity achievement. Diffusion is when a person lacks both exploration in life and interest in committing even to those unchosen roles that he or she occupies. Foreclosure is when a person has not chosen extensively in the past, but seems willing to commit to some relevant values, goals, or roles in the future. Moratorium is when a person displays a kind of flightiness, ready to make choices but unable to commit to them. Finally, achievement is when a person makes identity choices and commits to them.
Weinreich's identity variant similarly includes the categories of identity diffusion, foreclosure and crisis, but with a somewhat different emphasis. Here, with respect to identity diffusion for example, an optimal level is interpreted as the norm, as it is unrealistic to expect an individual to resolve all their conflicted identifications with others; therefore we should be alert to individuals with levels which are much higher or lower than the norm – highly diffused individuals are classified as diffused, and those with low levels as foreclosed or defensive. (Weinreich & Saunderson, 2003, pp 65–67; 105-106). Weinreich applies the identity variant in a framework which also allows for the transition from one to another by way of biographical experiences and resolution of conflicted identifications situated in various contexts – for example, an adolescent going through family break-up may be in one state, whereas later in a stable marriage with a secure professional role may be in another. Hence, though there is continuity, there is also development and change. (Weinreich & Saunderson, 2003, pp 22–23).
Laing's definition of identity closely follows Erikson's, in emphasising the past, present and future components of the experienced self. He also develops the concept of the "metaperspective of self", i.e. the self's perception of the other's view of self, which has been found to be extremely important in clinical contexts such as anorexia nervosa. (Saunderson and O'Kane, 2005). Harré also conceptualises components of self/identity – the "person" (the unique being I am to myself and others) along with aspects of self (including a totality of attributes including beliefs about one's characteristics including life history), and the personal characteristics displayed to others.
In social psychology
At a general level, self-psychology is compelled to investigate the question of how the personal self relates to the social environment. To the extent that these theories place themselves in the tradition of "psychological" social psychology, they focus on explaining an individual's actions within a group in terms of mental events and states. However, some "sociological" social psychology theories go further by attempting to deal with the issue of identity at both the levels of individual cognition and of collective behavior.
Collective identity
Many people gain a sense of positive self-esteem from their identity groups, which furthers a sense of community and belonging. Another issue that researchers have attempted to address is the question of why people engage in discrimination, i.e., why they tend to favor those they consider a part of their "in-group" over those considered to be outsiders. Both questions have been given extensive attention by researchers working in the social identity tradition. For example, in work relating to social identity theory it has been shown that merely crafting cognitive distinction between in- and out-groups can lead to subtle effects on people's evaluations of others.
Different social situations also compel people to attach themselves to different self-identities, which may cause some to feel marginalized, to switch between different groups and self-identifications (Benet-Martínez & Hong, 2014), or to reinterpret certain identity components (Kislev, 2012). These different selves lead to constructed images dichotomized between what people want to be (the ideal self) and how others see them (the limited self). Educational background and occupational status and roles significantly influence identity formation in this regard (Hurd, E. (2010). Confessions of Belonging: My Emotional Journey as a Medical Translator. Qualitative Inquiry 16(10), 783–791).
Identity formation strategies
Another issue of interest in social psychology is related to the notion that there are certain identity formation strategies which a person may use to adapt to the social world. Researchers have developed a typology which investigates the different manners of behavior that individuals may adopt. Their typology includes:
Refuser | Psychological symptoms: Develops cognitive blocks that prevent adoption of adult role-schemas | Personality symptoms: Engages in childlike behavior | Social symptoms: Shows extensive dependency upon others and no meaningful engagement with the community of adults
Drifter | Psychological symptoms: Possesses greater psychological resources than the Refuser (i.e., intelligence, charisma) | Personality symptoms: Is apathetic toward application of psychological resources | Social symptoms: Has no meaningful engagement with or commitment to adult communities
Searcher | Psychological symptoms: Has a sense of dissatisfaction due to high personal and social expectations | Personality symptoms: Shows disdain for imperfections within the community | Social symptoms: Interacts to some degree with role-models, but ultimately these relationships are abandoned
Guardian | Psychological symptoms: Possesses clear personal values and attitudes, but also a deep fear of change | Personality symptoms: Sense of personal identity is almost exhausted by sense of social identity | Social symptoms: Has an extremely rigid sense of social identity and strong identification with adult communities
Resolver | Psychological symptoms: Consciously desires self-growth | Personality symptoms: Accepts personal skills and competencies and uses them actively | Social symptoms: Is responsive to communities that provide opportunity for self-growth
Kenneth Gergen formulated additional classifications, which include the strategic manipulator, the pastiche personality, and the relational self. The strategic manipulator is a person who begins to regard all senses of identity merely as role-playing exercises, and who gradually becomes alienated from his or her social "self". The pastiche personality abandons all aspirations toward a true or "essential" identity, instead viewing social interactions as opportunities to play out, and hence become, the roles they play. Finally, the relational self is a perspective by which persons abandon all sense of exclusive self, and view all sense of identity in terms of social engagement with others. For Gergen, these strategies follow one another in phases, and they are linked to the increase in popularity of postmodern culture and the rise of telecommunications technology.
In social anthropology
Anthropologists have most frequently employed the term 'identity' to refer to this idea of selfhood in a loosely Eriksonian way (Erikson 1972), referring to properties based on the uniqueness and individuality which make a person distinct from others. Identity became of more interest to anthropologists with the emergence of modern concerns with ethnicity and social movements in the 1970s. This was reinforced by an appreciation, following the trend in sociological thought, of the manner in which the individual is affected by and contributes to the overall social context. At the same time, the Eriksonian approach to identity remained in force, with the result that identity has continued until recently to be used in a largely socio-historical way to refer to qualities of sameness in relation to a person's connection to others and to a particular group of people.
Two contrasting approaches to identity can be distinguished in this literature. The first favours a primordialist approach which takes the sense of self and belonging to a collective group as a fixed thing, defined by objective criteria such as common ancestry and common biological characteristics. The second, rooted in social constructionist theory, takes the view that identity is formed by a predominantly political choice of certain characteristics. In so doing, it questions the idea that identity is a natural given, characterised by fixed, supposedly objective criteria. Both approaches need to be understood in their respective political and historical contexts, characterised by debate on issues of class, race and ethnicity. While they have been criticized, they continue to exert an influence on approaches to the conceptualisation of identity today.
These different explorations of 'identity' demonstrate how difficult a concept it is to pin down. Since identity is a virtual thing, it is impossible to define it empirically. Discussions of identity use the term with different meanings, from fundamental and abiding sameness to fluidity, contingency, negotiation and so on. Brubaker and Cooper note a tendency in many scholars to confuse identity as a category of practice and as a category of analysis. Indeed, many scholars demonstrate a tendency to follow their own preconceptions of identity, following more or less the frameworks listed above, rather than taking into account the mechanisms by which the concept is crystallised as reality. In this environment, some analysts, such as Brubaker and Cooper, have suggested doing away with the concept completely. Others, by contrast, have sought to introduce alternative concepts in an attempt to capture the dynamic and fluid qualities of human social self-expression. Hall (1992, 1996), for example, suggests treating identity as a process, to take into account the reality of diverse and ever-changing social experience. Some scholars have introduced the idea of identification, whereby identity is perceived as made up of different components that are 'identified' and interpreted by individuals. The construction of an individual sense of self is achieved by personal choices regarding who and what to associate with. Such approaches are liberating in their recognition of the role of the individual in social interaction and the construction of identity.
Anthropologists have contributed to the debate by shifting the focus of research: One of the first challenges for the researcher wishing to carry out empirical research in this area is to identify an appropriate analytical tool. The concept of boundaries is useful here for demonstrating how identity works. In the same way as Barth, in his approach to ethnicity, advocated the critical focus for investigation as being "the ethnic boundary that defines the group rather than the cultural stuff that it encloses" (1969:15), social anthropologists such as Cohen and Bray have shifted the focus of analytical study from identity to the boundaries that are used for purposes of identification. If identity is a kind of virtual site in which the dynamic processes and markers used for identification are made apparent, boundaries provide the framework on which this virtual site is built. They concentrated on how the idea of community belonging is differently constructed by individual members and how individuals within the group conceive ethnic boundaries.
As a non-directive and flexible analytical tool, the concept of boundaries helps both to map and to define the changeability and mutability that are characteristic of people's experiences of the self in society. While identity is a volatile, flexible and abstract 'thing', its manifestations and the ways in which it is exercised are often open to view. Identity is made evident through the use of markers such as language, dress, behaviour and choice of space, whose effect depends on their recognition by other social beings. Markers help to create the boundaries that define similarities or differences between the marker wearer and the marker perceivers; their effectiveness depends on a shared understanding of their meaning. In a social context, misunderstandings can arise due to a misinterpretation of the significance of specific markers. Equally, an individual can use markers of identity to exert influence on other people without necessarily fulfilling all the criteria that an external observer might typically associate with such an abstract identity.
Boundaries can be inclusive or exclusive depending on how they are perceived by other people. An exclusive boundary arises, for example, when a person adopts a marker that imposes restrictions on the behaviour of others. An inclusive boundary is created, by contrast, by the use of a marker with which other people are ready and able to associate. At the same time, however, an inclusive boundary will also impose restrictions on the people it has included by limiting their inclusion within other boundaries. An example of this is the use of a particular language by a newcomer in a room full of people speaking various languages. Some people may understand the language used by this person while others may not. Those who do not understand it might take the newcomer's use of this particular language merely as a neutral sign of identity. But they might also perceive it as imposing an exclusive boundary that is meant to mark them off from her. On the other hand, those who do understand the newcomer's language could take it as an inclusive boundary, through which the newcomer associates herself with them to the exclusion of the other people present. Equally, however, it is possible that people who do understand the newcomer but who also speak another language may not want to speak the newcomer's language and so see her marker as an imposition and a negative boundary. It is possible that the newcomer is either aware or unaware of this, depending on whether she herself knows other languages or is conscious of the plurilingual quality of the people there and is respectful of it or not.
In philosophy
Hegel rejects Cartesian philosophy, supposing that we do not always doubt and that we do not always have consciousness. In his famous Master-Slave Dialectic, Hegel attempts to show that the mind (Geist) only becomes conscious when it encounters another mind. One Geist attempts to control the other, since up until that point it has only encountered tools for its use. A struggle for domination ensues, leading to Lordship and Bondage.
Nietzsche, who was influenced by Hegel in some ways but rejected him in others, called for a rejection of "Soul Atomism" in The Gay Science. Nietzsche supposed that the Soul was an interaction of forces, an ever-changing thing far from the immortal soul posited by both Descartes and the Christian tradition. His "Construction of the Soul" in many ways resembles modern social constructivism.
Heidegger, following Nietzsche, did work on identity. For Heidegger, people only really form an identity after facing death. It is death that allows people to choose from the socially constructed meanings in their world, and assemble a finite identity out of seemingly infinite meanings. For Heidegger, most people never escape the "they", a socially constructed identity of "how one ought to be" created mostly to try to escape death through ambiguity.
Many philosophical schools derive from rejecting Hegel, and diverse traditions of acceptance and rejection have developed.
Ricoeur has introduced the distinction between the ipse identity (selfhood, 'who am I?') and the idem identity (sameness, or a third-person perspective which objectifies identity).
Implications
The implications are multiple, as various research traditions now make heavy use of the lens of identity to examine phenomena. One implication of identity and of identity construction can be seen in occupational settings. This becomes increasingly challenging in stigmatized jobs or "dirty work" (Hughes, 1951). Tracy and Trethewey (2005) state that "individuals gravitate toward and turn away from particular jobs depending in part, on the extent to which they validate a 'preferred organizational self'". Some jobs carry different stigmas or acclaims. In her analysis Tracy uses the example of correctional officers trying to shake the stigma of "glorified maids". This identity work bears on "the process by which people arrive at justifications of and values for various occupational choices", and among its implications are workplace satisfaction and overall quality of life. People in these types of jobs are forced to find ways to create an identity they can live with. "Crafting a positive sense of self at work is more challenging when one's work is considered 'dirty' by societal standards". In other words, doing taint management is not just about allowing the employee to feel good in that job: "If employees must navigate discourses that question the viability of their work, and/or experience obstacles in managing taint through transforming dirty work into a badge of honor, it is likely they will find blaming the client to be an efficacious route in affirming their identity".
In any case, the concept that an individual has a unique identity developed relatively recently in history. Factors influencing the emphasis on personal identity may include:
in the West, the Protestant stress on one's responsibility for one's own soul
psychology itself, emerging as a distinct field of knowledge and study from the 19th century onwards
the growth of a sense of privacy since the Renaissance
specialization of worker roles during the industrial period (as opposed, for example, to the undifferentiated roles of peasants in the feudal system)
occupation and employment's effect on identity
increased emphasis on gender identity, including gender identity disorder and transgender issues
Identity changes
An important implication relates to identity change, i.e. the transformation of identity.
Contexts include:
radical career-change
gender identity transition
change of national identity
adoption
See also
References
Bibliography
Tracy, S. J. & Scott, C. (2006). "Sexuality, masculinity and taint management among firefighters and correctional officers: Getting down and dirty with America's heroes and the scum of law enforcement." Management Communication Quarterly 20(1), 6–38. doi:10.1177/0893318906287898.
Social Identity Theory: cognitive and motivational basis of intergroup differentiation. Universiteit Twente (2004).
Bray, Z. (2004). Living Boundaries: Frontiers and Identity in the Basque Country. Brussels: Presses interuniversitaires européenes, Peter Lang.
Brockmeier, J. & Carbaugh, D. (2001). Narrative and Identity: Studies in Autobiography, Self and Culture. Amsterdam/Philadelphia: John Benjamins.
Calhoun, C. (1994). "Social Theory and the Politics of Identity," in C. Calhoun (Ed.), Social Theory and Identity Politics. Oxford: Blackwell.
Camilleri, C.; Kastersztein, J. & Lipiansky E.M. et al. (1990) Stratégies Identitaires. Paris: Presses Universitaires de France.
Carey, H. C. & McLean, K. (1864). Manual of social science; being a condensation of the "Principles of social science" of H.C. Carey, LL. D.. Philadelphia: H.C. Baird.
Cohen, A. (1974). Two-Dimensional: an essay on the anthropology of power and symbolism in complex society. London: Routledge
Cohen, A. (1998). "Boundaries and Boundary-Consciousness: Politicising Cultural Identity," in M. Anderson and E. Bort (Eds.), The Frontiers of Europe. London: Printer Press.
Cohen, A. (1994). Self Consciousness: An Alternative Anthropology of Identity. London: Routledge.
Hallam, E. M., et al. (1999). Beyond the Body: Death and Social Identity. London: Routledge. ISBN 0-415-18291-3.
Little, D. (1991). Varieties of social explanation: an introduction to the philosophy of social science. Boulder: Westview Press. ISBN 0-8133-0566-7.
Meyers, D. T. (2004). Being yourself: essays on identity, action, and social life. Feminist constructions. Lanham, Md: Rowman & Littlefield Publishers. ISBN 0-7425-1478-1
Modood, T. & Werbner P. (Eds.) (1997). The Politics of Multiculturalism in the New Europe: Racism, Identity and Community. London: Zed Books.
Paksoy, Hasan Bülent (2006). IDENTITIES: How Governed, Who Pays? Malaga: Entelequia, 2nd ed. http://www.eumed.net/entelequia/pdf/b002.pdf
Sökefeld, M. (1999). "Debating Self, Identity, and Culture in Anthropology." Current Anthropology 40 (4), August–October, 417–31.
Thompson, R.H. (1989). Theories of Ethnicity. New York: Greenwood Press.
Vermeulen, H. & Gowers, C. (Eds.) (1994). The Anthropology of Ethnicity: 'Beyond Ethnic Groups and Boundaries'. Amsterdam: Het Spinhuis.
Vryan, Kevin D., Patricia A. Adler, Peter Adler. 2003. "Identity." pp. 367–390 in Handbook of Symbolic Interactionism, edited by Larry T. Reynolds and Nancy J. Herman-Kinney. Walnut Creek, CA: AltaMira.
Ward, L. F. (1897). Dynamic sociology, or Applied social science. New York: D. Appleton and company.
Ward, L. F. (1968). Dynamic sociology. Series in American studies. New York: Johnson Reprint Corp.
Weinreich, P. (1986a). The operationalisation of identity theory in racial and ethnic relations, in J.Rex and D.Mason (eds). "Theories of Race and Ethnic Relations". Cambridge: Cambridge University Press.
Weinreich, P and Saunderson, W. (Eds) (2003). "Analysing Identity: Cross-Cultural, Societal and Clinical Contexts." London: Routledge.
Werbner, P. and T. Modood. (Eds.) (1997). Debating Cultural Hybridity: Multi-Cultural Identities and the Politics of Anti-Racism. London: Zed Books.
Williams, J. M. (1920). The foundations of social science; an analysis of their psychological aspects. New York: A.A. Knopf.
Woodward, K. (2004). Questioning Identity: Gender, Class, Ethnicity. London: Routledge. ISBN 0-415-32967-1.
External links
Stanford Encyclopedia of Philosophy - Identity
Category:Sociological terminology
| 880,112 | 2017-01
Canon law | Canon law is the body of laws and regulations made by ecclesiastical authority (Church leadership), for the government of a Christian organization or church and its members. It is the internal ecclesiastical law, or operational policy, governing the Catholic Church (both the Latin Church and the Eastern Catholic Churches), the Eastern Orthodox and Oriental Orthodox churches, and the individual national churches within the Anglican Communion.Boudinhon, Auguste. "Canon Law." The Catholic Encyclopedia. Vol. 9. New York: Robert Appleton Company, 1910. 9 August 2013 The way that such church law is legislated, interpreted and at times adjudicated varies widely among these three bodies of churches. In all three traditions, a canon was originally a rule adopted by a church council; these canons formed the foundation of canon law.
Etymology
Greek kanon / κανών, Arabic Qanun / قانون, Hebrew kaneh / קנה, "straight"; a rule, code, standard, or measure; the root meaning in all these languages is "reed" (cf. the Romance-language ancestors of the English word "cane").
Canons of the Apostles
The Apostolic Canons or Ecclesiastical Canons of the Same Holy Apostles is a collection of ancient ecclesiastical decrees (eighty-five in the Eastern, fifty in the Western Church) concerning the government and discipline of the Early Christian Church, incorporated with the Apostolic Constitutions which are part of the Ante-Nicene Fathers.
In the fourth century, the First Council of Nicaea (325) calls the disciplinary measures of the Church canons; the term canon, κανών, means "rule" in Greek. There is a very early distinction between the rules enacted by the Church and the legislative measures taken by the State, called leges, Latin for "laws".
Catholic Church
In the Catholic Church, canon law is the system of laws and legal principles made and enforced by the Church's hierarchical authorities to regulate its external organization and government and to order and direct the activities of Catholics toward the mission of the Church., p. 3
In the Latin Church, positive ecclesiastical laws, based directly or indirectly upon immutable divine law or natural law, derive formal authority in the case of universal laws from the supreme legislator (i.e. the Supreme Pontiff), who possesses the totality of legislative, executive, and judicial power in his person,Canon 331, 1983 Code of Canon Law while particular laws derive formal authority from a legislator inferior to the supreme legislator. The actual subject material of the canons is not just doctrinal or moral in nature, but all-encompassing of the human condition.
The Catholic Church also includes five main rites (groups) of churches which are in full union with the Holy See and the Latin Church:
Alexandrian Rite Churches which include the Coptic Catholic Church and Ethiopian Catholic Church.
West Syrian Rite which includes the Maronite Church, Syriac Catholic Church and the Syro-Malankara Church.
Armenian Rite Church which includes the Armenian Catholic Church.
Byzantine Rite Churches which include the Albanian Greek Catholic Church, Belarusian Greek Catholic Church, Bulgarian Church, Byzantine Catholic Church of Croatia and Serbia, Greek Church, Hungarian Greek Catholic Church, Italo-Albanian Church, Macedonian Greek Catholic Church, Melkite Church, Romanian Church United with Rome, Greek-Catholic, Russian Church, Ruthenian Church, Slovak Greek Catholic Church and Ukrainian Catholic Church.
East Syrian Rite Churches which includes the Chaldean Church and Syro-Malabar Church.
All of these church groups are in full communion with the Supreme Pontiff and are subject to the Code of Canons of the Eastern Churches.
History, sources of law, and codifications
thumb|left|300px|Image of pages from the Decretum of Burchard of Worms, the 11th-century book of canon law.
The Catholic Church has what is claimed to be the oldest continuously functioning internal legal system in Western Europe, much later than Roman law but predating the evolution of modern European civil law traditions. What began with rules ("canons") adopted by the Apostles at the Council of Jerusalem in the first century has developed into a highly complex legal system encapsulating not just norms of the New Testament, but some elements of the Hebrew (Old Testament), Roman, Visigothic, Saxon, and Celtic legal traditions.
The history of Latin canon law can be divided into four periods: the jus antiquum, the jus novum, the jus novissimum and the Code of Canon Law.Ramstein, pg. 13, #8 In relation to the Code, history can be divided into the jus vetus (all law before the Code) and the jus novum (the law of the Code, or jus codicis).
The canon law of the Eastern Catholic Churches, which had developed some different disciplines and practices, underwent its own process of codification, resulting in the Code of Canons of the Eastern Churches promulgated in 1990 by Pope John Paul II.
Catholic canon law as legal system
It is a fully developed legal system, with all the necessary elements: courts, lawyers, judges, a fully articulated legal code,Ramstein, pg. 49 principles of legal interpretation, and coercive penalties, though it lacks civilly-binding force in most secular jurisdictions. The academic degrees in canon law are the J.C.B. (Juris Canonici Baccalaureatus, Bachelor of Canon Law, normally taken as a graduate degree), J.C.L. (Juris Canonici Licentiatus, Licentiate of Canon Law) and the J.C.D. (Juris Canonici Doctor, Doctor of Canon Law). Because of its specialized nature, advanced degrees in civil law or theology are normal prerequisites for the study of canon law.
Much of the legislative style was adapted from the Roman Law Code of Justinian. As a result, Roman ecclesiastical courts tend to follow the Roman Law style of continental Europe with some variation, featuring collegiate panels of judges and an investigative form of proceeding, called "inquisitorial", from the Latin "inquirere", to enquire. This is in contrast to the adversarial form of proceeding found in the common law system of English and U.S. law, which features such things as juries and single judges.
The institutions and practices of canon law paralleled the legal development of much of Europe, and consequently both modern civil law and common law (legal system) bear the influences of canon law. Edson Luiz Sampel, a Brazilian expert in canon law, says that canon law is contained in the genesis of various institutes of civil law, such as the law in continental Europe and Latin American countries. Sampel explains that canon law has significant influence in contemporary society."canon law." Encyclopædia Britannica. Encyclopædia Britannica Online Academic Edition. Encyclopædia Britannica Inc., 2013. Web. 9 August 2013.
Canonical jurisprudential theory generally follows the principles of Aristotelian-Thomistic legal philosophy. While the term "law" is never explicitly defined in the Code, the Catechism of the Catholic Church cites Aquinas in defining law as "...an ordinance of reason for the common good, promulgated by the one who is in charge of the community" and reformulates it as "...a rule of conduct enacted by competent authority for the sake of the common good."Catechism of the Catholic Church, The Moral Law§1951
The Code for the Eastern Churches
The law of the Eastern Catholic Churches in full union with Rome was in much the same state as that of the Latin or Western Church before 1917; much more diversity in legislation existed in the various Eastern Catholic Churches. Each had its own special law, in which custom still played an important part. In 1929 Pius XI informed the Eastern Churches of his intention to work out a Code for the whole of the Eastern Church. The publication of these Codes for the Eastern Churches regarding the law of persons took place between 1949 and 1958 ("In 1959, John XXIII announced for the first time his decision to reform the existing corpus of canonical legislation", http://www.vatican.va/archive/ENG1104/__P1.HTM), but the project was finalized nearly 30 years later.
The first Code of Canon Law (1917) was almost exclusively for the Latin Church, with extremely limited application to the Eastern Churches.Canon 1, 1917 Code of Canon Law. After the Second Vatican Council (1962–1965), a revised Code for the Latin Church was published in 1983. Most recently, in 1990, the Vatican promulgated the Code of Canons of the Eastern Churches, which became the first code of Eastern Catholic canon law.http://www.nyulawglobal.org/globalex/canon_law.htm
Orthodox Churches
The Greek-speaking Orthodox have collected canons and commentaries upon them in a work known as the Pēdálion (Greek: Πηδάλιον, "Rudder"), so named because it is meant to "steer" the Church. The Orthodox Christian tradition in general treats its canons more as guidelines than as laws, the bishops adjusting them to cultural and other local circumstances. Some Orthodox canon scholars point out that, had the Ecumenical Councils (which deliberated in Greek) meant for the canons to be used as laws, they would have called them nómoi/νόμοι (laws) rather than kanónes/κανόνες (rules), but almost all Orthodox conform to them. The dogmatic decisions of the Councils, though, are to be obeyed rather than to be treated as guidelines, since they are essential for the Church's unity.
Anglican Communion
In the Church of England, the ecclesiastical courts that formerly decided many matters such as disputes relating to marriage, divorce, wills, and defamation, still have jurisdiction of certain church-related matters (e.g. discipline of clergy, alteration of church property, and issues related to churchyards). Their separate status dates back to the 12th century when the Normans split them off from the mixed secular/religious county and local courts used by the Saxons. In contrast to the other courts of England the law used in ecclesiastical matters is at least partially a civil law system, not common law, although heavily governed by parliamentary statutes. Since the Reformation, ecclesiastical courts in England have been royal courts. The teaching of canon law at the Universities of Oxford and Cambridge was abrogated by Henry VIII; thereafter practitioners in the ecclesiastical courts were trained in civil law, receiving a Doctor of Civil Law (D.C.L.) degree from Oxford, or a Doctor of Laws (LL.D.) degree from Cambridge. Such lawyers (called "doctors" and "civilians") were centered at "Doctors Commons", a few streets south of St Paul's Cathedral in London, where they monopolized probate, matrimonial, and admiralty cases until their jurisdiction was removed to the common law courts in the mid-19th century.
Other churches in the Anglican Communion around the world (e.g., the Episcopal Church in the United States, and the Anglican Church of Canada) still function under their own private systems of canon law.
As of 2004, there are principles of canon law common to the churches within the Anglican Communion: their existence can be factually established; each province or church contributes through its own legal system to the principles of canon law common within the Communion; these principles have a strong persuasive authority and are fundamental to the self-understanding of each of the churches of the Communion; these principles have a living force, and contain in themselves the possibility of further development; and the existence of these principles both demonstrates unity and promotes unity within the Anglican Communion.
Presbyterian and Reformed churches
In Presbyterian and Reformed churches, canon law is known as "practice and procedure" or "church order", and includes the church's laws respecting its government, discipline, legal practice and worship.
Roman canon law had been criticized by the Presbyterians as early as 1572 in the Admonition to Parliament. The protest centered on the standard defense that canon law could be retained so long as it did not contradict the civil law. According to Polly Ha, the Reformed Church Government refuted this, claiming that the bishops had been enforcing canon law for 1500 years.
Lutheranism
The Book of Concord is the historic doctrinal statement of the Lutheran Church, consisting of ten credal documents recognized as authoritative in Lutheranism since the 16th century.Bente, Friedrich., ed. and trans., Concordia Triglotta, (St. Louis: Concordia Publishing House, 1921), p. i However, the Book of Concord is a confessional document (stating orthodox belief) rather than a book of ecclesiastical rules or discipline, like canon law. Each Lutheran national church establishes its own system of church order and discipline, though these are referred to as "canons."
United Methodist Church
The Book of Discipline contains the laws, rules, policies and guidelines for The United Methodist Church. Its most recent edition was published in 2012.
See also
Abrogation of Old Covenant laws
Canon law (Church of England)
Canon law (Episcopal Church in the United States)
Collections of ancient canons
Decretum Gratiani
Doctor of both laws
Fetha Nagast
Halakha
Ius remonstrandi
List of canon lawyers
Religious law
Rule according to higher law
Sharia
State religion
References
Further reading
Baker, J.H. (2002) An Introduction to English Legal History, 4th ed. London : Butterworths, ISBN 0-406-93053-8
Brundage, James A., The Medieval Origins of the Legal Profession: Canonists, Civilians, and Courts, Chicago : University of Chicago Press, c2008.
Brundage, James A., Medieval Canon Law, London ; New York : Longman, 1995.
The Episcopal Church (2006) Constitution and Canons, together with the Rules of Order for the Government of the Protestant Episcopal Church in the United States of America, otherwise known as The Episcopal Church, New York : Church Publishing, Inc.
Hartmann, Wilfried and Kenneth Pennington eds. (2008) The History of Medieval Canon Law in the Classical Period, 1140-1234: From Gratian to the Decretals of Pope Gregory IX, Washington, D.C.: The Catholic University of America Press.
Hartmann, Wilfried and Kenneth Pennington eds. (2011) The History of Byzantine and Eastern Canon Law to 1500, Washington, D.C.: The Catholic University of America Press.
R. C. Mortimer, Western Canon Law, London: A. and C. Black, 1953.
Robinson, O.F.,Fergus, T.D. and Gordon, W.M. (2000) European Legal History, 3rd ed., London : Butterworths, ISBN 0-406-91360-9
External links
Catholic
Codex Iuris Canonici (1983), original text in Latin (the only official text)
Code of Canon Law (1983) but with the 1998 modification of canons 750 and 1371, English translation by the Canon Law Society of America, on the Vatican website
Code of Canon Law (1983), English translation by the Canon Law Society of Great Britain and Ireland, assisted by the Canon Law Society of Australia and New Zealand and the Canadian Canon Law Society
Codex canonum ecclesiarum orientalium (1990), original text in Latin
"Code of canons of Oriental Churchs" (1990), defective English translation
Codex Iuris Canonici (1917), original text in Latin
Catholic Encyclopedia: Canon Law: outdated, but useful
Salvific Law
1983 Code of Canon Law - Notes, Commentary, Articles, Bibliography
Anglican
"Canons of the Church of England"
"Ecclesiastical Law Society"
Canon law societies
Canadian Canon Law Society
Canon Law India
Canon Law Society of America
Canon Law Society of Australia and New Zealand
Canon Law Society of Great Britain & Ireland
Canon Law Society of the Philippines
Midwest Canon Law Society (the United States)
Sociedade Brasileira de Canonistas
Category:Christian law
Category:Christian terminology | 6,469 | 2017-01 |
Sumer | Sumer ()The name is from Akkadian ; Sumerian , approximately "land of the civilized kings" or "native land". ĝir means "native, local" (ĝir NATIVE (7x: Old Babylonian) from The Pennsylvania Sumerian Dictionary). Literally, "land of the native (local, noble) lords". Stiebing (1994) has "Land of the Lords of Brightness" (William Stiebing, Ancient Near Eastern History and Culture). Postgate (1994) takes en as substituting eme "language", translating "land of the Sumerian heart" (Postgate believes it not unlikely that eme, 'tongue', became en, 'lord', through consonantal assimilation.) was the first urban civilization in the historical region of southern Mesopotamia, modern-day southern Iraq, during the Chalcolithic and Early Bronze ages, and arguably the first civilization in the world, along with Ancient Egypt and the Indus Valley.King, Leonard W. (2015) "A History of Sumer and Akkad" (ISBN 1522847308) Living along the valleys of the Tigris and Euphrates, Sumerian farmers were able to grow an abundance of grain and other crops, the surplus of which enabled them to settle in one place.
Proto-writing in the region dates back to c. 3500 BC. The earliest texts come from the cities of Uruk and Jemdet Nasr and date back to 3300 BC; early cuneiform writing emerged in 3000 BC.Cuneiform ancient.eu
Modern historians have suggested that Sumer was first permanently settled between c. 5500 and 4000 BC by a West Asian people who spoke the Sumerian language (pointing to the names of cities, rivers, basic occupations, etc., as evidence), an agglutinative language isolate. "The Ubaid Period (5500–4000 B.C.)" In Heilbrunn Timeline of Art History. Department of Ancient Near Eastern Art. The Metropolitan Museum of Art, New York (October 2003)"Ubaid Culture", The British Museum"Beyond the Ubaid", (Carter, Rober A. and Graham, Philip, eds.), University of Durham, April 2006
These conjectured, prehistoric people are now called "proto-Euphrateans" or "Ubaidians", and are theorized to have evolved from the Samarra culture of northern Mesopotamia. The Ubaidians (though never mentioned by the Sumerians themselves) are assumed by modern-day scholars to have been the first civilizing force in Sumer, draining the marshes for agriculture, developing trade, and establishing industries, including weaving, leatherwork, metalwork, masonry, and pottery.
Some scholars contest the idea of a Proto-Euphratean language or one substrate language. It has been suggested by them and others that the Sumerian language was originally that of the hunter and fisher peoples who lived in the marshland and the Eastern Arabia littoral region, and were part of the Arabian bifacial culture.Margarethe Uepermann (2007), "Structuring the Late Stone Age of Southeastern Arabia" (Arabian Archaeology and Epigraphy, Volume 3, Issue 2, pages 65–109) Reliable historical records begin much later; there are none in Sumer of any kind that have been dated before Enmebaragesi (c. 26th century BC). Juris Zarins believes the Sumerians lived along the coast of Eastern Arabia, today's Persian Gulf region, before it flooded at the end of the Ice Age.
Sumerian civilization took form in the Uruk period (4th millennium BC), continuing into the Jemdet Nasr and Early Dynastic periods. During the 3rd millennium BC, a close cultural symbiosis developed between the Sumerians, who spoke a language isolate, and Akkadian-speakers, which included widespread bilingualism. Sumerian culture seems to have appeared as a fully formed civilization, with no pre-history.
The influence of Sumerian on Akkadian (and vice versa) is evident in all areas, from lexical borrowing on a massive scale, to syntactic, morphological, and phonological convergence. This has prompted scholars to refer to Sumerian and Akkadian in the 3rd millennium BC as a Sprachbund. Sumer was conquered by the Semitic-speaking kings of the Akkadian Empire around 2270 BC (short chronology), but Sumerian continued as a sacred language.
Native Sumerian rule re-emerged for about a century in the Neo-Sumerian Empire or Third Dynasty of Ur (Sumerian Renaissance) approximately 2100-2000 BC, but the Akkadian language also remained in use. The Sumerian city of Eridu, on the coast of the Persian Gulf, is considered to have been the world's first city, where three separate cultures may have fused — that of peasant Ubaidian farmers, living in mud-brick huts and practicing irrigation; that of mobile nomadic Semitic pastoralists living in black tents and following herds of sheep and goats; and that of fisher folk, living in reed huts in the marshlands, who may have been the ancestors of the Sumerians.Leick, Gwendolyn (2003), "Mesopotamia, the Invention of the City" (Penguin)
Origin of name
The term "Sumerian" is the common name given to the ancient non-Semitic-speaking inhabitants of Mesopotamia, Sumer, by the East Semitic-speaking Akkadians. The Sumerians referred to themselves as ùĝ saĝ gíg ga (cuneiform: ), phonetically /uŋ saŋ gi ga/, literally meaning "the black-headed people", and to their land as ki-en-gi(-r) ('place' + 'lords' + 'noble'), meaning "place of the noble lords". The Akkadian word Shumer may represent the geographical name in dialect, but the phonological development leading to the Akkadian term šumerû is uncertain. Hebrew Shinar, Egyptian Sngr, and Hittite Šanhar(a), all referring to southern Mesopotamia, could be western variants of Shumer.
City-states in Mesopotamia
thumb|Map of Sumer
In the late 4th millennium BC, Sumer was divided into many independent city-states, which were divided by canals and boundary stones. Each was centered on a temple dedicated to the particular patron god or goddess of the city and ruled over by a priestly governor (ensi) or by a king (lugal) who was intimately tied to the city's religious rites.
The five "first" cities, said to have exercised pre-dynastic kingship "before the flood":
Eridu (Tell Abu Shahrain)
Bad-tibira (probably Tell al-Madain)
Larsa (Tell as-Senkereh)
Sippar (Tell Abu Habbah)
Shuruppak (Tell Fara)
Other principal cities:
Uruk (Warka)
Kish (Tell Uheimir & Ingharra)
Ur (Tell al-Muqayyar)
Nippur (Afak)
Lagash (Tell al-Hiba)
Girsu (Tello or Telloh)
Umma (Tell Jokha)
Hamazi 1
Adab (Tell Bismaya)
Mari (Tell Hariri) 2
Akshak 1
Akkad 1
Isin (Ishan al-Bahriyat)
(1 location uncertain)
(2 an outlying city in northern Mesopotamia)
Minor cities (from south to north):
Kuara (Tell al-Lahm)
Zabala (Tell Ibzeikh)
Kisurra (Tell Abu Hatab)
Marad (Tell Wannat es-Sadum)
Dilbat (Tell ed-Duleim)
Borsippa (Birs Nimrud)
Kutha (Tell Ibrahim)
Der (al-Badra)
Eshnunna (Tell Asmar)
Nagar (Tell Brak) 2
(2 an outlying city in northern Mesopotamia)
Apart from Mari, which lies a full 330 kilometres (205 miles) north-west of Agade, but which is credited in the king list as having “exercised kingship” in the Early Dynastic II period, and Nagar, an outpost, these cities are all in the Euphrates-Tigris alluvial plain, south of Baghdad in what are now the Bābil, Diyala, Wāsit, Dhi Qar, Basra, Al-Muthannā and Al-Qādisiyyah governorates of Iraq.
History
The Sumerian city-states rose to power during the prehistoric Ubaid and Uruk periods. Sumerian written history reaches back to the 27th century BC and before, but the historical record remains obscure until the Early Dynastic III period, c. the 23rd century BC, when a now deciphered syllabary writing system was developed, which has allowed archaeologists to read contemporary records and inscriptions. Classical Sumer ends with the rise of the Akkadian Empire in the 23rd century BC. Following the Gutian period, there is a brief Sumerian Renaissance in the 21st century BC, cut short in the 20th century BC by invasions by the Amorites. The Amorite "dynasty of Isin" persisted until c. 1700 BC, when Mesopotamia was united under Babylonian rule. The Sumerians were eventually absorbed into the Akkadian (Assyro-Babylonian) population.
Ubaid period: 6500 – 4100 BC (Pottery Neolithic to Chalcolithic)
Uruk period: 4100 – 2900 BC (Late Chalcolithic to Early Bronze Age I)
Uruk XIV-V: 4100 – 3300 BC
Uruk IV period: 3300 – 3100 BC
Jemdet Nasr period (Uruk III): 3100 – 2900 BC
Early Dynastic period (Early Bronze Age II-IV)
Early Dynastic I period: 2900–2800 BC
Early Dynastic II period: 2800–2600 BC (Gilgamesh)
Early Dynastic IIIa period: 2600–2500 BC
Early Dynastic IIIb period: c. 2500–2334 BC
Akkadian Empire period: c. 2334–2218 BC (Sargon)
Gutian period: c. 2218–2047 BC (Early Bronze Age IV)
Ur III period: c. 2047–1940 BC
thumb|The Samarra bowl, at the Pergamonmuseum, Berlin. The swastika in the center of the design is a reconstruction.Stanley A. Freed, Research Pitfalls as a Result of the Restoration of Museum Specimens, Annals of the New York Academy of Sciences, Volume 376, The Research Potential of Anthropological Museum Collections pages 229–245, December 1981.
Ubaid period
The Ubaid period is marked by a distinctive style of fine quality painted pottery which spread throughout Mesopotamia and the Persian Gulf. During this time, the first settlement in southern Mesopotamia was established at Eridu (Cuneiform: ), c. 6500 BC, by farmers who brought with them the Hadji Muhammed culture, which first pioneered irrigation agriculture. It appears that this culture was derived from the Samarran culture from northern Mesopotamia. It is not known whether or not these were the actual Sumerians who are identified with the later Uruk culture. Eridu remained an important religious center when it was gradually surpassed in size by the nearby city of Uruk. The story of the passing of the me (gifts of civilization) to Inanna, goddess of Uruk and of love and war, by Enki, god of wisdom and chief god of Eridu, may reflect this shift in hegemony.Wolkstein, Dianna and Kramer, Samuel Noah "Innana: Queen of Heaven and Earth".
Uruk period
The archaeological transition from the Ubaid period to the Uruk period is marked by a gradual shift from painted pottery domestically produced on a slow wheel to a great variety of unpainted pottery mass-produced by specialists on fast wheels. The Uruk period is a continuation and an outgrowth of Ubaid with pottery being the main visible change.
By the time of the Uruk period (c. 4100–2900 BC calibrated), the volume of trade goods transported along the canals and rivers of southern Mesopotamia facilitated the rise of many large, stratified, temple-centered cities (with populations of over 10,000 people) where centralized administrations employed specialized workers. It is fairly certain that it was during the Uruk period that Sumerian cities began to make use of slave labor captured from the hill country, and there is ample evidence for captured slaves as workers in the earliest texts. Artifacts, and even colonies of this Uruk civilization have been found over a wide area—from the Taurus Mountains in Turkey, to the Mediterranean Sea in the west, and as far east as central Iran.Algaze, Guillermo (2005) "The Uruk World System: The Dynamics of Expansion of Early Mesopotamian Civilization", (Second Edition, University of Chicago Press)
The Uruk period civilization, exported by Sumerian traders and colonists (like that found at Tell Brak), had an effect on all surrounding peoples, who gradually evolved their own comparable, competing economies and cultures. The cities of Sumer could not maintain remote, long-distance colonies by military force.
Sumerian cities during the Uruk period were probably theocratic and were most likely headed by a priest-king (ensi), assisted by a council of elders, including both men and women.Jacobsen, Thorkild (Ed) (1939),"The Sumerian King List" (Oriental Institute of the University of Chicago; Assyriological Studies, No. 11., 1939) It is quite possible that the later Sumerian pantheon was modeled upon this political structure. There was little evidence of organized warfare or professional soldiers during the Uruk period, and towns were generally unwalled. During this period Uruk became the most urbanized city in the world, surpassing for the first time 50,000 inhabitants.
The ancient Sumerian king list includes the early dynasties of several prominent cities from this period. The first set of names on the list is of kings said to have reigned before a major flood occurred. These early names may be fictional, and include some legendary and mythological figures, such as Alulim and Dumuzid.
The end of the Uruk period coincided with the Piora oscillation, a dry period from c. 3200 – 2900 BC that marked the end of a long wetter, warmer climate period from about 9,000 to 5,000 years ago, called the Holocene climatic optimum.Lamb, Hubert H. (1995). Climate, History, and the Modern World. London: Routledge. ISBN 0-415-12735-1
Early Dynastic Period
The dynastic period begins c. 2900 BC and is associated with a shift from the temple establishment, headed by a council of elders led by a priestly "En" (a male figure when it was a temple for a goddess, or a female figure when headed by a male god),Jacobsen, Thorkild (1976), "The Harps that Once...; Sumerian Poetry in Translation" and "Treasures of Darkness: a history of Mesopotamian Religion" towards a more secular Lugal (Lu = man, Gal = great), and includes such legendary patriarchal figures as Enmerkar, Lugalbanda and Gilgamesh—who are supposed to have reigned shortly before the historic record opens c. 2700 BC, when the now deciphered syllabic writing started to develop from the early pictograms. The center of Sumerian culture remained in southern Mesopotamia, even though rulers soon began expanding into neighboring areas, and neighboring Semitic groups adopted much of Sumerian culture for their own.
The earliest dynastic king on the Sumerian king list whose name is known from any other legendary source is Etana, 13th king of the first dynasty of Kish. The earliest king authenticated through archaeological evidence is Enmebaragesi of Kish (c. 26th century BC), whose name is also mentioned in the Gilgamesh epic—leading to the suggestion that Gilgamesh himself might have been a historical king of Uruk. As the Epic of Gilgamesh shows, this period was associated with increased war. Cities became walled, and increased in size as undefended villages in southern Mesopotamia disappeared. (Both Enmerkar and Gilgamesh are credited with having built the walls of UrukGeorge, Andrew (Translater)(2003), "The Epic of Gilgamesh" (Penguin Classics)).
1st Dynasty of Lagash
thumb|left|Fragment of Eannatum's Stele of the Vultures
c. 2500–2270 BC
The dynasty of Lagash, though omitted from the king list, is well attested through several important monuments and many archaeological finds.
Although short-lived, one of the first empires known to history was that of Eannatum of Lagash, who annexed practically all of Sumer, including Kish, Uruk, Ur, and Larsa, and reduced to tribute the city-state of Umma, arch-rival of Lagash. In addition, his realm extended to parts of Elam and along the Persian Gulf. He seems to have used terror as a matter of policy. Eannatum's Stele of the Vultures depicts vultures pecking at the severed heads and other body parts of his enemies. His empire collapsed shortly after his death.
Later, Lugal-Zage-Si, the priest-king of Umma, overthrew the primacy of the Lagash dynasty in the area, then conquered Uruk, making it his capital, and claimed an empire extending from the Persian Gulf to the Mediterranean. He was the last ethnically Sumerian king before Sargon of Akkad.
Akkadian Empire
c. 2270–2083 BC (short chronology)
The Eastern Semitic Akkadian language is first attested in proper names of the kings of Kish c. 2800 BC, preserved in later king lists. There are texts written entirely in Old Akkadian dating from c. 2500 BC. Use of Old Akkadian was at its peak during the rule of Sargon the Great (c. 2270–2215 BC), but even then most administrative tablets continued to be written in Sumerian, the language used by the scribes. Gelb and Westenholz differentiate three stages of Old Akkadian: that of the pre-Sargonic era, that of the Akkadian empire, and that of the "Neo-Sumerian Renaissance" that followed it. Akkadian and Sumerian coexisted as vernacular languages for about one thousand years, but by around 1800 BC, Sumerian was becoming more of a literary language familiar mainly only to scholars and scribes. Thorkild Jacobsen has argued that there is little break in historical continuity between the pre- and post-Sargon periods, and that too much emphasis has been placed on the perception of a "Semitic vs. Sumerian" conflict.Toward the Image of Tammuz and Other Essays on Mesopotamian History and Culture by T. Jacobsen However, it is certain that Akkadian was also briefly imposed on neighboring parts of Elam that were previously conquered, by Sargon.
Gutian period
c. 2083–2050 BC (short chronology)
2nd Dynasty of Lagash
thumb|right|Gudea of Lagash
c. 2093–2046 BC (short chronology)
Following the downfall of the Akkadian Empire at the hands of Gutians, another native Sumerian ruler, Gudea of Lagash, rose to local prominence and continued the practices of the Sargonid kings' claims to divinity.
Like the previous Lagash dynasty, Gudea and his descendants also promoted artistic development and left a large number of archaeological artifacts.
Sumerian Renaissance
thumb|left|Great Ziggurat of Ur, near Nasiriyah, Iraq
c. 2047–1940 BC (short chronology)
Later, the 3rd dynasty of Ur under Ur-Nammu and Shulgi, whose power extended as far as southern Assyria, was the last great "Sumerian renaissance", but already the region was becoming more Semitic than Sumerian, with the rise in power of the Akkadian speaking Semites in Assyria and elsewhere, and the influx of waves of Semitic Martu (Amorites) who were to found several competing local powers including Isin, Larsa, Eshnunna and eventually Babylon. The last of these eventually came to dominate the south of Mesopotamia as the Babylonian Empire, just as the Old Assyrian Empire had already done so in the north from the late 21st century BC. The Sumerian language continued as a sacerdotal language taught in schools in Babylonia and Assyria, much as Latin was used in the Medieval period, for as long as cuneiform was utilized.
Fall and Transmission
This period is generally taken to coincide with a major shift in population from southern Mesopotamia toward the north. Ecologically, the agricultural productivity of the Sumerian lands was being compromised as a result of rising salinity. Soil salinity in this region had been long recognized as a major problem. Poorly drained irrigated soils, in an arid climate with high levels of evaporation, led to the buildup of dissolved salts in the soil, eventually reducing agricultural yields severely. During the Akkadian and Ur III phases, there was a shift from the cultivation of wheat to the more salt-tolerant barley, but this was insufficient, and during the period from 2100 BC to 1700 BC, it is estimated that the population in this area declined by nearly three fifths. This greatly upset the balance of power within the region, weakening the areas where Sumerian was spoken, and comparatively strengthening those where Akkadian was the major language. Henceforth Sumerian would remain only a literary and liturgical language, similar to the position occupied by Latin in medieval Europe.
Following an Elamite invasion and sack of Ur during the rule of Ibbi-Sin (c. 1940 BC), Sumer came under Amorite rule (taken to introduce the Middle Bronze Age). The independent Amorite states of the 20th to 18th centuries are summarized as the "Dynasty of Isin" in the Sumerian king list, ending with the rise of Babylonia under Hammurabi c. 1700 BC.
Later rulers who dominated Assyria and Babylonia occasionally assumed the old Sargonic title "King of Sumer and Akkad", such as Tukulti-Ninurta I of Assyria after ca. 1225 BC.
Population
right|350px|thumb|The first farmers from Samarra migrated to Sumer, and built shrines and settlements at Eridu.
Uruk, one of Sumer's largest cities, has been estimated to have had a population of 50,000-80,000 at its height;Harmansah, Ömür, The Archaeology of Mesopotamia: Ceremonial centers, urbanization and state formation in Southern Mesopotamia, 2007, p.699 given the other cities in Sumer, and the large agricultural population, a rough estimate for Sumer's population might be 0.8 million to 1.5 million. The world population at this time has been estimated at about 27 million.Colin McEvedy and Richard Jones, 1978, Atlas of World Population History, Facts on File, New York, ISBN 0-7139-1031-3.
The Sumerians spoke a language isolate; a number of linguists believe they could detect a substrate language beneath Sumerian because names of some of Sumer's major cities are not Sumerian, revealing influences of earlier inhabitants. However, the archaeological record shows clear uninterrupted cultural continuity from the time of the early Ubaid period (5300 – 4700 BC C-14) settlements in southern Mesopotamia. The Sumerian people who settled here farmed the lands in this region that were made fertile by silt deposited by the Tigris and the Euphrates.
It is speculated by some archaeologists that Sumerian speakers were farmers who moved down from the north, after perfecting irrigation agriculture there. The Ubaid period pottery of southern Mesopotamia has been connected via Choga Mami transitional ware to the pottery of the Samarra period culture (c. 5700 – 4900 BC C-14) in the north, who were the first to practice a primitive form of irrigation agriculture along the middle Tigris River and its tributaries. The connection is most clearly seen at Tell Awayli (Oueilli, Oueili) near Larsa, excavated by the French in the 1980s, where eight levels yielded pre-Ubaid pottery resembling Samarran ware. According to this theory, farming peoples spread down into southern Mesopotamia because they had developed a temple-centered social organization for mobilizing labor and technology for water control, enabling them to survive and prosper in a difficult environment.
Others have suggested a continuity of Sumerians, from the indigenous hunter-fisherfolk traditions, associated with the Arabian bifacial assemblages found on the Arabian littoral. Juris Zarins believes the Sumerians may have been the people living in the Persian Gulf region before it flooded at the end of the last Ice Age.http://www.ldolphin.org/eden/
Culture
Social and family life
thumb|right|A reconstruction in the British Museum of headgear and necklaces worn by the women in some Sumerian graves
In the early Sumerian period, the primitive pictograms suggest that
"Pottery was very plentiful, and the forms of the vases, bowls and dishes were manifold; there were special jars for honey, butter, oil and wine, which was probably made from dates. Some of the vases had pointed feet, and stood on stands with crossed legs; others were flat-bottomed, and were set on square or rectangular frames of wood. The oil-jars, and probably others also, were sealed with clay, precisely as in early Egypt. Vases and dishes of stone were made in imitation of those of clay."
"A feathered head-dress was worn. Beds, stools and chairs were used, with carved legs resembling those of an ox. There were fire-places and fire-altars."
"Knives, drills, wedges and an instrument which looks like a saw were all known. While spears, bows, arrows, and daggers (but not swords) were employed in war."
"Tablets were used for writing purposes. Daggers with metal blades and wooden handles were worn, and copper was hammered into plates, while necklaces or collars were made of gold."
"Time was reckoned in lunar months."
There is considerable evidence that the Sumerians loved music, which seems to have been an important part of religious and civic life in Sumer. Lyres were popular in Sumer, among the best-known examples being the Lyres of Ur.
Inscriptions describing the reforms of king Urukagina of Lagash (c. 2300 BC) say that he abolished the former custom of polyandry in his country, prescribing that a woman who took multiple husbands be stoned with rocks upon which her crime had been written.Gender and the Journal: Diaries and Academic Discourse p. 62 by Cinthia Gannett, 1992
Sumerian culture was male-dominated and stratified. The Code of Ur-Nammu, the oldest such codification yet discovered, dating to the Ur-III "Sumerian Renaissance", reveals a glimpse at societal structure in late Sumerian law. Beneath the lu-gal ("great man" or king), all members of society belonged to one of two basic strata: The "lu" or free person, and the slave (male, arad; female geme). The son of a lu was called a dumu-nita until he married. A woman (munus) went from being a daughter (dumu-mi), to a wife (dam), then if she outlived her husband, a widow (numasu) and she could then remarry.
Patriarchy
Anthropological evidence suggests that most societies before Sumer, as well as contemporary civilizations, were relatively egalitarian. The early periods of Sumer were also very egalitarian by nature, but that started to change with the rise of the Early Dynastic Period. By the time the Akkadian Empire rose to power, patriarchy was a well-established cultural norm.
Language and writing
thumb|Early writing tablet recording the allocation of beer, 3100–3000 BC
The most important archaeological discoveries in Sumer are a large number of clay tablets written in cuneiform script. Sumerian writing, while proven not to be the oldest example of writing on earth, is considered to be a great milestone in the development of humanity's ability not only to create historical records but also to create pieces of literature, both in the form of poetic epics and stories as well as prayers and laws. Although pictures — that is, hieroglyphs — were first used, cuneiform and then ideograms (where symbols were made to represent ideas) soon followed. Triangular or wedge-shaped reeds were used to write on moist clay. A large body of hundreds of thousands of texts in the Sumerian language has survived, such as personal or business letters, receipts, lexical lists, laws, hymns, prayers, stories, daily records, and even libraries full of clay tablets. Monumental inscriptions and texts on different objects like statues or bricks are also very common. Many texts survive in multiple copies because they were repeatedly transcribed by scribes-in-training. Sumerian continued to be the language of religion and law in Mesopotamia long after Semitic speakers had become dominant.
The Sumerian language is generally regarded as a language isolate in linguistics because it belongs to no known language family; Akkadian, by contrast, belongs to the Semitic branch of the Afroasiatic languages. There have been many failed attempts to connect Sumerian to other language families. It is an agglutinative language; in other words, morphemes ("units of meaning") are added together to create words, unlike analytic languages, where morphemes are combined only to create sentences. Some authors have proposed that there may be evidence of a substratum or adstratum language for geographic features and various crafts and agricultural activities, called variously Proto-Euphratean or Proto-Tigrean, but this is disputed by others.
Understanding Sumerian texts today can be problematic even for experts. Most difficult are the earliest texts, which in many cases do not give the full grammatical structure of the language and seem to have been used as an "aide-mémoire" for knowledgeable scribes.
During the 3rd millennium BC a cultural symbiosis developed between the Sumerians and the Akkadians, which included widespread bilingualism. The influence of Sumerian on Akkadian, and vice versa, is evident in all areas, from lexical borrowing on a massive scale to syntactic, morphological, and phonological convergence. This mutual influence has prompted scholars to refer to Sumerian and Akkadian of the 3rd millennium BC as a Sprachbund.
Akkadian gradually replaced Sumerian as a spoken language somewhere around the turn of the 3rd and the 2nd millennium BC,Woods C. 2006 “Bilingualism, Scribal Learning, and the Death of Sumerian”. In S.L. Sanders (ed) Margins of Writing, Origins of Culture: 91-120 Chicago but Sumerian continued to be used as a sacred, ceremonial, literary, and scientific language in Babylonia and Assyria until the 1st century CE.
Religion
Sumerian religion seems to have been founded upon two separate cosmogenic myths. The first saw creation as the result of a series of hieroi gamoi, or sacred marriages, involving the reconciliation of opposites, postulated as a coming together of male and female divine beings, the gods. This continued to influence the whole Mesopotamian mythos. Thus in the later Akkadian Enuma Elish, creation was seen as the union of fresh and salt water: the male Abzu and the female Tiamat. The products of that union, Lahmu and Lahamu, "the muddy ones", were titles given to the gatekeepers of the E-Abzu temple of Enki in Eridu, the first Sumerian city. Reflecting the way that muddy islands emerge from the confluence of fresh and salty water at the mouth of the Euphrates, where the river deposits its load of silt, a second hieros gamos was held to have created Anshar and Kishar, the "sky-pivot" or axle and the "earth-pivot", parents in turn of Anu (the sky) and Ki (the earth). Another important Sumerian hieros gamos was that between Ki, here known as Ninhursag or "Lady of the Mountains", and Enki of Eridu, the god of the fresh water which brought forth greenery and pasture.
At an early stage following the dawn of recorded history, Nippur in central Mesopotamia replaced Eridu in the south as the primary temple city, whose priests also conferred the status of political hegemony on the other city-states. Nippur retained this status throughout the Sumerian period.
Deities
thumb|Tell Asmar votive sculpture, 2750–2600 BC
Sumerians believed in an anthropomorphic polytheism, that is, the belief in many gods in human form. There was no common set of gods; each city-state had its own patrons, temples, and priest-kings. These were not exclusive, however, and the gods of one city were often acknowledged elsewhere. Sumerian speakers were among the earliest people to record their beliefs in writing, and were a major inspiration in later Mesopotamian mythology, religion, and astrology.
The Sumerians worshiped:
An, the god equivalent to heaven; indeed, the word an in Sumerian means "sky", and his consort Ki means "earth".
Enki, in the south at the temple in Eridu. Enki was the god of beneficence and of wisdom, ruler of the freshwater depths beneath the earth, a healer and friend to humanity who in Sumerian myth was thought to have given humans the arts and sciences, the industries and manners of civilization; the first law-book was considered his creation.
Enlil, god of the north wind, in Nippur, husband of Ninlil, the south wind. King of the Sumerian gods, he gave mankind the spells and incantations that the spirits of good or evil must obey.
Inanna, goddess of love and war, the deification of Venus, the morning (eastern) and evening (western) star, at the temple (shared with An) at Uruk.
The sun-god Utu, at Larsa in the south and Sippar in the north.
The moon god Sin, at Ur.
thumb|450px|left|Sumero-early Akkadian pantheon
These deities formed a core pantheon; there were additionally hundreds of minor ones. Sumerian gods could thus have associations with different cities, and their religious importance often waxed and waned with those cities' political power. The gods were said to have created human beings from clay for the purpose of serving them. The temples organized the mass labour projects needed for irrigation agriculture. Citizens had a labor duty to the temple, though they could avoid it by a payment of silver.
Cosmology
Sumerians believed that the universe consisted of a flat disk enclosed by a dome. The Sumerian afterlife involved a descent into a gloomy netherworld to spend eternity in a wretched existence as a Gidim (ghost).
The universe was divided into four quarters.
To the north were the hill-dwelling Subartu who were periodically raided for slaves, timber, and raw materials.
To the west were the tent-dwelling Martu, ancient Semitic-speaking peoples living as pastoral nomads tending herds of sheep and goats.
To the south was the land of Dilmun, a trading state associated with the land of the dead and the place of creation.
To the east were the Elamites, a rival people with whom the Sumerians were frequently at war.
Their known world extended from the Upper Sea, or Mediterranean coastline, to the Lower Sea (the Persian Gulf), and to the lands of Meluhha (probably the Indus Valley) and Magan (Oman), famed for its copper ores.
Temple and temple organisation
Ziggurats (Sumerian temples) each had an individual name and consisted of a forecourt with a central pond for purification.Leick, Gwendolyn (2003), Mesopotamia: The Invention of the City (Penguin) The temple itself had a central nave with aisles along either side. Flanking the aisles would be rooms for the priests. At one end would stand the podium and a mudbrick table for animal and vegetable sacrifices. Granaries and storehouses were usually located near the temples. After a time the Sumerians began to place the temples on top of multi-layered square constructions built as a series of rising terraces, giving rise to the ziggurat style.Crawford, Harriet (1993), "Sumer and the Sumerians" (Cambridge University Press, New York, 1993), ISBN 0-521-38850-3.
Funerary practices
It was believed that when people died, they would be confined to a gloomy world of Ereshkigal, whose realm was guarded by gateways with various monsters designed to prevent people entering or leaving. The dead were buried outside the city walls in graveyards where a small mound covered the corpse, along with offerings to monsters and a small amount of food. Those who could afford it sought burial at Dilmun.Bibby Geoffrey and Carl Phillips (2013), "Looking for Dilmun" (Alfred A. Knopf) Human sacrifice was found in the death pits at the Ur royal cemetery where Queen Puabi was accompanied in death by her servants.
It is also said that the Sumerians invented the first oboe-like instruments and used them at royal funerals.
Agriculture and hunting
The Sumerians adopted an agricultural lifestyle perhaps as early as c. 5000–4500 BC. The region demonstrated a number of core agricultural techniques, including organized irrigation, large-scale intensive cultivation of land, mono-cropping involving the use of plough agriculture, and the use of a specialized agricultural labour force under bureaucratic control. The necessity to manage temple accounts with this organization led to the development of writing (c. 3500 BC).
thumb|300px|A panel from the royal tombs of Ur, made of lapis lazuli and shell, showing scenes of peacetime
In the early Sumerian Uruk period, the primitive pictograms suggest that sheep, goats, cattle, and pigs were domesticated. They used oxen as their primary beasts of burden and donkeys or equids as their primary transport animal and "woollen clothing as well as rugs were made from the wool or hair of the animals. ... By the side of the house was an enclosed garden planted with trees and other plants; wheat and probably other cereals were sown in the fields, and the shaduf was already employed for the purpose of irrigation. Plants were also grown in pots or vases."
thumb|An account of barley rations issued monthly to adults and children written in cuneiform script on a clay tablet, written in year 4 of King Urukagina, c. 2350 BC
The Sumerians were one of the first known beer-drinking societies. Cereals were plentiful and were the key ingredient in their early brew. They brewed multiple kinds of beer, including wheat, barley, and mixed-grain beers. Beer brewing was very important to the Sumerians. It was referenced in the Epic of Gilgamesh when Enkidu was introduced to the food and beer of Gilgamesh's people: "Drink the beer, as is the custom of the land... He drank the beer-seven jugs! and became expansive and sang with joy!"
The Sumerians practiced similar irrigation techniques as those used in Egypt. American anthropologist Robert McCormick Adams says that irrigation development was associated with urbanization, and that 89% of the population lived in the cities.
They grew barley, chickpeas, lentils, wheat, dates, onions, garlic, lettuce, leeks and mustard. Sumerians caught many fish and hunted fowl and gazelle.
Sumerian agriculture depended heavily on irrigation. The irrigation was accomplished by the use of shadufs, canals, channels, dykes, weirs, and reservoirs. The frequent violent floods of the Tigris, and to a lesser extent the Euphrates, meant that canals required frequent repair and continual removal of silt, and survey markers and boundary stones needed to be continually replaced. The government required individuals to work on the canals in a corvée, although the rich were able to exempt themselves.
As is known from the "Sumerian Farmer's Almanac", after the flood season and after the Spring Equinox and the Akitu or New Year Festival, using the canals, farmers would flood their fields and then drain the water. Next they made oxen stomp the ground and kill weeds. They then dragged the fields with pickaxes. After drying, they plowed, harrowed, and raked the ground three times, and pulverized it with a mattock, before planting seed. Unfortunately the high evaporation rate resulted in a gradual increase in the salinity of the fields. By the Ur III period, farmers had switched from wheat to the more salt-tolerant barley as their principal crop.
Sumerians harvested during the spring in three-person teams consisting of a reaper, a binder, and a sheaf handler.By the sweat of thy brow: Work in the Western world, Melvin Kranzberg, Joseph Gies, Putnam, 1975 The farmers would use threshing wagons, driven by oxen, to separate the cereal heads from the stalks and then use threshing sleds to disengage the grain. They then winnowed the grain/chaff mixture.
Architecture
The Tigris-Euphrates plain lacked minerals and trees. Sumerian structures were made of plano-convex mudbrick, not fixed with mortar or cement. Mud-brick buildings eventually deteriorate, so they were periodically destroyed, leveled, and rebuilt on the same spot. This constant rebuilding gradually raised the level of cities, which thus came to be elevated above the surrounding plain. The resultant hills, known as tells, are found throughout the ancient Near East.
According to Archibald Sayce, the primitive pictograms of the early Sumerian (i.e. Uruk) era suggest that "Stone was scarce, but was already cut into blocks and seals. Brick was the ordinary building material, and with it cities, forts, temples and houses were constructed. The city was provided with towers and stood on an artificial platform; the house also had a tower-like appearance. It was provided with a door which turned on a hinge, and could be opened with a sort of key; the city gate was on a larger scale, and seems to have been double. The foundation stones — or rather bricks — of a house were consecrated by certain objects that were deposited under them."
The most impressive and famous of Sumerian buildings are the ziggurats, large layered platforms which supported temples. Sumerian cylinder seals also depict houses built from reeds not unlike those built by the Marsh Arabs of Southern Iraq until as recently as 400 CE. The Sumerians also developed the arch, which enabled them to develop a strong type of dome. They built this by constructing and linking several arches. Sumerian temples and palaces made use of more advanced materials and techniques, such as buttresses, recesses, half columns, and clay nails.
Mathematics
The Sumerians developed a complex system of metrology c. 4000 BC. This advanced metrology resulted in the creation of arithmetic, geometry, and algebra. From c. 2600 BC onwards, the Sumerians wrote multiplication tables on clay tablets and dealt with geometrical exercises and division problems. The earliest traces of the Babylonian numerals also date back to this period.Duncan J. Melville (2003). Third Millennium Chronology, Third Millennium Mathematics. St. Lawrence University. The period c. 2700–2300 BC saw the first appearance of the abacus, and of a table of successive columns which delimited the successive orders of magnitude of their sexagesimal (base-60) number system. The Sumerians were the first to use a place-value numeral system. There is also anecdotal evidence that the Sumerians may have used a type of slide rule in astronomical calculations. They were the first to find the area of a triangle and the volume of a cube.
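The place-value principle can be illustrated with a short, purely modern Python sketch; the function name and the sample values below are invented for this example and are not drawn from any Sumerian source. Each successive column holds a digit from 0 to 59, just as each decimal column holds a digit from 0 to 9.

```python
# Illustrative only: convert a whole number into sexagesimal (base-60)
# place-value digits, most significant first.

def to_sexagesimal(n: int) -> list[int]:
    """Return the base-60 digits of a non-negative integer, most significant first."""
    if n < 0:
        raise ValueError("expected a non-negative integer")
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        n, remainder = divmod(n, 60)
        digits.append(remainder)
    return list(reversed(digits))

print(to_sexagesimal(4000))  # [1, 6, 40] -> 1*3600 + 6*60 + 40
print(to_sexagesimal(59))    # [59]       -> a single sexagesimal digit
print(to_sexagesimal(3600))  # [1, 0, 0]  -> one unit in the 60*60 column
```

Reading the digits against successive powers of 60 (1, 60, 3600, ...) recovers the original value, which is the same economy of notation that a place-value system offered Sumerian scribes over purely additive numbering.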
Economy and trade
thumb|Bill of sale of a male slave and a building in Shuruppak, Sumerian tablet, c. 2600 BC
Discoveries of obsidian from far-away locations in Anatolia, of lapis lazuli from Badakhshan in northeastern Afghanistan, of beads from Dilmun (modern Bahrain), and of several seals inscribed with the Indus Valley script suggest a remarkably wide-ranging network of ancient trade centered on the Persian Gulf. Imports to Ur, for example, came from many parts of the world; metals of all types, in particular, had to be imported.
The Epic of Gilgamesh refers to trade with far lands for goods such as wood that were scarce in Mesopotamia. In particular, cedar from Lebanon was prized. The finding of resin in the tomb of Queen Puabi at Ur indicates that it was traded from as far away as Mozambique.
The Sumerians used slaves, although they were not a major part of the economy. Slave women worked as weavers, pressers, millers, and porters.
Sumerian potters decorated pots with cedar oil paints. The potters used a bow drill to produce the fire needed for baking the pottery. Sumerian masons and jewelers knew and made use of alabaster (calcite), ivory, iron, gold, silver, carnelian, and lapis lazuli.Diplomacy by design: Luxury arts and an "international style" in the ancient Near East, 1400-1200 BC, Marian H. Feldman, University of Chicago Press, 2006, pp. 120-121
Money and credit
Large institutions kept their accounts in barley and silver, often with a fixed rate of exchange between them. Obligations, loans, and prices in general were usually denominated in one of them. Many transactions involved debt, for example goods consigned to merchants by temples and beer advanced by "ale women".
Commercial credit and agricultural consumer loans were the main types of loans. Trade credit was usually extended by temples in order to finance trade expeditions and was denominated in silver. The interest rate was set at 1/60 a month (one shekel per mina) some time before 2000 BC, and it remained at that level for about two thousand years.
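As a rough modern check on what that customary rate implies over a year, the following sketch assumes the standard equivalence of 60 shekels to one mina and a 12-month year; the compound figure is shown only for comparison, since the sources cited here do not say whether interest was compounded.

```python
# Illustrative arithmetic only: one shekel of interest per mina lent, per month.
SHEKELS_PER_MINA = 60
monthly_rate = 1 / SHEKELS_PER_MINA              # 1/60, about 1.67% per month

simple_annual = 12 * monthly_rate                # no compounding: 20% per year
compound_annual = (1 + monthly_rate) ** 12 - 1   # monthly compounding: about 21.9% per year

print(f"monthly: {monthly_rate:.2%}")
print(f"simple annual: {simple_annual:.2%}")
print(f"compound annual: {compound_annual:.2%}")
```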
Rural loans commonly arose as a result of unpaid obligations due to an institution (such as a temple); in this case the arrears were considered to be lent to the debtor. They were denominated in barley or other crops, and the interest rate was typically much higher than for commercial loans, amounting to as much as 1/3 to 1/2 of the loan principal.
Periodically "clean slate" decrees were signed by rulers which cancelled all the rural (but not commercial) debt and allowed bondservants to return to their homes. Customarily rulers did it at the beginning of the first full year of their reign, but they could also be proclaimed at times of military conflict or crop failure. The first known ones were made by Enmetena and Urukagina of Lagash in 2400-2350 BC. According to Hudson, the purpose of these decrees was to prevent debts mounting to a degree that they threatened fighting force which could happen if peasants lost the subsistence land or became bondservants due to the inability to repay the debt.
Military
thumb|Early chariots on the Standard of Ur, c. 2600 BC
thumb|Battle formations on a fragment of the Stele of the Vultures
The almost constant wars among the Sumerian city-states for 2000 years helped to develop the military technology and techniques of Sumer to a high level.Roux, Georges (1992), "Ancient Iraq" (Penguin) The first war recorded in any detail was between Lagash and Umma in c. 2525 BC on a stele called the Stele of the Vultures. It shows the king of Lagash leading a Sumerian army consisting mostly of infantry. The infantrymen carried spears, wore copper helmets, and carried rectangular shields. The spearmen are shown arranged in what resembles the phalanx formation, which requires training and discipline; this implies that the Sumerians may have made use of professional soldiers.Winter, Irene J. (1985). "After the Battle is Over: The 'Stele of the Vultures' and the Beginning of Historical Narrative in the Art of the Ancient Near East". In Kessler, Herbert L.; Simpson, Marianna Shreve. Pictorial Narrative in Antiquity and the Middle Ages. Center for Advanced Study in the Visual Arts, Symposium Series IV 16. Washington DC: National Gallery of Art. pp. 11–32. ISSN 0091-7338
The Sumerian military used carts harnessed to onagers. These early chariots functioned less effectively in combat than did later designs, and some have suggested that these chariots served primarily as transports, though the crew carried battle-axes and lances. The Sumerian chariot was a two- or four-wheeled vehicle manned by a crew of two and harnessed to four onagers. The cart was composed of a woven basket, and the wheels had a solid three-piece design.
Sumerian cities were surrounded by defensive walls. The Sumerians engaged in siege warfare between their cities, but the mudbrick walls were able to deter some foes.
Technology
Examples of Sumerian technology include: the wheel, cuneiform script, arithmetic and geometry, irrigation systems, Sumerian boats, lunisolar calendar, bronze, leather, saws, chisels, hammers, braces, bits, nails, pins, rings, hoes, axes, knives, lancepoints, arrowheads, swords, glue, daggers, waterskins, bags, harnesses, armor, quivers, war chariots, scabbards, boots, sandals, harpoons and beer.
The Sumerians had three main types of boats:
clinker-built sailboats stitched together with hair, featuring bitumen waterproofing
skin boats constructed from animal skins and reeds
wooden-oared ships, sometimes pulled upstream by people and animals walking along the nearby banks
Legacy
Evidence of wheeled vehicles appeared in the mid-4th millennium BC, near-simultaneously in Mesopotamia, the Northern Caucasus (Maykop culture) and Central Europe. The wheel initially took the form of the potter's wheel. The new concept quickly led to wheeled vehicles and mill wheels. The Sumerians' cuneiform script is the oldest (or second oldest after the Egyptian hieroglyphs) to have been deciphered (the status of even older inscriptions such as the Jiahu symbols and Tartaria tablets is controversial). The Sumerians were among the first astronomers, mapping the stars into sets of constellations, many of which survived in the zodiac and were also recognized by the ancient Greeks. They were also aware of the five planets that are easily visible to the naked eye.
They invented and developed arithmetic by using several different number systems including a mixed radix system with an alternating base 10 and base 6. This sexagesimal system became the standard number system in Sumer and Babylonia. They may have invented military formations and introduced the basic divisions between infantry, cavalry, and archers. They developed the first known codified legal and administrative systems, complete with courts, jails, and government records. The first true city-states arose in Sumer, roughly contemporaneously with similar entities in what are now Syria and Lebanon. Several centuries after the invention of cuneiform, the use of writing expanded beyond debt/payment certificates and inventory lists to be applied for the first time, about 2600 BC, to messages and mail delivery, history, legend, mathematics, astronomical records, and other pursuits. Conjointly with the spread of writing, the first formal schools were established, usually under the auspices of a city-state's primary temple.
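The "alternating base 10 and base 6" structure mentioned above refers to how each sexagesimal digit was itself written: a count of tens-signs (0 to 5) followed by a count of unit-signs (0 to 9). The short Python sketch below is a modern illustration of that decomposition, not a reconstruction of scribal practice; the function name and examples are invented for this purpose.

```python
# Illustrative only: split a base-60 digit into the tens-signs and unit-signs
# used to write it, reflecting the alternating base-10/base-6 structure.

def split_sexagesimal_digit(d: int) -> tuple[int, int]:
    """Return (tens_count, units_count) for a sexagesimal digit in the range 0-59."""
    if not 0 <= d <= 59:
        raise ValueError("a sexagesimal digit must lie between 0 and 59")
    return divmod(d, 10)

for digit in (7, 23, 59):
    tens, units = split_sexagesimal_digit(digit)
    print(f"{digit:2d} -> {tens} tens-sign(s) + {units} unit-sign(s)")
# 7  -> 0 tens-sign(s) + 7 unit-sign(s)
# 23 -> 2 tens-sign(s) + 3 unit-sign(s)
# 59 -> 5 tens-sign(s) + 9 unit-sign(s)
```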
Finally, the Sumerians ushered in domestication with intensive agriculture and irrigation. Emmer wheat, barley, sheep (starting as mouflon), and cattle (starting as aurochs) were foremost among the species cultivated and raised for the first time on a grand scale.
See also
Judaism
Marsh Arabs (on the DNA distribution of Marsh Arabs)
History of Iraq
History of writing numbers
Ancient Mesopotamian units of measurement
Toynbee's law of challenge and response
Ancient Mesopotamian religion
Notes
References
Further reading
Ascalone, Enrico. 2007. Mesopotamia: Assyrians, Sumerians, Babylonians (Dictionaries of Civilizations; 1). Berkeley: University of California Press. ISBN 0-520-25266-7 (paperback).
Bottéro, Jean, André Finet, Bertrand Lafont, and George Roux. 2001. Everyday Life in Ancient Mesopotamia. Edinburgh: Edinburgh University Press, Baltimore: Johns Hopkins University Press.
Crawford, Harriet E. W. 2004. Sumer and the Sumerians. Cambridge: Cambridge University Press.
Leick, Gwendolyn. 2002. Mesopotamia: Invention of the City. London and New York: Penguin.
Lloyd, Seton. 1978. The Archaeology of Mesopotamia: From the Old Stone Age to the Persian Conquest. London: Thames and Hudson.
Nemet-Nejat, Karen Rhea. 1998. Daily Life in Ancient Mesopotamia. London and Westport, Conn.: Greenwood Press.
Kramer, Samuel Noah. Sumerian Mythology: A Study of Spiritual and Literary Achievement in the Third Millennium BC.
Roux, Georges. 1992. Ancient Iraq, 560 pages. London: Penguin (earlier printings may have different pagination: 1966, 480 pages, Pelican; 1964, 431 pages, London: Allen and Unwin).
Schomp, Virginia. Ancient Mesopotamia: The Sumerians, Babylonians, And Assyrians.
Sumer: Cities of Eden (Timelife Lost Civilizations). Alexandria, VA: Time-Life Books, 1993 (hardcover, ISBN 0-8094-9887-1).
Woolley, C. Leonard. 1929. The Sumerians. Oxford: Clarendon Press.
External links
Ancient Sumer History --- The History of the Ancient Near East Electronic Compendium
Iraq’s Ancient Past — Penn Museum
The Sumerians
Geography
The History Files: Ancient Mesopotamia
Language
Sumerian Language Page, perhaps the oldest Sumerian website on the web (it dates back to 1996), features compiled lexicon, detailed FAQ, extensive links, and so on.
ETCSL: The Electronic Text Corpus of Sumerian Literature has complete translations of more than 400 Sumerian literary texts.
PSD: The Pennsylvania Sumerian Dictionary, while still in its initial stages, can be searched on-line, from August 2004.
CDLI: Cuneiform Digital Library Initiative a large corpus of Sumerian texts in transliteration, largely from the Early Dynastic and Ur III periods, accessible with images.
Category:States and territories established in the 4th millennium BC
Category:States and territories established in the 3rd millennium BC
Category:States and territories disestablished in the 20th century BC
Category:Civilizations
Category:Lists of coordinates
Category:Archaeology of Iraq
Category:Levant
Category:Populated places established in the 6th millennium BC
Category:6th-millennium BC establishments | 50,521 | 2017-01 |
Modern history | Modern history, the modern period or the modern era, is the global historiographical approach to the timeframe after post-classical history.Intrinsic to the English language, "modern" denotes (in reference to history) a period that is opposed to either ancient or medieval—modern history comprising the history of the world since the close of the Middle Ages. Modern history can be further broken down into periods:
The early modern period began approximately in the early 16th century; notable historical milestones included the European Renaissance and the Age of Discovery.Dunan, Marcel. Larousse Encyclopedia of Modern History, From 1500 to the Present Day. New York: Harper & Row, 1964.
The late modern period began approximately in the mid-18th century; notable historical milestones included the French Revolution, American Revolution, the Industrial Revolution, and the Great Divergence. It took all of human history up to 1804 for the world's population to reach 1 billion; the next billion came just over a century later, in 1927.http://www.pbs.org/newshour/bb/world-july-dec11-population1_10-27/
Contemporary history is the span of historic events that are immediately relevant to the present time.
Study guide
Some events, while not without precedent, show a new way of perceiving the world. The concept of modernity interprets the general meaning of these events and seeks explanations for major developments.
Source text
The fundamental difficulty of studying modern history is the sheer volume of it that has been documented up to the present day. It is imperative to consider the reliability of the information obtained from these records.
Terminology and usage
Pre-modern
In the pre-modern era, many people's sense of self and purpose was often expressed via a faith in some form of deity, be that in a single God or in many gods.Tirosh-Samuelson, H. (2003). Happiness in premodern Judaism: Virtue, knowledge, and well-being. Monographs of the Hebrew Union College, no. 29. Cincinnati, Ohio: Hebrew Union College Press. Pre-modern cultures are not thought to have created a sense of distinct individuality,Nation, civil society and social movements: essays in political sociology by T. K. Oommen. Page 236.Premodern Japan: a historical survey by Mikiso HaneGriffin, D. R. (1990). Sacred interconnections: Postmodern spirituality, political economy, and art. SUNY series in constructive postmodern thought. Albany: State University of New York Press. though. Religious officials, who often held positions of power, were the spiritual intermediaries to the common person. It was only through these intermediaries that the general masses had access to the divine. Tradition was sacred to ancient cultures and was unchanging; the social order of ceremony and morals in a culture could be strictly enforced.Maine, H. S., & Dwight, T. W. (1888). Ancient law: Its connection with the early history of society and its relation to modern ideas. New York: H. Holt and Co.Boylan, P. (1922). Thoth, the Hermes of Egypt: A study of some aspects of theological thought in ancient Egypt. London: H. Milford, Oxford university pressFordyce, J. (1888). The new social order. London: Kegan Paul, Trench & Co.
Modern
The term "modern" was coined in the 16th century to indicate present or recent times (ultimately derived from the Latin adverb modo, meaning "just now"). The European Renaissance (about 1420–1630), which marked the transition between the Late Middle Ages and Early Modern times, started in Italy and was spurred in part by the rediscovery of classical art and literature, as well as the new perspectives gained from the Age of Discovery and the invention of the telescope and microscope, expanding the borders of thought and knowledge.
In contrast to the pre-modern era, Western civilization made a gradual transition from pre-modernity to modernity when scientific methods were developed which led many to believe that the use of science would lead to all knowledge, thus throwing back the shroud of myth under which pre-modern peoples lived. New information about the world was discovered via empirical observation,Baird, F. E., & Kaufmann, W. A. (2008). Philosophic classics: From Plato to Derrida. Upper Saddle River, N.J: Pearson/Prentice Hall. versus the historic use of reason and innate knowledge.
The term "Early Modern" was introduced in the English language in the 1930s.New Dictionary of the History of ideas, Volume 5, Detroit 2005. Modernism and Modern to distinguish the time between what we call Middle Ages and time of the late Enlightenment (1800) (when the meaning of the term Modern Ages was developing its contemporary form). It is important to note that these terms stem from European history. In usage in other parts of the world, such as in Asia, and in Muslim countries, the terms are applied in a very different way, but often in the context with their contact with European culture in the Age of Discovery.
Contemporary
In the Contemporary era, there were various socio-technological trends. In the 21st century and the late modern world, the Information Age and computers were at the forefront of use, not completely ubiquitous but often present in everyday life. The development of Eastern powers was of note, with China and India becoming more powerful. In the Eurasian theater, the European Union and the Russian Federation were two recently developed forces. A concern for the Western world, if not the whole world, was the late modern form of terrorism and the warfare that has resulted from contemporary terrorist acts.
Modern era
Significant developments
The modern period has been a period of significant development in the fields of science, politics, warfare, and technology. It has also been an age of discovery and globalization. During this time, the European powers and later their colonies, began a political, economic, and cultural colonization of the rest of the world.
By the late 19th and 20th centuries, modernist art, politics, science and culture had come to dominate not only Western Europe and North America, but almost every civilized area on the globe, including movements thought of as opposed to the West and globalization. The modern era is closely associated with the development of individualism,National, cultural, and ethnic identities: harmony beyond conflict by Jaroslav Hroch, David Hollan capitalism,Capitalism and modernity: the great debate by Jack Goody urbanization and a belief in the possibilities of technological and political progress.Progress and its discontents American Academy of Arts and Sciences. "Technology and politics." Western CenterA companion to the philosophy of technology by Jan-Kyrre Berg Olsen, Stig Andur Pedersen, Vincent F. Hendricks
The brutal wars and other problems of this era, many of which come from the effects of rapid change, and the connected loss of strength of traditional religious and ethical norms, have led to many reactions against modern development.Marx, Durkheim, Weber: formations of modern social thought by Kenneth L. Morrison. Page 294.William Schweiker, The Blackwell companion to religious ethics. 2005. Page 454. (cf., "In modernity, however, much of economic activity and theory seemed to be entirely cut off from religious and ethical norms, at least in traditional terms. Many see modern economic developments as entirely secular.") Optimism and belief in constant progress has been most recently criticized by postmodernism while the dominance of Western Europe and Anglo-America over other continents has been criticized by postcolonial theory.
One common conception of modernity is the condition of Western history since the mid-15th century, or roughly the European development of movable typeEarly European History by Henry Kitchell Webster and the printing press.The European Reformations by Carter Lindberg In this context the "modern" society is said to develop over many periods, and to be influenced by important events that represent breaks in the continuity.The new Cambridge modern history: Companion volume by Peter BurkePlains Indian History and Culture: Essays on Continuity and Change by John C. EwersWeber, irrationality, and social order by Alan Sica
Early
The modern era includes the early period, called the early modern period, which lasted from c. 1500 to c. 1800 (most often 1815). Particular facets of early modernity include:
The Renaissance
The Reformation and Counter-Reformation
The Age of Discovery
The rise of capitalism
The Golden Age of Piracy
Important events in the early modern period include:
The invention of the printing press
The English Civil War
The Seven Years' War
Modern Age characteristics
The concept of the modern world as distinct from an ancient or medieval world rests on a sense that the modern world is not just another era in history, but rather the result of a new type of change. This is usually conceived of as progress driven by deliberate human efforts to better their situation.
Advances in all areas of human activity—politics, industry, society, economics, commerce, transport, communication, mechanization, automation, science, medicine, technology, and culture—appear to have transformed an Old World into the Modern or New World.Contemporary history of the world by Edwin Augustus GrosvenorA summary of modern history by Jules Michelet, Mary Charlotte Mair Simpson In each case, the identification of the revolutionary change can be used to demarcate the old and old-fashioned from the modern.
Portions of the modern world altered their relationship with the Biblical value system, revalued the monarchical government system, and abolished the feudal economic system, with new democratic and liberal ideas in the areas of politics, science, psychology, sociology, and economics.
This combination of epochal events totally changed thinking and thought in the early modern period, and so their dates serve as well as any to separate the old from the new modes.
As an Age of Revolutions dawned, beginning with those revolts in America and France, political changes were then pushed forward in other countries partly as a result of the upheavals of the Napoleonic Wars and their impact on thought and thinking, from concepts of nationalism to the organizing of armies.Crawley, C. W. (1965). The new Cambridge modern history. Volume 9., War and peace in an age of upheaval, 1793–1830. Cambridge: Cambridge University Press.Goldman, E. O., & Eliason, L. C. (2003). The diffusion of military technology and ideas. Stanford, Calif: Stanford University Press.Boot, M. (2006). War made new: Technology, warfare, and the course of history, 1500 to today. New York: Gotham Books.
The early period ended in a time of political and economic change as a result of mechanization in society, the American Revolution, and the first French Revolution; other factors included the redrawing of the map of Europe by the Final Act of the Congress of Vienna and the peace established by the Second Treaty of Paris, which ended the Napoleonic Wars.Hazen, Charles Downer (1910). Europe since 1815. American historical series, H. Holt and Company.
Late
As a result of the Industrial Revolution and the earlier political revolutions, the worldviews of Modernism emerged. The industrialization of many nations began with the industrialization of Britain. Particular facets of the late modernity period include:
Increasing role of science and technology
Mass literacy and proliferation of mass media
Spread of social movements
Institution of representative democracy
Individualism
Industrialization
Urbanization
Total fertility rates and ecological collapses occurring at geometric rates
Other important events in the development of the Late modern period include:
The American Revolution
The French Revolution
The Revolutions of 1848
The Russian Revolution
The First World War and the Second World War
Our most recent era, Modern Times, begins with the end of these revolutions in the 19th century,Duruy, V., & Grosvenor, E. A. (1894). History of modern times: From the fall of Constantinople to the French revolution. New York: H. Holt and company. and includes the World Wars eraJohnson, P. (2001). Modern times: The world from the twenties to the nineties. New York: HarperPerennial. (encompassing World War I and World War II) and the emergence of socialist countries that led to the Cold War. The contemporary era follows shortly afterward with the explosion of research and increase of knowledge known as the Information Age in the latter 20th and the early 21st century. Today's postmodern era is seen in widespread digitality.Martín Lister, New media: a critical introduction. Psychology Press, 2003. Page 14
Early modern period
Historians consider the early modern period to be approximately between 1500 and 1800. It follows the Late Middle Ages period and is marked by the first European colonies, the rise of strong centralized governments, and the beginnings of recognizable nation-states that are the direct antecedents of today's states.
In Africa and the Ottoman Empire, the Muslim expansion took place in North and East Africa. In West Africa, various native nations existed. The Indian Empires and civilizations of Southeast Asia were a vital link in the spice trade. On the Indian subcontinent, the Great Mughal Empire existed. The archipelagic empires, the Sultanate of Malacca and later the Sultanate of Johor, controlled the southern areas.
In Asia, various Chinese dynasties and Japanese shogunates controlled the Asian sphere. In Japan, the Edo period from 1600 to 1868 is also referred to as the early modern period. In Korea, the period from the rise of the Joseon Dynasty to the enthronement of King Gojong is referred to as the early modern period. In the Americas, Native Americans had built large and varied civilizations, including the Aztec Empire and alliance, the Inca civilization, the Mayan Empire and cities, and the Chibcha Confederation. In the West, the European kingdoms were in a period of reformation and expansion. Russia reached the Pacific coast in 1647 and consolidated its control over the Russian Far East in the 19th century.
Later religious trends of the period saw the end of the expansion of Muslims and the Muslim world. Christians and Christendom saw the end of the Crusades and the end of religious unity under the Roman Catholic Church. It was during this time that the Inquisitions and the Protestant Reformation took place.
During the early modern period, an age of discovery and trade was undertaken by the Western European nations. Portugal, Spain, the Netherlands, the United Kingdom and France went on a colonial expansion and took possession of lands and set up colonies in Africa, southern Asia, and North and South America. Turkey colonized Southeastern Europe and parts of West Asia and North Africa. Russia took possession of territories in Eastern Europe, Asia, and North America.
Asia
China
In China, urbanization increased as the population grew and as the division of labor grew more complex. Large urban centers, such as Nanjing and Beijing, also contributed to the growth of private industry. In particular, small-scale industries grew up, often specializing in paper, silk, cotton, and porcelain goods. For the most part, however, relatively small urban centers with markets proliferated around the country. Town markets mainly traded food, with some necessary manufactures such as pins or oil. Despite the xenophobia and intellectual introspection characteristic of the increasingly popular new school of neo-Confucianism, China under the early Ming dynasty was not isolated. Foreign trade and other contacts with the outside world, particularly Japan, increased considerably. Chinese merchants explored all of the Indian Ocean, reaching East Africa with the treasure voyages of Zheng He.
The Qing dynasty (1644–1912) was founded after the fall of the Ming, the last Han Chinese dynasty, by the Manchus. The Manchus were formerly known as the Jurchens. When Beijing was captured by Li Zicheng's peasant rebels in 1644, the Chongzhen Emperor, the last Ming emperor, committed suicide. The Manchus then allied with former Ming general Wu Sangui and seized control of Beijing, which became the new capital of the Qing dynasty. The Manchus adopted the Confucian norms of traditional Chinese government in their rule of China proper. Schoppa, the editor of The Columbia Guide to Modern Chinese History argues, "A date around 1780 as the beginning of modern China is thus closer to what we know today as historical 'reality'. It also allows us to have a better baseline to understand the precipitous decline of the Chinese polity in the nineteenth and twentieth centuries."
Japan
In pre-modern or early-modern Japan, following the Sengoku period of "warring states", central government had been largely reestablished by Oda Nobunaga and Toyotomi Hideyoshi during the Azuchi–Momoyama period. After the Battle of Sekigahara in 1600, central authority fell to Tokugawa Ieyasu, who completed this process and received the title of shogun in 1603.
Society in the Japanese "Tokugawa period" (Edo society), unlike the shogunates before it, was based on the strict class hierarchy originally established by Toyotomi Hideyoshi. The daimyōs (feudal lords) were at the top, followed by the warrior-caste of samurai, with the farmers, artisans, and traders ranking below. In some parts of the country, particularly smaller regions, daimyōs and samurai were more or less identical, since daimyōs might be trained as samurai, and samurai might act as local lords. Otherwise, the largely inflexible nature of this social stratification system unleashed disruptive forces over time. Taxes on the peasantry were set at fixed amounts which did not account for inflation or other changes in monetary value. As a result, the tax revenues collected by the samurai landowners were worth less and less over time. This often led to numerous confrontations between noble but impoverished samurai and well-to-do peasants, ranging from simple local disturbances to much bigger rebellions. None, however, proved compelling enough to seriously challenge the established order until the arrival of foreign powers.
India
On the Indian subcontinent, the Mughal Empire ruled most of India in the early 18th century. The "classic period" ended with the death and defeat of Emperor Aurangzeb in 1707 by the rising Hindu Maratha Empire, although the dynasty continued for another 150 years. During this period, the Empire was marked by a highly centralized administration connecting the different regions. All the significant monuments of the Mughals, their most visible legacy, date to this period which was characterised by the expansion of Persian cultural influence in the Indian subcontinent, with brilliant literary, artistic, and architectural results. The Maratha Empire was located in the south west of present-day India and expanded greatly under the rule of the Peshwas, the prime ministers of the Maratha empire. In 1761, the Maratha army lost the Third Battle of Panipat which halted imperial expansion and the empire was then divided into a confederacy of Maratha states.
British and Dutch colonization
The development of New Imperialism saw the conquest of nearly all eastern hemisphere territories by colonial powers. The commercial colonization of India commenced in 1757, after the Battle of Plassey, when the Nawab of Bengal surrendered his dominions to the British East India Company; in 1765, when the Company was granted the diwani, or the right to collect revenue, in Bengal and Bihar; or in 1772, when the Company established a capital in Calcutta, appointed its first Governor-General, Warren Hastings, and became directly involved in governance.
The Maratha states, following the Anglo-Maratha Wars, eventually lost to the British East India Company in 1818 with the Third Anglo-Maratha War. Company rule lasted until 1858, when, after the Indian Rebellion of 1857 and as a consequence of the Government of India Act 1858, the British government assumed the task of directly administering India in the new British Raj. In 1819 Stamford Raffles established Singapore as a key trading post for Britain in its rivalry with the Dutch. However, their rivalry cooled in 1824 when an Anglo-Dutch treaty demarcated their respective interests in Southeast Asia. From the 1850s onwards, the pace of colonization shifted to a significantly higher gear.
The Dutch East India Company (1800) and the British East India Company (1858) were dissolved by their respective governments, which took over the direct administration of the colonies. Only Thailand was spared the experience of foreign rule, although Thailand itself was also greatly affected by the power politics of the Western powers. Colonial rule had a profound effect on Southeast Asia. While the colonial powers profited much from the region's vast resources and large market, colonial rule did develop the region to a varying extent. Commercial agriculture, mining and an export-based economy developed rapidly during this period.
Europe
Many major events caused Europe to change around the start of the 16th century, starting with the Fall of Constantinople in 1453, the fall of Muslim Spain and the discovery of the Americas in 1492, and Martin Luther's Protestant Reformation in 1517. In England the modern period is often dated to the start of the Tudor period with the victory of Henry VII over Richard III at the Battle of Bosworth in 1485.Helen Miller, Aubrey Newman. Early modern British history, 1485–1760: a select bibliography, Historical Association, 1970Early Modern Period (1485–1800), Sites Organized by Period, Rutgers University Libraries Early modern European history is usually seen to span from the start of the 15th century, through the Age of Reason and the Age of Enlightenment in the 17th and 18th centuries, until the beginning of the Industrial Revolution in the late 18th century.
Tsardom of Russia
Russia experienced territorial growth through the 17th century, which was the age of Cossacks. Cossacks were warriors organized into military communities, resembling pirates and pioneers of the New World. In 1648, the peasants of Ukraine joined the Zaporozhian Cossacks in rebellion against Poland-Lithuania during the Khmelnytsky Uprising, because of the social and religious oppression they suffered under Polish rule. In 1654 the Ukrainian leader, Bohdan Khmelnytsky, offered to place Ukraine under the protection of the Russian Tsar, Aleksey I. Aleksey's acceptance of this offer led to another Russo-Polish War (1654–1667). Finally, Ukraine was split along the river Dnieper, leaving the western part (or Right-bank Ukraine) under Polish rule and eastern part (Left-bank Ukraine and Kiev) under Russian. Later, in 1670–71 the Don Cossacks led by Stenka Razin initiated a major uprising in the Volga region, but the Tsar's troops were successful in defeating the rebels. In the east, the rapid Russian exploration and colonisation of the huge territories of Siberia was led mostly by Cossacks hunting for valuable furs and ivory. Russian explorers pushed eastward primarily along the Siberian river routes, and by the mid-17th century there were Russian settlements in the Eastern Siberia, on the Chukchi Peninsula, along the Amur River, and on the Pacific coast. In 1648 the Bering Strait between Asia and North America was passed for the first time by Fedot Popov and Semyon Dezhnyov.
Reason and Enlightenment
Traditionally, the European intellectual transformation of and after the Renaissance bridged the Middle Ages and the Modern era. The Age of Reason in the Western world is generally regarded as being the start of modern philosophy,For example, Conversations on the Plurality of Worlds offered an explanation of the heliocentric model of the Universe. and a departure from the medieval approach, especially Scholasticism. Early 17th-century philosophy is often called the Age of Rationalism and is considered to succeed Renaissance philosophy and precede the Age of Enlightenment, but some consider it as the earliest part of the Enlightenment era in philosophy, extending that era to two centuries. The 18th century saw the beginning of secularization in Europe, rising to notability in the wake of the French Revolution.
The Age of Enlightenment is a time in Western philosophy and cultural life centered upon the 18th century in which reason was advocated as the primary source and legitimacy for authority. The Enlightenment gained momentum more or less simultaneously in many parts of Europe and America. Renaissance humanism, an intellectual movement that had developed earlier, spread across Europe. The basic training of the humanist was to speak well and write (typically, in the form of a letter). The term umanista comes from the latter part of the 15th century. Humanists were associated with the studia humanitatis, a novel curriculum that was competing with the quadrivium and scholastic logic.Paul Oskar Kristeller, Humanism, pp. 113-4, in Charles B. Schmitt, Quentin Skinner (editors), The Cambridge History of Renaissance Philosophy (1990).
Renaissance humanism took a close study of the Latin and Greek classical texts, and was antagonistic to the values of scholasticism with its emphasis on the accumulated commentaries; and humanists were involved in the sciences, philosophies, arts and poetry of classical antiquity. They self-consciously imitated classical Latin and deprecated the use of medieval Latin. By analogy with the perceived decline of Latin, they applied the principle of ad fontes, or back to the sources, across broad areas of learning.
The quarrel of the Ancients and the Moderns was a literary and artistic quarrel that heated up in the early 1690s and shook the Académie française. The two opposing sides were the Ancients (Anciens), who constrained the choice of subjects to those drawn from the literature of Antiquity, and the Moderns (Modernes), who supported the merits of the authors of the century of Louis XIV. Fontenelle quickly followed with his Digression sur les anciens et les modernes (1688), in which he took the Modern side, pressing the argument that modern scholarship allowed modern man to surpass the ancients in knowledge.
Scientific Revolution
The Scientific Revolution was a period when European ideas in classical physics, astronomy, biology, human anatomy, chemistry, and other classical sciences were rejected, and the doctrines that had prevailed from Ancient Greece to the Middle Ages were supplanted, leading to a transition to modern science. This period saw a fundamental transformation in scientific ideas across physics, astronomy, and biology, in institutions supporting scientific investigation, and in the more widely held picture of the universe. Individuals started to question all manner of things, and it was this questioning that led to the Scientific Revolution, which in turn formed the foundations of contemporary sciences and the establishment of several modern scientific fields.
The French Revolutions
Toward the middle and latter stages of the Age of Revolution, the French political and social revolutions and radical change saw the French governmental structure, previously an absolute monarchy with feudal privileges for the aristocracy and Catholic clergy, transform into forms based on Enlightenment principles of citizenship and inalienable rights. The first revolution led to government by the National Assembly, the second by the Legislative Assembly, and the third by the Directory.
The changes were accompanied by violent turmoil which included the trial and execution of the king, vast bloodshed and repression during the Reign of Terror, and warfare involving every other major European power. Subsequent events that can be traced to the Revolution include the Napoleonic Wars, two separate restorations of the monarchy, and two additional revolutions as modern France took shape. In the following century, France would be governed at one point or another as a republic, constitutional monarchy, and two different empires.
National and Legislative Assembly
During the French Revolution, the National Assembly, which existed from June 17 to July 9 of 1789, was a transitional body between the Estates-General and the National Constituent Assembly.
The Legislative Assembly was the legislature of France from October 1, 1791 to September 1792. It provided the focus of political debate and revolutionary law-making between the periods of the National Constituent Assembly and of the National Convention.
The Directory and Napoleonic Era
The Executive Directory was a body of five Directors that held executive power in France following the Convention and preceding the Consulate. The period of this regime (2 November 1795 until 10 November 1799), commonly known as the Directory (or Directoire) era, constitutes the second to last stage of the French Revolution. Napoleon, before seizing the title of Emperor, was elected as First Consul of the Consulate of France.
The campaigns of French Emperor and General Napoleon Bonaparte characterized the Napoleonic Era. Born on Corsica as the French invaded, and dying suspiciously on the tiny British island of St. Helena, this brilliant commander controlled a French Empire that, at its height, ruled a large portion of Europe directly from Paris, while many of his friends and family ruled countries such as Spain, Poland, and several parts of Italy, along with many other kingdoms, republics and dependencies. The Napoleonic Era changed the face of Europe forever, and old empires and kingdoms fell apart as a result of the mighty and "glorious" surge of republicanism.
Italian unification
Italian unification was the political and social movement that annexed different states of the Italian peninsula into the single state of Italy in the 19th century. There is a lack of consensus on the exact dates for the beginning and the end of this period, but many scholars agree that the process began with the end of Napoleonic rule and the Congress of Vienna in 1815, and approximately ended with the Franco-Prussian War in 1871, though the last città irredente did not join the Kingdom of Italy until after World War I.
End of the early modern period
Toward the end of the early modern period, Europe was dominated by the evolving system of mercantile capitalism in its trade and the New Economy. European states and politics were characterized by absolutism. French power and the English revolutions dominated the political scene. There eventually evolved an international balance of power that held at bay a great conflagration until years later.
The end date of the early modern period is usually associated with the Industrial Revolution, which began in Britain in about 1750. Another significant date is 1789, the beginning of the French Revolution, which drastically transformed the state of European politics and ushered in modern Europe.
North America
The French and Indian Wars were a series of conflicts in North America that accompanied the European dynastic wars. In Quebec, the wars are generally referred to as the Intercolonial Wars. While some conflicts involved Spanish and Dutch forces, all pitted Great Britain, its colonies and its American Indian allies on one side against France, its colonies and its Indian allies on the other.
The expanding French and British colonies were contending for control of the western, or interior, territories. Whenever the European countries went to war, there were actions within and by these colonies although the dates of the conflict did not necessarily exactly coincide with those of the larger conflicts.
thumb|John Trumbull's Declaration of Independence, showing the five-man committee in charge of drafting the Declaration in 1776 as it presents its work to the Second Continental Congress in Philadelphia
Beginning the Age of Revolution, the American Revolution and the ensuing political upheaval during the last half of the 18th century saw the Thirteen Colonies of North America overthrow the governance of the Parliament of Great Britain, and then reject the British monarchy itself to become the sovereign United States of America. In this period the colonies first rejected the authority of the Parliament to govern them without representation, and formed self-governing independent states. The Second Continental Congress then joined together against the British to defend that self-governance in the armed conflict from 1775 to 1783 known as the American Revolutionary War (also called American War of Independence).
The American Revolution began with fighting at Lexington and Concord in 1775. On July 4, 1776, the colonies issued the Declaration of Independence, which proclaimed their independence from Great Britain and their formation of a cooperative union. In June 1776, Benjamin Franklin had been appointed a member of the Committee of Five that drafted the Declaration. Although he was temporarily disabled by gout and unable to attend most meetings of the Committee, Franklin made several small changes to the draft sent to him by Thomas Jefferson.
The rebellious states defeated Great Britain in the American Revolutionary War, the first successful colonial war of independence. While the states had already rejected the governance of Parliament, through the Declaration the new United States now rejected the legitimacy of the monarchy to demand allegiance. The war raged for seven years, with effective American victory, followed by formal British abandonment of any claim to the United States with the Treaty of Paris.
thumb|right|North America 1797
The Philadelphia Convention of 1787 set up the framework of the current United States; the ratification of the United States Constitution the following year made the states part of a single republic with a limited central government. The Bill of Rights, comprising ten constitutional amendments guaranteeing many fundamental civil rights and freedoms, was ratified in 1791.
Decolonization of North and South Americas
The decolonization of the Americas was the process by which the countries in the Americas gained their independence from European rule. Decolonization began with a series of revolutions in the late 18th and early-to-mid-19th centuries. The Spanish American wars of independence were the numerous wars against Spanish rule in Spanish America that took place during the early 19th century, from 1808 until 1829, directly related to the Napoleonic French invasion of Spain. The conflict started with short-lived governing juntas established in Chuquisaca and Quito opposing the composition of the Supreme Central Junta of Seville.
When the Central Junta fell to the French, numerous new juntas appeared all across the Americas, eventually resulting in a chain of newly independent countries stretching from Argentina and Chile in the south to Mexico in the north. After the death of King Ferdinand VII in 1833, only Cuba and Puerto Rico remained under Spanish rule, until the Spanish–American War in 1898. Unlike the Spanish, the Portuguese did not divide their colonial territory in America. The captaincies they created were subordinate to a centralized administration in Salvador (later relocated to Rio de Janeiro), which reported directly to the Portuguese Crown until the colony's independence in 1822, when it became the Empire of Brazil.
Late modern period
Industrial revolutions
thumb|A Watt steam engine. The development of the steam engine started the Industrial Revolution in Great Britain.Watt steam engine image: located in the lobby of the Superior Technical School of Industrial Engineers of the UPM (Madrid) The steam engine was created to pump water from coal mines, enabling them to be deepened beyond groundwater levels. The date of the Industrial Revolution is not exact. Eric Hobsbawm held that it 'broke out' in the 1780s and was not fully felt until the 1830s or 1840s,Eric Hobsbawm, The Age of Revolution: Europe 1789–1848, Weidenfeld and Nicholson Ltd. ISBN 0-349-10484-0 while T.S. Ashton held that it occurred roughly between 1760 and 1830 (in effect the reigns of George III, the Regency, and George IV).Joseph E Inikori. Africans and the Industrial Revolution in England, Cambridge University Press. ISBN 0-521-01079-9. The great changes of the centuries before the 19th were more connected with ideas, religion or military conquest, and technological advances had made only small changes in the material wealth of ordinary people.
The first Industrial Revolution merged into the Second Industrial Revolution around 1850, when technological and economic progress gained momentum with the development of steam-powered ships and railways, and later in the 19th century with the internal combustion engine and electric power generation. The Second Industrial Revolution, sometimes labelled the Technical Revolution, was a later phase of the Industrial Revolution; from a technological and a social point of view there is no clean break between the two. Major innovations during the period occurred in the chemical, electrical, petroleum, and steel industries. Specific advances included the introduction of oil-fired steam turbines and internal-combustion-driven steel ships, the development of the airplane, the practical commercialization of the automobile, mass production of consumer goods, the perfection of canning, mechanical refrigeration and other food preservation techniques, and the invention of the telephone.
Industrialization
Industrialization is the process of social and economic change whereby a human group is transformed from a pre-industrial society into an industrial one. It is a subdivision of a more general modernization process, where social change and economic development are closely related with technological innovation, particularly with the development of large-scale energy and metallurgy production. It is the extensive organization of an economy for the purpose of manufacturing. Industrialization also introduces a form of philosophical change, where people obtain a different attitude towards their perception of nature.
Revolution in manufacture and power
An economy based on manual labour was replaced by one dominated by industry and the manufacture of machinery. It began with the mechanization of the textile industries and the development of iron-making techniques, and trade expansion was enabled by the introduction of canals, improved roads, and then railways.
The introduction of steam power (fuelled primarily by coal) and powered machinery (mainly in textile manufacturing) underpinned the dramatic increases in production capacity.Business and Economics. Leading Issues in Economic Development, Oxford University Press US. ISBN 0-19-511589-9. The development of all-metal machine tools in the first two decades of the 19th century facilitated the manufacture of more production machines for manufacturing in other industries.
The modern petroleum industry started in 1846 with the discovery of the process of refining kerosene from coal by Nova Scotian Abraham Pineo Gesner. Ignacy Łukasiewicz improved Gesner's method to develop a means of refining kerosene from the more readily available "rock oil" ("petr-oleum") seeps in 1852 and the first rock oil mine was built in Bóbrka, near Krosno in Galicia in the following year. In 1854, Benjamin Silliman, a science professor at Yale University in New Haven, was the first to fractionate petroleum by distillation. These discoveries rapidly spread around the world.
Notable engineers
Engineering achievements of the revolution ranged from electrification to developments in materials science. The advancements made a great contribution to the quality of life. In the first revolution, Lewis Paul was the original inventor of roller spinning, the basis of the water frame for spinning cotton in a cotton mill. Matthew Boulton and James Watt's improvements to the steam engine were fundamental to the changes brought by the Industrial Revolution in both the Kingdom of Great Britain and the world.
thumb|right|Nikola Tesla sits in front of the spiral coil of his high-frequency transformer at East Houston Street, New York.
In the latter part of the second revolution, Thomas Alva Edison developed many devices that greatly influenced life around the world and is often credited with the creation of the first industrial research laboratory. In 1882, Edison switched on the world's first large-scale electrical supply network that provided 110 volts direct current to fifty-nine customers in lower Manhattan. Also toward the end of the second industrial revolution, Nikola Tesla made many contributions in the field of electricity and magnetism in the late 19th and early 20th centuries.
Social effects and classes
The Industrial Revolutions were major technological, socioeconomic, and cultural changes of the late 18th and early 19th centuries that began in Britain and spread throughout the world. The effects spread throughout Western Europe and North America during the 19th century, eventually affecting the majority of the world. The impact of this change on society was enormous and is often compared to the Neolithic revolution, when mankind developed agriculture and gave up its nomadic lifestyle.Russell Brown, Lester. Eco-Economy, James & James, Earthscan. ISBN 1-85383-904-3.
It has been argued that GDP per capita was much more stable and progressed at a much slower rate until the industrial revolution and the emergence of the modern capitalist economy, and that it has since increased rapidly in capitalist countries.Federal Reserve Bank of Minneapolis essay retrieved March 11, 2006
Mid-19th-century European revolts
The European Revolutions of 1848, known in some countries as the Spring of Nations or the Year of Revolution, were a series of political upheavals throughout the European continent. Described as a revolutionary wave, the period of unrest began in France and, propelled by the French Revolution of 1848, soon spread to the rest of Europe.Cayley, E. S. (1856). The European revolutions of 1848. London: Smith, Elder & Co. Vol. I and II.Harding, S. B., & Hart, A. B. (1918). New medieval and modern history. New York: American book company. Although most of the revolutions were quickly put down, there was a significant amount of violence in many areas, with tens of thousands of people tortured and killed. While the immediate political effects of the revolutions were reversed, the long-term reverberations of the events were far-reaching.
Industrial age reformism
Industrial age reform movements sought the gradual change of society rather than episodes of rapid, fundamental change. The reformists' ideas were often grounded in liberalism, although they also possessed aspects of utopian, socialist or religious concepts. The Radical movement campaigned for electoral reform, a reform of the Poor Laws, free trade, educational reform, postal reform, prison reform, and public sanitation.
Following the Enlightenment's ideas, the reformers looked to the Scientific Revolution and industrial progress to solve the social problems which arose with the Industrial Revolution. Newton's natural philosophy combined a mathematics of axiomatic proof with the mechanics of physical observation, yielding a coherent system of verifiable predictions and replacing a previous reliance on revelation and inspired truth. Applied to public life, this approach yielded several successful campaigns for changes in social policy.
Imperial Russia
Under Peter I (the Great), Russia was proclaimed an Empire in 1721 and became recognized as a world power. Ruling from 1682 to 1725, Peter defeated Sweden in the Great Northern War, forcing it to cede West Karelia and Ingria (two regions lost by Russia in the Time of Troubles), as well as Estland and Livland, securing Russia's access to the sea and sea trade. On the Baltic Sea Peter founded a new capital called Saint Petersburg, later known as Russia's Window to Europe. Peter the Great's reforms brought considerable Western European cultural influences to Russia. Catherine II (the Great), who ruled in 1762–96, extended Russian political control over the Polish-Lithuanian Commonwealth and incorporated most of its territories into Russia during the Partitions of Poland, pushing the Russian frontier westward into Central Europe. In the south, after successful Russo-Turkish Wars against the Ottoman Empire, Catherine advanced Russia's boundary to the Black Sea, defeating the Crimean khanate.
European dominance and the 19th century
400px|thumb|alt=Painting of a group of standing and seated heads of state in a variety of national uniforms and formal dress|"The World's Sovereigns", 1889.
Historians define the 19th century historical era as stretching from 1815 (the Congress of Vienna) to 1914 (the outbreak of the First World War); alternatively, Eric Hobsbawm defined the "Long Nineteenth Century" as spanning the years 1789 to 1914.
Imperialism and empires
In the 1800s and early 1900s, once great and powerful empires such as Spain, Ottoman Turkey, the Mughal Empire, and the Kingdom of Portugal began to break apart. Spain, at one time unrivaled in Europe, had long been in decline when it was crippled by Napoleon Bonaparte's invasion. Sensing that the time was right, Spain's vast colonies in South America began a series of rebellions that ended with almost all of the Spanish territories gaining their independence.
The once mighty Ottoman Empire was wracked by a series of revolutions, leaving the Ottomans holding, in Europe, only a small region surrounding the capital, Istanbul.
The Mughal Empire, whose rulers traced their descent from the Mongol khans, was bested by the rising Maratha Confederacy. The Marathas prospered until the British took an interest in the riches of India; Britain ended up ruling not just the territory of modern India but also Pakistan, Bangladesh and Burma, while exerting strong influence over Nepal and parts of southern Afghanistan.
The King of Portugal's vast American territory of Brazil became the independent Empire of Brazil.
With the defeat of Napoleonic France, Britain became undoubtedly the most powerful country in the world, and by the end of the First World War it controlled roughly a quarter of the world's population and about a quarter of the Earth's land surface. Britain's power did not end on land: it also possessed the greatest navy on the planet.
Electricity, steel, and petroleum enabled Germany to become a great international power that raced to create an empire of its own.
The Meiji Restoration was a chain of events that led to enormous changes in Japan's political and social structure. It took firm hold at the beginning of the Meiji Era, followed the opening of Japan by the arrival of the Black Ships of Commodore Matthew Perry, and set Imperial Japan on the path to becoming a great power.
Russia and Qing Dynasty China failed to keep pace with the other world powers which led to massive social unrest in both empires. The Qing Dynasty's military power weakened during the 19th century, and faced with international pressure, massive rebellions and defeats in wars, the dynasty declined after the mid-19th century.
European powers controlled parts of Oceania, with French New Caledonia from 1853 and French Polynesia from 1889; the Germans established colonies in New Guinea in 1884, and Samoa in 1900.
The United States expanded into the Pacific with Hawaii becoming a U.S. territory from 1898.
Disagreements between the US, Germany and UK over Samoa led to the Tripartite Convention of 1899.
British Victorian era
thumb|National flag of the United Kingdom.
The Victorian era of the United Kingdom was the period of Queen Victoria's reign from June 1837 to January 1901. This was a long period of prosperity for the British people, as profits gained from the overseas British Empire, as well as from industrial improvements at home, allowed a large, educated middle class to develop. Some scholars would extend the beginning of the period (as defined by the variety of sensibilities and political concerns that have come to be associated with the Victorians) back five years, to the passage of the Reform Act 1832.
thumb|The British Empire in 1897, marked in the traditional colour for imperial British dominions on maps
In Britain's "imperial century",For more, see Pax Britannica. victory over Napoleon left Britain without any serious international rival, other than Russia in central Asia. Unchallenged at sea, Britain adopted the role of global policeman, a state of affairs later known as the Pax Britannica, and a foreign policy of "splendid isolation". Alongside the formal control it exerted over its own colonies, Britain's dominant position in world trade meant that it effectively controlled the economies of many nominally independent countries, such as China, Argentina and Siam, which has been generally characterized as "informal empire".Edwards, B. T. (2004). Informal empire: Mexico and Central America in Victorian culture. Minneapolis, Minn: Univ. of Minnesota Press Of note during this time was the Anglo-Zulu War, which was fought in 1879 between the British Empire and the Zulu Empire.
British imperial strength was underpinned by the steamship and the telegraph, new technologies invented in the second half of the 19th century, allowing it to control and defend the Empire. By 1902, the British Empire was linked together by a network of telegraph cables, the so-called All Red Line. The Empire kept growing until 1922, by which point it covered around a quarter of the Earth's land area and held roughly 458 million people.Maddison, Angus (2001). The World Economy: A Millennial Perspective. Organisation for Economic Co-operation and Development. ISBN 92-64-18654-9. pp. 98, 242.Ferguson, Niall (2004). Colossus: The Price of America's Empire. Penguin. ISBN 1-59420-013-0. p. 15 The British established colonies in Australia in 1788, New Zealand in 1840 and Fiji in 1872, with much of Oceania becoming part of the British Empire.
French governments and conflicts
The Bourbon Restoration followed the ousting of Napoleon I of France in 1814. The Allies restored the Bourbon Dynasty to the French throne. The ensuing period is called the Restoration, following French usage, and is characterized by a sharp conservative reaction and the re-establishment of the Roman Catholic Church as a power in French politics. The July Monarchy was a period of liberal constitutional monarchy in France under King Louis-Philippe starting with the July Revolution (or Three Glorious Days) of 1830 and ending with the Revolution of 1848. The Second Empire was the Imperial Bonapartist regime of Napoleon III from 1852 to 1870, between the Second Republic and the Third Republic, in France.
thumb|Napoleon III and Bismarck after the Battle of Sedan
The Franco-Prussian War was a conflict between France and Prussia, with Prussia backed by the North German Confederation, of which it was a member, and by the South German states of Baden, Württemberg and Bavaria. The complete Prussian and German victory brought about the final unification of Germany under King Wilhelm I of Prussia. It also marked the downfall of Napoleon III and the end of the Second French Empire, which was replaced by the Third Republic. As part of the settlement, almost all of the territory of Alsace-Lorraine was taken by Germany, which would retain it until the end of World War I.
The French Third Republic was the republican government of France between the end of the Second French Empire, following the defeat of Louis-Napoléon in the Franco-Prussian War in 1870, and the Vichy regime established after the German invasion of France in 1940. The Third Republic endured for seventy years, making it the longest-lasting regime in France since the collapse of the Ancien Régime in the French Revolution of 1789.
Slavery and abolition
Slavery was greatly reduced around the world in the 19th century. Following a successful slave revolt in Haiti, Britain forced the Barbary pirates to halt their practice of kidnapping and enslaving Europeans, banned slavery throughout its domain, and charged its navy with ending the global slave trade. Slavery was then abolished in Russia, America, and Brazil.
African colonization
Following the abolition of the slave trade in 1807 and propelled by economic exploitation, the Scramble for Africa was initiated formally at the Berlin West Africa Conference in 1884–1885. The Berlin Conference attempted to avoid war among the European powers by allowing the European rival countries to carve up the continent of Africa into national colonies. Africans were not consulted.
The major European powers laid claim to the areas of Africa where they could exhibit a sphere of influence. These claims did not require substantial land holdings or treaties to be considered legitimate. The European power that demonstrated its control over a territory accepted the mandate to rule that region as a national colony. The nation that held the claim developed and benefited from its colony’s commercial interests without having to fear rival European competition. With the colonial claim came the underlying assumption that the European power exerting control would use its mandate to offer protection and provide welfare for its colonial peoples; however, this principle remained more theory than practice. There were many documented instances of material and moral conditions deteriorating for native Africans in the late nineteenth and early twentieth centuries under European colonial rule, to the point where the colonial experience for them has been described as "hell on earth."
At the time of the Berlin Conference, Africa contained one-fifth of the world’s population living in one-quarter of the world’s land area. However, from Europe's perspective, they were dividing an unknown continent. European countries had established a few coastal colonies in Africa by the mid-nineteenth century, including Cape Colony (Great Britain), Angola (Portugal), and Algeria (France), but until the late nineteenth century Europe largely traded with free African states without feeling the need for territorial possession. Until the 1880s most of Africa remained uncharted, with western maps from the period generally showing blank spaces for the continent’s interior.
From the 1880s to 1914, the European powers expanded their control across the African continent, competing with each other for Africa’s land and resources. Great Britain controlled various colonial holdings in East Africa that spanned the length of the African continent from Egypt in the north to South Africa. The French gained major ground in West Africa, and the Portuguese held colonies in southern Africa. Germany, Italy, and Spain established a small number of colonies at various points throughout the continent, which included German East Africa (Tanganyika) and German Southwest Africa for Germany, Eritrea and Libya for Italy, and the Canary Islands and Rio de Oro in northwestern Africa for Spain. Finally, for King Leopold II of Belgium (reigned 1865–1909), there was the large “piece of that great African cake” known as the Congo, which, unfortunately for the native Congolese, became his personal fiefdom in Central Africa to do with as he pleased. By 1914, almost the entire continent was under European control. Liberia, which was settled by freed American slaves in the 1820s, and Abyssinia (Ethiopia) in eastern Africa were the last remaining independent African states. (John Merriman, A History of Modern Europe, Volume Two: From the French Revolution to the Present, Third Edition (New York: W. W. Norton & Company, 2010), pp. 819–859).
Meiji Japan
Around the end of the 19th century and into the 20th century, the Meiji era was marked by the reign of the Meiji Emperor. During this time, Japan began its modernization and rose to world power status. The era name means "Enlightened Rule". In Japan, the Meiji Restoration started in the 1860s, marking the rapid modernization of the country by the Japanese themselves along European lines. Much research has focused on the issues of discontinuity versus continuity with the previous Tokugawa Period.Kenneth B. Pyle, "Profound Forces in the Making of Modern Japan," Journal of Japanese Studies (2006) 32#2 pp 393-418 in Project MUSE In the 1960s younger Japanese scholars, led by Irokawa Daikichi, reacted against the bureaucratic superstate and began searching for the historic role of the common people. They avoided the elite and focused not on political events but on social forces and attitudes. They rejected both Marxism and modernization theory as alien and confining. They stressed the importance of popular energies in the development of modern Japan, and enlarged history by using the methods of social history.Carol Gluck, "The People in History: Recent Trends in Japanese Historiography," Journal of Asian Studies (1978) 38#1 pp 25-50. in JSTOR It was not until the beginning of the Meiji Era that the Japanese government began taking modernization seriously. Japan expanded its military production base by opening arsenals in various locations. The hyobusho (war office) was replaced with a War Department and a Naval Department. The samurai class suffered great disappointment in the following years.
Laws were instituted requiring every able-bodied male Japanese citizen, regardless of class, to serve a mandatory term of three years with the first reserves and two additional years with the second reserves. This action, a deathblow for the samurai warriors and their daimyōs, initially met resistance from peasant and warrior alike. The peasant class interpreted the term for military service, ketsu-eki ("blood tax"), literally, and attempted to avoid service by any means necessary. The Japanese government began modelling its ground forces after the French military. The French government contributed greatly to the training of Japanese officers: many were employed at the military academy in Kyoto, and many more were feverishly translating French field manuals for use in the Japanese ranks.
After the death of the Meiji Emperor, the Taishō Emperor took the throne, thus beginning the Taishō period. A key foreign observer of the remarkable and rapid changes in Japanese society in this period was Ernest Mason Satow.
Representative Western scholars include George Akita,George Akita, "Trends in Modern Japanese Political History: The 'Positivist'", Monumenta Nipponica (1992) 37#4 pp 497-522 William Beasley, James B. Crowley, John W. Dower, Peter Duus, Carol Gluck, E. Herbert Norman, John W. Hall, Mikiso Hane, Akira Iriye, Marius Jansen, Edwin O. Reischauer, George B. Sansom, Bernard Silberman, Richard Storry, Karel van Wolferen, and Ezra Vogel.John Whitney Hall, "Japanese History: New Dimensions of Approach and Understanding" (2nd ed. 1966).Jean-Pierre Lehmann and Sue Henny, eds. Themes and Theories in Modern Japanese History (2013)
United States
See also: 19th-century North American Natives
Antebellum expansion
thumb|American westward expansion is idealized in Emanuel Leutze's famous painting Westward the Course of Empire Takes its Way (1861).
The Antebellum Age was a period of increasing division in the country over the growth of slavery in the American South and in the western territories of Kansas and Nebraska, a division that eventually led to the Civil War in 1861. The Antebellum Period is often considered to have begun with the Kansas–Nebraska Act of 1854, although it may be dated as early as 1812. This period is also significant because it marked the transition of American manufacturing to the industrial revolution.
"Manifest destiny" was the belief that the United States was destined to expand across the North American continent, from the Atlantic seaboard to the Pacific Ocean. During this time, the United States expanded to the Pacific Ocean—"from sea to shining sea"—largely defining the borders of the contiguous United States as they are today.
Civil War and Reconstruction
The American Civil War came when seven (later eleven) Southern slave states declared their secession from the U.S. and formed the Confederate States of America (the Confederacy). Led by Jefferson Davis, they fought against the U.S. federal government (the Union) under President Abraham Lincoln, which was supported by all the free states and the five border slave states in the north.
Northern leaders agreed that victory would require more than the end of fighting. Secession and Confederate nationalism had to be totally repudiated and all forms of slavery or quasi-slavery had to be eliminated. Lincoln proved effective in mobilizing support for the war goals, raising large armies and supplying them, avoiding foreign interference, and making the end of slavery a war goal. The Confederacy had a larger area than it could defend, and it failed to keep its ports open and its rivers clear. The North kept up the pressure as the South could barely feed and clothe its soldiers. Confederate soldiers, especially those in the East under the command of General Robert E. Lee, proved highly resourceful until they were finally overwhelmed by Generals Ulysses S. Grant and William T. Sherman in 1864–65. The Reconstruction Era (1863–77) began with the Emancipation Proclamation in 1863, and brought freedom, full citizenship and the vote for Southern blacks. It was followed by a reaction that left blacks in second-class status legally, politically, socially and economically until the 1960s.
The Gilded Age and legacy
During the Gilded Age, there was substantial growth in population in the United States and extravagant displays of wealth and excess of America's upper-class during the post-Civil War and post-Reconstruction era, in the late 19th century. The wealth polarization derived primarily from industrial and population expansion. The businessmen of the Second Industrial Revolution created industrial towns and cities in the Northeast with new factories, and contributed to the creation of an ethnically diverse industrial working class which produced the wealth owned by rising super-rich industrialists and financiers called the "robber barons". An example is the company of John D. Rockefeller, who was an important figure in shaping the new oil industry. Using highly effective tactics and aggressive practices, later widely criticized, Standard Oil absorbed or destroyed most of its competition.
The creation of a modern industrial economy took place. With the creation of a transportation and communication infrastructure, the corporation became the dominant form of business organization and a managerial revolution transformed business operations. In 1890, Congress passed the Sherman Antitrust Act—the source of all American anti-monopoly laws. The law forbade every contract, scheme, deal, or conspiracy to restrain trade, though the phrase "restraint of trade" remained subjective. By the beginning of the 20th century, per capita income and industrial production in the United States exceeded that of any other country except Britain. Long hours and hazardous working conditions led many workers to attempt to form labor unions despite strong opposition from industrialists and the courts. But the courts did protect the marketplace, declaring the Standard Oil group to be an "unreasonable" monopoly under the Sherman Antitrust Act in 1911. It ordered Standard to break up into 34 independent companies with different boards of directors.See generally Standard Oil Co. of New Jersey v. United States, 221 U.S. 1 (1911).
Science and philosophy
Replacing the classical physics in use since the end of the scientific revolution, modern physics arose in the early 20th century with the advent of quantum physics,F.K Richtmyer, E.H Kennard, T. Lauristen (1955). "Introduction". Introduction to Modern Physics (5th ed.). New York: McGraw-Hill Book Company. p. 1. LCCN 55-6862. substituting mathematical studies for experimental studies and examining equations to build a theoretical structure.The concepts derived are at times abstractions from nature serving as baselines or reference states. These can be unattainable in practice, such as free space (electromagnetism) and absolute zero temperature (special negative temperature values are "colder" than the zero points of those scales but still warmer than absolute zero). The old quantum theory was a collection of results that predate modern quantum mechanics but were never complete or self-consistent; this collection of heuristic prescriptions constituted the first corrections to classical mechanics.Matrix mechanics and wave mechanics later supplanted the old quantum theory and ended its era. Outside the realm of quantum physics, the various aether theories of classical physics, which supposed a "fifth element" such as the luminiferous aether,a substance in early physics considered to be the medium through which light propagates. were nullified by the Michelson-Morley experiment, an attempt to detect the motion of the Earth through the aether. In biology, Darwinism gained acceptance, promoting the concept of adaptation in the theory of natural selection. The fields of geology, astronomy and psychology also made strides and gained new insights. In medicine, there were advances in medical theory and treatments.
thumb|right|Xinhai Revolution in Shanghai; Chen Qimei organized Shanghainese civilians to start the uprising and was successful. The picture above is Nanjing Road after the uprising, hung with the Five Races Under One Union Flags then used by the revolutionaries.
The assertions of Chinese philosophyThe Chinese Enlightenment. By Vera Schwarcz. p4. began to integrate concepts of Western philosophy, as steps toward modernization. Around the time of the Xinhai Revolution in 1911 and afterwards, there were many calls, such as the May Fourth Movement of 1919, to completely abolish the old imperial institutions and practices of China. There were attempts to incorporate democracy, republicanism, and industrialism into Chinese philosophy, notably by Sun Yat-Sen (Sūn yì xiān, in one Mandarin form of the name) at the beginning of the 20th century. Mao Zedong (Máo zé dōng) later added Marxist-Leninist thought. When the Communist Party of China took power, previous schools of thought, with the notable exception of Legalism, were denounced as backward, and some were later even purged during the Cultural Revolution.
The spiritual philosophy of the Enlightenment, which had taken shape a century earlier, was challenged in various quarters around the 1900s.Ralph Adams Cram. "The Second Coming of Art". The Atlantic Monthly, Volume 119. Philip Gengembre Hubert. p193Enlightenment Contested. By Jonathan I. Israel. p765Modern Christian Thought: The twentieth century, Volume 2. By James C. Livingston, Francis Schüssler Fiorenza. p2."Herman Dooyeweerd". Routledge Encyclopedia of Philosophy. By Routledge (COR), Luciano Floridi, Edward Craig. p113.
See also: D. H. Th. Vollenhoven.Counter-Enlightenments: From the Eighteenth Century to the Present. By Graeme Garrard. Routledge, 2004. p13. See also: Counter-Enlightenment, Max Weber, and Émile Durkheim. Developed from earlier secular traditions,Known as continental philosophy. modern humanist ethical philosophies affirmed the dignity and worth of all people, based on the ability to determine right and wrong by appealing to universal human qualities, particularly rationality, without resorting to the supernatural or alleged divine authority from religious texts. For liberal humanists such as Rousseau and Kant, the universal law of reason guided the way toward total emancipation from any kind of tyranny. These ideas were challenged, for example, by the young Karl Marx, who criticized the project of political emancipation (embodied in the form of human rights), asserting it to be symptomatic of the very dehumanization it was supposed to oppose. For Friedrich Nietzsche, humanism was nothing more than a secular version of theism. In his Genealogy of Morals, he argues that human rights exist as a means for the weak to collectively constrain the strong. On this view, such rights do not facilitate emancipation of life, but rather deny it. In the 20th century, the notion that human beings are rationally autonomous was challenged by the concept that humans are driven by unconscious irrational desires.
Notable persons
Sigmund Freud is renowned for his redefinition of sexual desire as the primary motivational energy of human life, as well as his therapeutic techniques, including the use of free association, his theory of transference in the therapeutic relationship, and the interpretation of dreams as sources of insight into unconscious desires.
Albert Einstein is known for his theories of special relativity and general relativity. He also made important contributions to statistical mechanics, especially his mathematical treatment of Brownian motion, his resolution of the paradox of specific heats, and his connection of fluctuations and dissipation. Despite his reservations about its interpretation, Einstein also made contributions to quantum mechanics and, indirectly, quantum field theory, primarily through his theoretical studies of the photon.
Social Darwinism
At the end of the 19th century, Social Darwinism was promoted; it encompassed various ideologies based on the concept that competition among all individuals, groups, nations, or ideas was a "natural" framework for social evolution in human societies. In this view, society's advancement depends on the "survival of the fittest", a term in fact coined by Herbert Spencer and referred to in "The Gospel of Wealth" written by Andrew Carnegie.
Marxist society
thumb|The Communist Manifesto
Karl Marx summarized his approach to history and politics in the opening line of the first chapter of The Communist Manifesto (1848). He wrote: "The history of all hitherto existing society is the history of class struggles."In the 1888 English edition of the Communist Manifesto, Friedrich Engels added a footnote with the commentary: "That is, all written history. In 1847, the prehistory of society, the social organization existing previous to recorded history, was all but unknown. Since then, Haxthausen discovered common ownership of land in Russia, Maurer concluded it to be the social foundation from which all Teutonic races started in history, and by and by village communities were found to be, or to have been, the primitive form of society everywhere from India to Ireland. The inner organization of this primitive communistic society was laid bare, in its typical form, by Morgan's work on the true nature of the gens and its relation to the tribe. With the dissolution of these primaeval communities society begins to be differentiated into separate and finally antagonistic classes. I have attempted to retrace this process of dissolution in 'Der Ursprung der Familie, des Privateigenthums und des Staats'", from Marx, Karl, Friedrich Engels, Leon Trotsky, and Karl Marx. The Communist Manifesto and Its Relevance for Today. Chippendale, N.S.W.: Resistance Books, 1998. p. 46, see also Cornelius Castoriadis, Political and Social Writings. Minneapolis: University of Minnesota Press, 1993. p. 204
The Manifesto went through a number of editions from 1872 to 1890; notable new prefaces were written by Marx and Engels for the 1872 German edition, the 1882 Russian edition, the 1883 German edition, and the 1888 English edition. In general, Marxism identified five (and one transitional) successive stages of development in Western Europe.Marx makes no claim to have produced a master key to history. Historical materialism is not "an historico-philosophic theory of the marche generale imposed by fate upon every people, whatever the historic circumstances in which it finds itself". (Marx, Karl, Letter to editor of the Russian paper Otetchestvennye Zapiskym, 1877) His ideas, he explains, are based on a concrete study of the actual conditions that pertained in Europe.
Primitive Communism: as seen in cooperative tribal societies.
Slave Society: which develops when the tribe becomes a city-state. Aristocracy is born.
Feudalism: aristocracy is the ruling class. Merchants develop into capitalists.
Capitalism: capitalists are the ruling class, who create and employ the true working class.
Dictatorship of the proletariat: workers gain class consciousness, overthrow the capitalists and take control over the state.
Communism: a classless and stateless society.
European decline and the 20th century
Major political developments saw the former British Empire lose most of its remaining political power over Commonwealth countries.Most notably by dividing the British crown into several sovereignties by the Statute of Westminster, the patriation of constitutions by the Canada Act 1982 and the Australia Act 1986, and by the independence of countries such as India, Pakistan, South Africa, and Ireland, along with the 1997 return of Hong Kong to the People's Republic of China. The Trans-Siberian Railway, making it possible to cross Asia by train, was completed by 1916. Other events of the century include the Israeli–Palestinian conflict, two world wars, and the Cold War.
Australian Constitution
The Federation of Australia, completed in 1901, was the process by which the six separate British self-governing colonies of New South Wales, Queensland, South Australia, Tasmania, Victoria and Western Australia formed one nation. They kept the systems of government that they had developed as separate colonies but also would have a federal government that was responsible for matters concerning the whole nation. When the Constitution of Australia came into force, the colonies collectively became states of the Commonwealth of Australia.
Eastern warlords
The last days of the Qing Dynasty were marked by civil unrest and foreign invasions. Responding to these civil failures and discontent, the Qing Imperial Court did attempt to reform the government in various ways, such as the decision to draft a constitution in 1906, the establishment of provincial legislatures in 1909, and the preparation for a national parliament in 1910. However, many of these measures were opposed by the conservatives of the Qing Court, and many reformers were either imprisoned or executed outright. The failures of the Imperial Court to enact such reforming measures of political liberalization and modernization caused the reformists to steer toward the road of revolution.
In 1912, the Republic of China was established and Sun Yat-sen was inaugurated in Nanjing as the first Provisional President. But power in Beijing had already passed to Yuan Shikai, who had effective control of the Beiyang Army, the most powerful military force in China at the time. To prevent civil war and possible foreign intervention from undermining the infant republic, leaders agreed to the army's demand that China be united under a Beijing government. On March 10, in Beijing, Yuan Shikai was sworn in as the second Provisional President of the Republic of China.
After the early 20th century revolutions, shifting alliances of China's regional warlords waged war for control of the Beijing government. Despite the fact that various warlords gained control of the government in Beijing during the warlord era, this did not constitute a new era of control or governance, because other warlords did not acknowledge the transitory governments in this period and were a law unto themselves. These military-dominated governments were collectively known as the Beiyang government. The warlord era ended around 1927.Joseph, W. A. (2010). Politics in China: An introduction. Oxford: Oxford University Press. Page 423.
World Wars era
Start of the 20th century
Four years into the 20th century, the Russo-Japanese War broke out with the Battle of Port Arthur, and the conflict established the Empire of Japan as a world power. The Russians were in constant pursuit of a warm-water port on the Pacific Ocean, for their navy as well as for maritime trade. The Manchurian campaign of the Russian Empire was fought against the Japanese over Manchuria and Korea. The major theatres of operations were southern Manchuria, specifically the area around the Liaodong Peninsula and Mukden, and the seas around Korea, Japan, and the Yellow Sea. The resulting campaigns, in which the fledgling Japanese military consistently attained victory over the Russian forces arrayed against them, were unexpected by world observers. These victories would, in time, dramatically transform the distribution of power in East Asia, resulting in a reassessment of Japan's recent entry onto the world stage. The embarrassing string of defeats increased Russian popular dissatisfaction with the inefficient and corrupt Tsarist government.
The Russian Revolution of 1905 was a wave of mass political unrest through vast areas of the Russian Empire. Some of it was directed against the government, while some was undirected. It included terrorism, worker strikes, peasant unrest, and military mutinies. It led to the establishment of a limited constitutional monarchy,Russian Constitution of 1906 the State Duma of the Russian Empire, and a multi-party system.
In China, the Qing Dynasty was overthrown following the Xinhai Revolution. The Xinhai Revolution began with the Wuchang Uprising on October 10, 1911 and ended with the abdication of Emperor Puyi on February 12, 1912. The primary parties to the conflict were the Imperial forces of the Qing Dynasty (1644–1911), and the revolutionary forces of the Chinese Revolutionary Alliance (Tongmenghui).
Edwardian Britain
The Edwardian era in the United Kingdom is the period spanning the reign of King Edward VII up to the end of the First World War, including the years surrounding the sinking of the RMS Titanic. In the early years of the period, the Second Boer War in South Africa split the country into anti- and pro-war factions. The imperial policies of the Conservatives eventually proved unpopular and in the general election of 1906 the Liberals won a huge landslide. The Liberal government was unable to proceed with all of its radical programme without the support of the House of Lords, which was largely Conservative. Conflict between the two Houses of Parliament over the People's Budget led to a reduction in the power of the peers in 1910. The general election in January that year returned a hung parliament with the balance of power held by Labour and Irish Nationalist members.
World War I
The causes of World War I were many, including the conflicts and antagonisms of the four decades leading up to the war. The Triple Entente was the name given to the loose alignment between the United Kingdom, France, and Russia after the signing of the Anglo-Russian Entente in 1907. The alignment of the three powers, supplemented by various agreements with Japan, the United States, and Spain, constituted a powerful counterweight to the Triple Alliance of Germany, Austria-Hungary, and Italy, the third of which had concluded an additional secret agreement with France effectively nullifying her Alliance commitments. Militarism, alliances, imperialism, and nationalism played major roles in the conflict. The immediate origins of the war lay in the decisions taken by statesmen and generals during the July Crisis of 1914, the spark (or casus belli) for which was the assassination of Archduke Franz Ferdinand of Austria.
However, the crisis did not exist in a void; it came after a long series of diplomatic clashes between the Great Powers over European and colonial issues in the decade prior to 1914 which had left tensions high. The diplomatic clashes can be traced to changes in the balance of power in Europe since 1870. An example is the Baghdad Railway which was planned to connect the Ottoman Empire cities of Konya and Baghdad with a line through modern-day Turkey, Syria and Iraq. The railway became a source of international disputes during the years immediately preceding World War I. Although it has been argued that they were resolved in 1914 before the war began, it has also been argued that the railroad was a cause of the First World War.(Jastrow 1917) Fundamentally the war was sparked by tensions over territory in the Balkans. Austria-Hungary competed with Serbia and Russia for territory and influence in the region and they pulled the rest of the great powers into the conflict through their various alliances and treaties. The Balkan Wars were two wars in South-eastern Europe in 1912–1913 in the course of which the Balkan League (Bulgaria, Montenegro, Greece, and Serbia) first captured the Ottoman-held remaining parts of Thessaly, Macedonia, Epirus, Albania and most of Thrace, and then fell out over the division of the spoils, with Romania joining the second conflict.
thumb||750px|center|Various periods of World War I; 1914.07.28 (Tsar Nicholas II of Russia orders a partial mobilization against Austria-Hungary), 1914.08.01 (Germany declares war on Russia), 1914.08.03 (Germany declares war on Russia's ally France), 1914.08.04 (Britain declares war on Germany), 1914.12 (British and German Christmas truce), 1915.12 (French and German Christmas truce), 1916.12 (Battle of Magdhaba), 1917.12 (British troops take Jerusalem from the Ottoman Empire), and 1918.11.11 (World War I ends: Germany signs an armistice agreement with the Allies).
thumb|Allies and Central Powers in the First World War: Allied powers and areas; Central Powers and their colonies or occupied territory; neutral countries
The First World War began in 1914 and lasted until the final Armistice in 1918. The Allied Powers, led by the British Empire, France, Russia (until March 1918), Japan and (after 1917) the United States, defeated the Central Powers, led by the German Empire, the Austro-Hungarian Empire and the Ottoman Empire. The war caused the disintegration of four empires (the Austro-Hungarian, German, Ottoman, and Russian) as well as radical change to the European and West Asian maps. The principal Allied powers before 1917 are often referred to as the Triple Entente, while the Central Powers grew out of the pre-war Triple Alliance.
Much of the fighting in World War I took place along the Western Front, within a system of opposing manned trenches and fortifications (separated by a "No man's land") running from the North Sea to the border of Switzerland. On the Eastern Front, the vast eastern plains and limited rail network prevented a trench warfare stalemate from developing, although the scale of the conflict was just as large. Hostilities also occurred on and under the sea and—for the first time—from the air. More than 9 million soldiers died on the various battlefields, and nearly that many more in the participating countries' home fronts on account of food shortages and genocide committed under the cover of various civil wars and internal conflicts. Notably, more people died of the worldwide influenza outbreak at the end of the war and shortly after than died in the hostilities. The unsanitary conditions engendered by the war, severe overcrowding in barracks, wartime propaganda interfering with public health warnings, and migration of so many soldiers around the world helped the outbreak become a pandemic.
Ultimately, World War I created a decisive break with the old world order that had emerged after the Napoleonic Wars, which was modified by the mid-19th century's nationalistic revolutions. The results of World War I would be important factors in the development of World War II approximately 20 years later. More immediate to the time, the partitioning of the Ottoman Empire was a political event that redrew the political boundaries of West Asia. The huge conglomeration of territories and peoples formerly ruled by the Sultan of the Ottoman Empire was divided into several new nations.Roderic H. Davison; Review "From Paris to Sèvres: The Partition of the Ottoman Empire at the Peace Conference of 1919–1920. by Paul C. Helmreich" in Slavic Review, Vol. 34, No. 1 (March , 1975), pp. 186-187 The partitioning brought the creation of the modern Arab world and the Republic of Turkey. The League of Nations granted France mandates over Syria and Lebanon and granted the United Kingdom mandates over Mesopotamia and Palestine (which was later divided into two regions: Palestine and Transjordan). Parts of the Ottoman Empire on the Arabian Peninsula became parts of what are today Saudi Arabia and Yemen.
Revolutions and war
thumb|right|National flag of the Soviet Union.
The Russian Revolution was the series of revolutions in Russia in 1917 that destroyed the Tsarist autocracy and led to the creation of the Soviet Union. Following the abdication of Nicholas II of Russia, the Russian Provisional Government was established. In October 1917, the Bolsheviks staged a second revolution in which the Red Guard, armed groups of workers and deserting soldiers directed by the Bolshevik Party, seized control of Saint Petersburg (then known as Petrograd) and began an immediate armed takeover of cities and villages throughout the former Russian Empire.
Another notable event of 1917 was the armistice signed between Russia and the Central Powers at Brest-Litovsk.Evan Mawdsley (2008) The Russian Civil War: 42 As a condition for peace, the subsequent treaty imposed by the Central Powers ceded huge portions of the former Russian Empire to Imperial Germany and the Ottoman Empire, greatly upsetting nationalists and conservatives. The Bolsheviks thus made peace with the German Empire and the Central Powers, as they had promised the Russian people prior to the Revolution. Vladimir Lenin's decision has been attributed to his sponsorship by the foreign office of Wilhelm II, German Emperor, offered by the latter in hopes that with a revolution, Russia would withdraw from World War I. This suspicion was bolstered by the German Foreign Ministry's sponsorship of Lenin's return to Petrograd. The Western Allies expressed their dismay at the Bolsheviks, upset at:
the withdrawal of Russia from the war effort,
worried about a possible Russo-German alliance, and
galvanized by the prospect of the Bolsheviks making good their threats to assume no responsibility for, and so default on, Imperial Russia's massive foreign loans.the legal notion of Odious debt had not yet been formulated
In addition, there was a concern, shared by many Central Powers as well, that the socialist revolutionary ideas would spread to the West. Hence, many of these countries expressed their support for the Whites, including the provision of troops and supplies. Winston Churchill declared that Bolshevism must be "strangled in its cradle".Cover Story: Churchill's Greatness. Interview with Jeffrey Wallin. (The Churchill Centre)
The Russian Civil War was a multi-party war that occurred within the former Russian Empire after the Russian provisional government collapsed and the Soviets under the domination of the Bolshevik party assumed power, first in Petrograd (St. Petersburg) and then in other places. In the wake of the October Revolution, the old Russian Imperial Army had been demobilized; the volunteer-based Red Guard was the Bolsheviks' main military force, augmented by an armed military component of the Cheka, the Bolshevik state security apparatus. Mandatory conscription of the rural peasantry into the Red Army was instituted.Read, Christopher, From Tsar to Soviets, Oxford University Press (1996), p. 237: By 1920, 77% of the Red Army's enlisted ranks were composed of peasant conscripts. Opposition of rural Russians to Red Army conscription units was overcome by taking hostages and shooting them when necessary in order to force compliance.Williams, Beryl, The Russian Revolution 1917–1921, Blackwell Publishing Ltd. (1987), ISBN 978-0-631-15083-1: Typically, men of conscriptible age (17-40) in a village would vanish when Red Army draft units approached. The taking of hostages and a few exemplary executions usually brought the men back. Former Tsarist officers were utilized as "military specialists" (voenspetsy),Overy, R.J., The Dictators: Hitler's Germany and Stalin's Russia, W.W. Norton & Company (2004), ISBN 978-0-393-02030-4, p. 446: By the end of the civil war, one-third of all Red Army officers were ex-Tsarist voenspetsy. taking their families hostage in order to ensure loyalty.Williams, Beryl, The Russian Revolution 1917–1921, Blackwell Publishing Ltd. (1987), ISBN 978-0-631-15083-1 At the start of the war, three-fourths of the Red Army officer corps was composed of former Tsarist officers. By its end, 83% of all Red Army divisional and corps commanders were ex-Tsarist soldiers.Overy, R.J., The Dictators: Hitler's Germany and Stalin's Russia, W.W. Norton & Company (2004), ISBN 978-0-393-02030-4, p. 446.
The principal fighting occurred between the Bolshevik Red Army and the forces of the White Army. Many foreign armies warred against the Red Army, notably the Allied Forces, yet many volunteer foreigners fought on both sides of the Russian Civil War. Other nationalist and regional political groups also participated in the war, including the Ukrainian nationalist Green Army, the Ukrainian anarchist Black Army and Black Guards, and warlords such as Ungern von Sternberg. The most intense fighting took place from 1918 to 1920. Major military operations ended on 25 October 1922 when the Red Army occupied Vladivostok, previously held by the Provisional Priamur Government. The last enclave of the White Forces was the Ayano-Maysky District on the Pacific coast. The majority of the fighting ended in 1920 with the defeat of General Pyotr Wrangel in the Crimea, but a notable resistance in certain areas continued until 1923 (e.g., Kronstadt Uprising, Tambov Rebellion, Basmachi Revolt, and the final resistance of the White movement in the Far East).
In 1917, China declared war on Germany in the hope of recovering its lost province of Shandong, then under Japanese control. The New Culture Movement occupied the period from 1917 to 1923. Chinese representatives refused to sign the Treaty of Versailles, due to intense pressure from student protesters and public opinion alike.
The May Fourth Movement helped to rekindle the then-fading cause of republican revolution. In 1917 Sun Yat-sen had become commander-in-chief of a rival military government in Guangzhou in collaboration with southern warlords. Sun's efforts to obtain aid from the Western democracies were ignored, however, and in 1920 he turned to the Soviet Union, which had recently achieved its own revolution. The Soviets sought to befriend the Chinese revolutionists by offering scathing attacks on Western imperialism. But for political expediency, the Soviet leadership initiated a dual policy of support for both Sun and the newly established Chinese Communist Party (CCP).
thumb|right|The flag of the Kuomintang, one canton of the flag of the Republic of China.
The policy of working with the Kuomintang and Chiang Kai-shek had been recommended by the Dutch Communist Henk Sneevliet, chosen in 1923 to be the Comintern representative in China because of his revolutionary experience in the Dutch East Indies, where he had played a major role in founding the Partai Komunis Indonesia (PKI). Sneevliet felt that the Chinese party was too small and weak to undertake a major effort on its own (see Henk Sneevliet's work for the Comintern).
In early 1927, the Kuomintang-CCP rivalry led to a split in the revolutionary ranks. The CCP and the left wing of the Kuomintang had decided to move the seat of the Nationalist government from Guangzhou to Wuhan. But Chiang Kai-shek, whose Northern Expedition was proving successful, set his forces to destroying the Shanghai CCP apparatus and established an anti-Communist government at Nanjing in April 1927.
The 1920s and the Depression
The interwar period was the time between the end of the First World War and the beginning of the Second World War. It was marked by turmoil in much of the world, as Europe struggled to recover from the devastation of the First World War.
In North America, especially during the first half of this period, people experienced considerable prosperity in the Roaring Twenties. The social and societal upheaval known as the Roaring Twenties began in North America and spread to Europe in the aftermath of World War I. The Roaring Twenties, often called "The Jazz Age", saw a flowering of social, artistic, and cultural dynamism. 'Normalcy' returned to politics, jazz music blossomed, the flapper redefined modern womanhood, and Art Deco peaked. The spirit of the Roaring Twenties was marked by a general feeling of discontinuity associated with modernity, a break with traditions. Everything seemed feasible through modern technology. New technologies, especially automobiles, movies and radio, spread 'modernity' to a large part of the population. The decade generally favored practicality, in architecture as well as in daily life, and was further distinguished by several inventions and discoveries, extensive industrial growth, a rise in consumer demand and aspirations, and significant changes in lifestyle.
thumb|right|300px|Europe between 1920 and 1938.
Europe spent these years rebuilding and coming to terms with the vast human cost of the conflict. The economy of the United States became increasingly intertwined with that of Europe. In Germany, the Weimar Republic endured episodes of political and economic turmoil, which culminated with the German hyperinflation of 1923 and the failed Beer Hall Putsch of that same year. When Germany could no longer afford its war reparations payments, Wall Street invested heavily in European debts to keep the European economy afloat as a large consumer market for American mass-produced goods. By the middle of the decade, economic development soared in Europe, and the Roaring Twenties broke out in Germany, Britain and France, the second half of the decade becoming known as the "Golden Twenties". In France and francophone Canada, they were also called the "années folles" ("Crazy Years").
Worldwide prosperity changed dramatically with the onset of the Great Depression in 1929. The Wall Street Crash of 1929 served to punctuate the end of the previous era, as The Great Depression set in. The Great Depression was a worldwide economic downturn starting in most places in 1929 and ending at different times in the 1930s or early 1940s for different countries."Great Depression", Encyclopædia Britannica It was the largest and most important economic depression in the 20th century, and is used in the 21st century as an example of how far the world's economy can fall.Charles Duhigg, "Depression, You Say? Check Those Safety Nets", New York Times, March 23, 2008
The depression had devastating effects in virtually every country, rich or poor. International trade plunged by half to two-thirds, as did personal income, tax revenue, prices and profits. Cities all around the world were hit hard, especially those dependent on heavy industry. Construction was virtually halted in many countries. Farming and rural areas suffered as crop prices fell by roughly 60 percent. Facing plummeting demand with few alternate sources of jobs, areas dependent on primary sector industries suffered the most.
The Great Depression ended at different times in different countries, with its effects lasting into the next era.Great Depression and World War II. The Library of Congress. America's Great Depression ended in 1941 with America's entry into World War II.What Ended the Great Depression of 1929?. Source: The Federal Reserve Board web site, "Remarks by Governor Ben Bernanke at the H. Parker Willis Lecture in Economic Policy", March 2, 2004, FDR Library Web Site. The majority of countries set up relief programs, and most underwent some sort of political upheaval, pushing them to the left or right. In some countries, desperate citizens turned toward nationalist demagogues (the most infamous being Adolf Hitler), setting the stage for the next era of war. The convulsion brought on by the worldwide depression resulted in the rise of Nazism. In Asia, Japan became an ever more assertive power, especially with regard to China.
Nanjing period
thumb|220px|With Sino-German cooperation until 1941, Chinese industry and military was improved just prior to the war against Japan.
The "Nanjing Decade" of 1928–37 was one of consolidation and accomplishment under the leadership of the Nationalists, with a mixed but generally positive record in the economy, social progress, development of democracy, and cultural creativity. Some of the harsh aspects of foreign concessions and privileges in China were moderated through diplomacy.
The League and crises
The interwar period was also marked by a radical change in the international order, away from the balance of power that had dominated pre–World War I Europe. One main institution that was meant to bring stability was the League of Nations, which was created after the First World War with the intention of maintaining world security and peace and encouraging economic growth between member countries. The League was undermined by the bellicosity of Nazi Germany, Imperial Japan, the Soviet Union, and Mussolini's Italy, and by the non-participation of the United States, leading many to question its effectiveness and legitimacy.
A series of international crises strained the League to its limits, the earliest being the invasion of Manchuria by Japan and the Abyssinian crisis of 1935/36, in which Italy invaded Abyssinia, one of the few African nations still independent at that time. The League tried to enforce economic sanctions upon Italy, but to no avail. The incident highlighted French and British weakness, exemplified by their reluctance to alienate Italy and lose her as their ally. The limited actions taken by the Western powers pushed Mussolini's Italy towards alliance with Hitler's Germany anyway. The Abyssinian war showed Hitler how weak the League was and encouraged the remilitarization of the Rhineland in flagrant disregard of the Treaty of Versailles. This was the first in a series of provocative acts culminating in the invasion of Poland in September 1939 and the beginning of the Second World War.
Few Chinese had any illusions about Japanese designs on China. Hungry for raw materials and pressed by a growing population, Japan initiated the seizure of Manchuria in September 1931 and established ex-Qing emperor Puyi as head of the puppet state of Manchukuo in 1932. During the Sino-Japanese War (1937–1945), the loss of Manchuria, and its vast potential for industrial development and war industries, was a blow to the Kuomintang economy. The League of Nations, established at the end of World War I, was unable to act in the face of the Japanese defiance. After 1940, conflicts between the Kuomintang and Communists became more frequent in the areas not under Japanese control. The Communists expanded their influence wherever opportunities presented themselves through mass organizations, administrative reforms, and the land- and tax-reform measures favoring the peasants—while the Kuomintang attempted to neutralize the spread of Communist influence.
Tripartite Pact
The Second Sino-Japanese War had seen tensions rise between Imperial Japan and the United States; events such as the Panay incident and the Nanking Massacre turned American public opinion against Japan. With the occupation of French Indochina in the years of 1940–41, and with the continuing war in China, the United States placed embargoes on Japan of strategic materials such as scrap metal and oil, which were vitally needed for the war effort. The Japanese were faced with the option of either withdrawing from China and losing face or seizing and securing new sources of raw materials in the resource-rich, European-controlled colonies of South East Asia—specifically British Malaya and the Dutch East Indies (modern-day Indonesia). In 1940, Imperial Japan signed the Tripartite Pact with Nazi Germany and Fascist Italy.
World War II
thumb|right|National flag of the Third Reich (Nazi Germany).
The Second World War was a global military conflict that took place in 1939–1945. It was the largest and deadliest war in history, culminating in the Holocaust and ending with the dropping of the atom bomb.
Even though Japan had been fighting in China since 1937, the conventional view is that the war began on September 1, 1939, when Nazi Germany, pursuing its Drang nach Osten ("drive to the east"), invaded Poland. Within two days the United Kingdom and France declared war on Germany, even though the fighting was confined to Poland. Pursuant to a then-secret provision of the Molotov–Ribbentrop non-aggression pact, the Soviet Union joined with Germany on September 17, 1939, to conquer Poland and to divide Eastern Europe.
The Allies were initially made up of Poland, the United Kingdom, France, Australia, Canada, New Zealand and South Africa, as well as territories controlled directly by the UK, such as the Indian Empire. All of these countries declared war on Germany in September 1939.
Following the lull in fighting, known as the "Phoney War", Germany invaded western Europe in May 1940. Six weeks later France, which had in the meantime been attacked by Italy as well, surrendered to Germany, which then tried unsuccessfully to conquer Britain. On September 27, Germany, Italy, and Japan signed a mutual defense agreement, the Tripartite Pact, and became known as the Axis Powers.
right|thumb|Ensign of the Imperial Japanese Navy.
Nine months later, on June 22, 1941, Germany launched a massive invasion of the Soviet Union, which promptly joined the Allies. Germany was now fighting a war on two fronts. This proved to be a mistake: Germany had not successfully carried out the invasion of Britain, and the war eventually turned against the Axis.
On December 7, 1941, Japan attacked the United States at Pearl Harbor, bringing it too into the war on the Allied side. China also joined the Allies, as eventually did most of the rest of the world. China was in turmoil at the time, and attacked Japanese armies through guerrilla-type warfare. By the beginning of 1942, the major combatants were aligned as follows: the British Commonwealth, the United States, and the Soviet Union were fighting Germany and Italy; and the British Commonwealth, China, and the United States were fighting Japan. The United Kingdom, the United States, the Soviet Union and China were referred to as a "trusteeship of the powerful" during World War II and were recognized as the Allied "Big Four" in the Declaration by United Nations.Hoopes, Townsend, and Douglas Brinkley. FDR and the Creation of the U.N. (Yale University Press, 1997) These four countries were considered the "Four Policemen" or "Four Sheriffs" of the Allied powers and the primary victors of World War II. From then through August 1945, battles raged across all of Europe, in the North Atlantic Ocean, across North Africa, throughout Southeast Asia, throughout China, across the Pacific Ocean and in the air over Japan.
Italy surrendered in September 1943 and was split into a German-occupied puppet state in the north and an Allied-aligned state in the south; Germany surrendered in May 1945. Following the atomic bombings of Hiroshima and Nagasaki, Japan surrendered, marking the end of the war on September 2, 1945.
It is possible that around 62 million people died in the war; estimates vary greatly. About 60% of all casualties were civilians, who died as a result of disease, starvation, genocide (in particular, the Holocaust), and aerial bombing. The former Soviet Union and China suffered the most casualties. Estimates place deaths in the Soviet Union at around 23 million, while China suffered about 10 million. No country lost a greater portion of its population than Poland: approximately 5.6 million, or 16%, of its pre-war population of 34.8 million died.
thumb|right|Flag of the Italian Empire.
The Holocaust (a term which roughly means "burnt whole") was the deliberate and systematic murder of millions of Jews and members of other groups deemed "unwanted" by the Nazi regime in Germany during World War II. Several differing views exist regarding whether it was intended from the war's beginning or whether the plans for it came about later. Regardless, persecution of Jews began well before the war started, as in the Kristallnacht (Night of Broken Glass). The Nazis used propaganda to great effect to stir up anti-Semitic feelings among ordinary Germans.
After World War II, Europe was informally split into Western and Soviet spheres of influence. Western Europe later aligned as the North Atlantic Treaty Organization (NATO) and Eastern Europe as the Warsaw Pact. There was a shift in power from Western Europe and the British Empire to the two new superpowers, the United States and the Soviet Union. These two rivals would later face off in the Cold War. In Asia, the defeat of Japan led to its democratization. China's civil war continued through and after the war, resulting eventually in the establishment of the People's Republic of China. The former colonies of the European powers began their road to independence.
Post-1945 world
thumb|The Blue Marble, a photograph of Earth as seen from Apollo 17. The second half of the 20th century saw an increase of interest in both space exploration and the environmental movement.
The mid-20th century is distinguished from most of human history in that its most significant changes were directly or indirectly economic and technological in nature. Economic development was the force behind vast changes in everyday life, to a degree which was unprecedented in human history.
Over the course of the 20th century, the world's per-capita gross domestic product grew by a factor of five,J. Bradford DeLong, Cornucopia: Increasing Wealth in the Twentieth Century. 2000. much more than all earlier centuries combined (including the 19th with its Industrial Revolution). Many economists make the case that this understates the magnitude of growth, as many of the goods and services consumed at the end of the 20th century, such as improved medicine (causing world life expectancy to increase by more than two decades) and communications technologies, were not available at any price at its beginning. However, the gulf between the world's rich and poor grew wider,Morrison, Wayne. Theoretical criminology: from modernity to post-modernism. Page 53. and the majority of the global population remained in the poor side of the divide.Millennium Ecosystem Assessment (Program). Ecosystems and Human Well-Being. The Millennium Ecosystem Assessment series. Washington, D.C.: Island Press, 2005. Page 12
Still, advancing technology and medicine have had a great impact even in the Global South. Large-scale industry and more centralized media made brutal dictatorships possible on an unprecedented scale in the middle of the century, leading to wars that were also unprecedented. However, increased communications contributed to democratization. Technological developments included airplanes and space exploration, nuclear technology, advances in genetics, and the dawning of the Information Age.
American Peace
thumb|right|National flag of the United States.
Pax Americana is an appellation applied to the historical concept of relative liberal peace in the Western world, resulting from the preponderance of power enjoyed by the United States of America beginning around the turn of the 20th century. Although the term finds its primary utility in the latter half of the 20th century, it has been used in various places and eras. Its modern connotations concern the peace established after the end of World War II in 1945.
Cold War era
The Cold War began in the mid-1940s and lasted into the early 1990s. Throughout this period, the conflict was expressed through military coalitions, espionage, weapons development, invasions, propaganda, and competitive technological development. The conflict included costly defense spending, a massive conventional and nuclear arms race, and numerous proxy wars; the two superpowers never fought one another directly.
750px|thumb|center|Borders of NATO (blue) and Warsaw Pact (red) states during the Cold war era.
The Soviet Union created the Eastern Bloc of countries that it occupied, annexing some as Soviet Socialist Republics and maintaining others as satellite states that would later form the Warsaw Pact. The United States and various western European countries began a policy of "containment" of communism and forged myriad alliances to this end, including NATO. Several of these western countries also coordinated efforts regarding the rebuilding of western Europe, including western Germany, which the Soviets opposed. In other regions of the world, such as Latin America and Southeast Asia, the Soviet Union fostered communist revolutionary movements, which the United States and many of its allies opposed and, in some cases, attempted to "roll back". Many countries were prompted to align themselves with the nations that would later form either NATO or the Warsaw Pact, though other movements would also emerge.
The Cold War saw periods of both heightened tension and relative calm. International crises arose, such as the Berlin Blockade (1948–1949), the Korean War (1950–1953), the Berlin Crisis of 1961, the Vietnam War (1959–1975), the Cuban Missile Crisis (1962), the Soviet war in Afghanistan (1979–1989) and the Able Archer 83 NATO exercise in November 1983. There were also periods of reduced tension as both sides sought détente. Direct military attacks on adversaries were deterred by the potential for mutual assured destruction using deliverable nuclear weapons. In the Cold War era, the Generation of Love and the rise of computers changed society in very different, complex ways, including higher social and local mobility.
thumb|333px|European trade blocs as of the late 1980s. EEC member states are marked in blue, EFTA – green, and Comecon – red.thumb|333px|East and West in 1980, as defined by the Cold War. The Cold War had divided Europe politically into East and West, with the Iron Curtain splitting Central Europe.
The Cold War drew to a close in the late 1980s and the early 1990s. The United States under President Ronald Reagan increased diplomatic, military, and economic pressure on the Soviet Union, which was already suffering from severe economic stagnation. In the second half of the 1980s, newly appointed Soviet leader Mikhail Gorbachev introduced the perestroika and glasnost reforms. The Soviet Union collapsed in 1991, leaving the United States as the dominant military power, though Russia retained much of the massive Soviet nuclear arsenal.
Latin America polarization
In Latin America in the 1970s, leftists acquired significant political influence, prompting the right wing, ecclesiastical authorities and a large portion of each country's upper class to support coups d'état to avoid what they perceived as a communist threat. This was further fueled by Cuban and United States intervention, which led to political polarization. Most South American countries were at some point ruled by military dictatorships supported by the United States of America. In the 1970s, the regimes of the Southern Cone collaborated in Operation Condor, killing many leftist dissidents, including some urban guerrillas. However, by the early 1990s all of these countries had restored their democracies.
Space Age
thumb|right|275px|This high-resolution image of the Hubble Ultra Deep Field includes galaxies of various ages, sizes, shapes, and colors. The smallest, reddest galaxies, are some of the most distant galaxies to have been imaged by an optical telescope
The Space Age is a period encompassing the activities related to the Space Race, space exploration, space technology, and the cultural developments influenced by these events. The Space Age began with the development of several technologies that culminated with the launch of Sputnik 1 by the Soviet Union. This was the world's first artificial satellite, orbiting the Earth in 98.1 minutes and weighing in at 83 kg. The launch of Sputnik 1 ushered in a new era of political, scientific and technological achievements that became known as the Space Age. The Space Age was characterized by rapid development of new technology in a close race mostly between the United States and the Soviet Union. The Space Age brought the first human spaceflight during the Vostok programme and reached its peak with the Apollo program which captured the imagination of much of the world's population. The landing of Apollo 11 was an event watched by over 500 million people around the world and is widely recognized as one of the defining moments of the 20th century. Since then and with the end of the space race due to the dissolution of the Soviet Union, public attention has largely moved to other areas.
Education and schools
The humanities are academic disciplines which study the human condition, using methods that are primarily analytic, critical, or speculative, as distinguished from the mainly empirical approaches of the natural and social sciences. Although many of the subjects of modern history coincide with those of standard history, the subject is taught independently by various systems of education in the world.
British education
Students can choose the subject at university. The material covered ranges from the mid-18th century to analysis of the present day. Virtually all colleges and sixth forms that do teach modern history do so alongside standard history; very few teach the subject exclusively.
Universities
At the University of Oxford 'Modern History' has a somewhat different meaning. The contrast is not with the Middle Ages but with Antiquity. The earliest period that can be studied in the Final Honour School of Modern History begins in 285.
See also
List of World Map changes
History of modern literature
Modernism framework: Premodernity, Modernism, Postmodernism
References
General information
Books
Earle, Edward Mead. An Outline of Modern History; A Syllabus with Map Studies. New York: Macmillan Co, 1921.
Grosvenor, Edwin A. Contemporary History of the World. New York and Boston: T.Y. Crowell & Co, 1899.
Taylor, William Cooke, Charles Duke Yonge, and G. W. Cox. The Student's Manual of Modern History; Containing the Rise and Progress of the Principal European Nations, Their Political History, and the Changes in Their Social Condition; with a History of the Colonies Founded by Europeans. 1880.
Websites
Internet Modern History Sourcebook, fordham.edu
Footnotes
Further reading
21st-century sources
Boyd, Andrew, Joshua Comenetz. An atlas of world affairs. Routledge, 2007. ISBN 0-415-39169-5
Black, Edwin. Internal Combustion: How Corporations and Governments Addicted the World to Oil and Derailed the Alternatives. New York: St. Martin's Press, 2006.
Briggs, Asa, and Peter Burke. A Social History of the Media: From Gutenberg to the Internet. Cambridge: Polity, 2002.
Barzun, Jacques. From Dawn to Decadence: 500 Years of Western Cultural Life : 1500 to the Present. New York: HarperCollins, 2001.
20th-century sources
Burke, Peter. A Social History of Knowledge: From Gutenberg to Diderot. Cambridge, UK: Polity, 2000.
CBS News. People of the century. Simon and Schuster, 1999. ISBN 0-684-87093-2
Wang, Ke-wen. Modern China: an encyclopedia of history, culture, and nationalism. Taylor & Francis, 1998. ISBN 0-8153-0720-9
Huffman, James L. Modern Japan: an encyclopedia of history, culture, and nationalism. Taylor & Francis, 1998. ISBN 0-8153-2525-8
Schlesinger, Arthur M. New Viewpoints in American History. New York: Macmillan, 1922.
Nock, Albert Jay. The Myth of a Guilty Nation. B.W. Huebsch, Incorporated, 1922.
Bakeless, John Edwin. The Economic Causes of Modern War; A Study of the Period: 1878–1918. New York: Printed for the Department of political science of Williams college, by Moffat, Yard and Co, 1921
Day, Clive. A History of Commerce. New York [etc.]: Longmans, Green, and Co, 1921.
Moore, Edward Caldwell. The Spread of Christianity in the Modern World. Chicago, Ill: University of Chicago Press, 1919.
Muir, Ramsay. The Expansion of Europe; The Culmination of Modern History. Boston: Houghton Mifflin Company, 1917.
Palat, Madhavan K. (ed.), Social Identities in Revolutionary Russia (Macmillan, Palgrave, UK, and St Martin's Press, New York, 2001).
Palat, Madhavan K. (ed.), History of Civilizations of Central Asia, vol. 6, Towards the Contemporary Period: From the Mid-Nineteenth Century to the End of the Twentieth Century, UNESCO, Paris, 2005.
Robinson, James Harvey, and Charles Austin Beard. Readings in Modern European History; A Collection of Extracts from the Sources Chosen with the Purpose of Illustrating Some of the Chief Phases of Development of Europe During the Last Two Hundred Years. Boston: Ginn & Co, 1908.
External links
General
Vistorica – Timelines of European modern history
Journal of Contemporary History. SAGE Publications. (Print )
Contemporary History Institute (CHI). ohiou.edu (analyzes the contemporary period in world affairs, from World War II to the present, from an interdisciplinary historical perspective).
China and Europe, 1500–2000 and Beyond: What is Modern?. Columbia University
Videos
The French Revolution: Crash Course World History #29 – YouTube
Haitian Revolutions: Crash Course World History #30 – YouTube
Latin American Revolutions: Crash Course World History #31 – YouTube
Coal, Steam, and The Industrial Revolution: Crash Course World History #32 – YouTube
Capitalism and Socialism: Crash Course World History #33 – YouTube
Samurai, Daimyo, Matthew Perry, and Nationalism: Crash Course World History #34 – YouTube
Imperialism: Crash Course World History #35 – YouTube
Archdukes, Cynicism, and World War I: Crash Course World History #36 – YouTube
Communists, Nationalists, and China's Revolutions: Crash Course World History #37 – YouTube
World War II: Crash Course World History #38 – YouTube
USA vs USSR Fight! The Cold War: Crash Course World History #39 – YouTube
Decolonization and Nationalism Triumphant: Crash Course World History #40 – YouTube
Category:Historical eras
Category:Historiography
Category:Postmodern theory
Category:Articles which contain graphical timelines
Planck constant

Values of h: 6.626070040(81)×10⁻³⁴ J⋅s; 4.135667662(25)×10⁻¹⁵ eV⋅s; 2π EP⋅tP
Values of ħ (h-bar): 1.054571800(13)×10⁻³⁴ J⋅s; 6.582119514(40)×10⁻¹⁶ eV⋅s; 1 EP⋅tP (by definition)
Values of hc: 1.98644582×10⁻²⁵ J⋅m; 1.23984193 eV⋅μm; 2π EP⋅ℓP
Values of ħc: 3.16152671×10⁻²⁶ J⋅m; 0.197326979 eV⋅μm; 1 EP⋅ℓP
(Barry N. Taylor of the Data Center in close collaboration with Peter J. Mohr of the Physical Measurement Laboratory's Atomic Physics Division. Termed the "2014 CODATA recommended values," they are generally recognized worldwide for use in all fields of science and technology. The values became available on 25 June 2015 and replaced the 2010 CODATA set. They are based on all of the data available through 31 December 2014. Available: http://physics.nist.gov)
thumb|right|Plaque at the Humboldt University of Berlin: " Max Planck, discoverer of the elementary quantum of action h, taught in this building from 1889 to 1928."
The Planck constant (denoted , also called Planck's constant) is a physical constant that is the quantum of action, central in quantum mechanics.
First recognized in 1900 by Max Planck, it was originally the proportionality constant between the minimal increment of energy, E, of a hypothetical electrically charged oscillator in a cavity that contained black body radiation, and the frequency, f, of its associated electromagnetic wave. In 1905, the value E = hf, the minimal energy increment of a hypothetical oscillator, was theoretically associated by Albert Einstein with a "quantum" or minimal element of the energy of the electromagnetic wave itself. The light quantum behaved in some respects as an electrically neutral particle, as opposed to an electromagnetic wave. It was eventually called the photon.
The Planck–Einstein relation connects the particulate photon energy E with its associated wave frequency f:

E = hf
This energy is extremely small in terms of ordinarily perceived everyday objects.
Since the frequency f, wavelength λ, and speed of light c are related by c = fλ, the relation can also be expressed as

E = hc/λ
This leads to another relationship involving the Planck constant. With p denoting the linear momentum of a particle (not only a photon, but other particles as well), the de Broglie wavelength λ of the particle is given by

λ = h/p
In applications where it is natural to use the angular frequency (i.e. where the frequency is expressed in terms of radians per second instead of cycles per second or hertz) it is often useful to absorb a factor of 2π into the Planck constant. The resulting constant is called the reduced Planck constant or Dirac constant. It is equal to the Planck constant divided by 2π, and is denoted ħ (pronounced "h-bar"):

ħ = h/2π
The energy of a photon with angular frequency ω, where ω = 2πf, is given by

E = ħω
while its linear momentum relates to

p = ħk
where k is a wavenumber. In 1923, Louis de Broglie generalized the Planck–Einstein relation by postulating that the Planck constant represents the proportionality between the momentum and the quantum wavelength of not just the photon, but the quantum wavelength of any particle. This was confirmed by experiments soon afterwards. This holds throughout quantum theory, including electrodynamics.
These two relations are the temporal and spatial components of the special relativistic expression using 4-vectors: Pμ = (E/c, p) = ħKμ = ħ(ω/c, k).
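These relations are easy to evaluate numerically. The following minimal Python sketch is illustrative only: it uses the 2014 CODATA constants quoted in this article together with an arbitrary example frequency and electron speed.

import math

h = 6.626070040e-34      # Planck constant, J*s (2014 CODATA)
hbar = h / (2 * math.pi) # reduced Planck constant, hbar = h / (2*pi)
m_e = 9.10938356e-31     # electron mass, kg (2014 CODATA)

f = 5.0e14               # example frequency, Hz
omega = 2 * math.pi * f  # angular frequency, omega = 2*pi*f
print(h * f, hbar * omega)  # E = h*f and E = hbar*omega give the same ~3.3e-19 J

v = 1.0e6                # example (non-relativistic) electron speed, m/s
p = m_e * v              # linear momentum, p = m*v
print(h / p)             # de Broglie wavelength lambda = h/p, about 7.3e-10 m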
Classical statistical mechanics requires the existence of h (but does not define its value). Eventually, following upon Planck's discovery, it was recognized that physical action cannot take on an arbitrary value. Instead, it must be some multiple of a very small quantity, the "quantum of action", now called the Planck constant. This is the so-called "old quantum theory" developed by Bohr and Sommerfeld, in which particle trajectories exist but are hidden, but quantum laws constrain them based on their action. This view has been largely replaced by fully modern quantum theory, in which definite trajectories of motion do not even exist; rather, the particle is represented by a wavefunction spread out in space and in time. Thus there is no value of the action as classically defined. Related to this is the concept of energy quantization which existed in old quantum theory and also exists in altered form in modern quantum physics. Classical physics cannot explain either quantization of energy or the lack of a classical particle motion.
In many cases, such as for monochromatic light or for atoms, quantization of energy also implies that only certain energy levels are allowed, and values in between are forbidden.
Value
The Planck constant has dimensions of physical action; i.e., energy multiplied by time, or momentum multiplied by distance, or angular momentum. In SI units, the Planck constant is expressed in joule-seconds (J⋅s), (N⋅m⋅s) or (kg⋅m²⋅s⁻¹).
The value of the Planck constant is:

h = 6.626070040(81)×10⁻³⁴ J⋅s = 4.135667662(25)×10⁻¹⁵ eV⋅s

The value of the reduced Planck constant is:

ħ = h/2π = 1.054571800(13)×10⁻³⁴ J⋅s = 6.582119514(40)×10⁻¹⁶ eV⋅s
The two digits inside the parentheses denote the standard uncertainty in the last two digits of the value. The figures cited here are the 2014 CODATA recommended values for the constants and their uncertainties. The 2014 CODATA results were made available in June 2015 and represent the best-known, internationally accepted values for these constants, based on all data published as of 31 December 2014. New CODATA figures are normally produced every four years.
Significance of the value
The Planck constant is related to the quantization of light and matter. It can be seen as a subatomic-scale constant. In a unit system adapted to subatomic scales, the electronvolt is the appropriate unit of energy and the petahertz the appropriate unit of frequency. Atomic unit systems are based (in part) on the Planck constant.
The numerical value of the Planck constant depends entirely on the system of units used to measure it. When it is expressed in SI units, it is one of the smallest constants used in physics. This reflects the fact that on a scale adapted to humans, where energies are typically of the order of kilojoules and times are typically of the order of seconds or minutes, the Planck constant (the quantum of action) is very small.
Equivalently, the smallness of the Planck constant reflects the fact that everyday objects and systems are made of a large number of particles. For example, green light with a wavelength of 555 nanometres (a wavelength that can be perceived by the human eye) has a frequency of 540 THz (5.40×10¹⁴ Hz). Each photon has an energy E = hf = 3.58×10⁻¹⁹ J. That is a very small amount of energy in terms of everyday experience, but everyday experience is not concerned with individual photons any more than with individual atoms or molecules. An amount of light compatible with everyday experience is the energy of one mole of photons; its energy can be computed by multiplying the photon energy by the Avogadro constant, NA ≈ 6.022×10²³ mol⁻¹. The result is that green light of wavelength 555 nm has an energy of about 216 kJ/mol, a typical energy of everyday life.
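The figures above can be checked with a short, illustrative Python calculation; the only inputs are the CODATA values of h, c and the Avogadro constant, and the 555 nm example wavelength.

h = 6.626070040e-34      # Planck constant, J*s (2014 CODATA)
c = 299792458.0          # speed of light, m/s (exact)
N_A = 6.022140857e23     # Avogadro constant, 1/mol (2014 CODATA)

wavelength = 555e-9      # green light, m
f = c / wavelength       # frequency, ~5.40e14 Hz (540 THz)
E_photon = h * f         # energy per photon, ~3.58e-19 J
E_mole = E_photon * N_A  # energy per mole of photons, ~2.16e5 J/mol (216 kJ/mol)
print(f, E_photon, E_mole)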
Origins
Black-body radiation
thumb|right|250px|Intensity of light emitted from a black body at any given frequency. Each color is a different temperature. Planck was the first to explain the shape of these curves.
In the last years of the nineteenth century, Planck was investigating the problem of black-body radiation first posed by Kirchhoff some forty years earlier. It is well known that hot objects glow, and that hotter objects glow brighter than cooler ones. The electromagnetic field obeys laws of motion similarly to a mass on a spring, and can come to thermal equilibrium with hot atoms. The hot object in equilibrium with light absorbs just as much light as it emits. If the object is black, meaning it absorbs all the light that hits it, then its thermal light emission is maximized.
The assumption that black-body radiation is thermal leads to an accurate prediction: the total amount of emitted energy goes up with the temperature according to a definite rule, the Stefan–Boltzmann law (1879–84). But it was also known that the colour of the light given off by a hot object changes with the temperature, so that "white hot" is hotter than "red hot". Nevertheless, Wilhelm Wien discovered the mathematical relationship between the peaks of the curves at different temperatures, by using the principle of adiabatic invariance. At each different temperature, the curve is moved over by Wien's displacement law (1893). Wien also proposed an approximation for the spectrum of the object, which was correct at high frequencies (short wavelength) but not at low frequencies (long wavelength). It still was not clear why the spectrum of a hot object had the form that it has (see diagram).
Planck hypothesized that the equations of motion for light describe a set of harmonic oscillators, one for each possible frequency. He examined how the entropy of the oscillators varied with the temperature of the body, trying to match Wien's law, and was able to derive an approximate mathematical function for the black-body spectrum. English translation: "On the Law of Distribution of Energy in the Normal Spectrum".
However, Planck soon realized that his solution was not unique. There were several different solutions, each of which gave a different value for the entropy of the oscillators. To save his theory, Planck had to resort to using the then controversial theory of statistical mechanics, which he described as "an act of despair … I was ready to sacrifice any of my previous convictions about physics." One of his new boundary conditions was to treat the total energy of the oscillators not as a continuous, infinitely divisible quantity, but as a discrete quantity composed of an integral number of finite, equal parts, each such part being an energy element ε.
With this new condition, Planck had imposed the quantization of the energy of the oscillators, "a purely formal assumption … actually I did not think much about it…" in his own words, but one which would revolutionize physics. Applying this new approach to Wien's displacement law showed that the "energy element" must be proportional to the frequency of the oscillator, the first version of what is now sometimes termed the "Planck–Einstein relation":

E = hf
Planck was able to calculate the value of h from experimental data on black-body radiation: his result, h = 6.55×10⁻³⁴ J⋅s, is within 1.2% of the currently accepted value. He was also able to make the first determination of the Boltzmann constant kB from the same data and theory.
350px|thumb|Note that the (black) Rayleigh–Jeans curve never touches the Planck curve.
Prior to Planck's work, it had been assumed that the energy of a body could take on any value whatsoever – that it was a continuous variable. The Rayleigh–Jeans law makes close predictions for a narrow range of values at one limit of temperatures, but the results diverge more and more strongly as temperatures increase. To make Planck's law, which correctly predicts blackbody emissions, it was necessary to multiply the classical expression by a complex factor that involves h in both the numerator and the denominator. The influence of h in this complex factor would not disappear if it were set to zero or to any other value. Making an equation out of Planck's law that would reproduce the Rayleigh–Jeans law could not be done by changing the values of h, of the Boltzmann constant, or of any other constant or variable in the equation. In this case the picture given by classical physics is not duplicated by a range of results in the quantum picture.
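The divergence can be made concrete with a small, illustrative Python sketch that compares the spectral radiance per unit frequency given by Planck's law and by the Rayleigh–Jeans law; the 5000 K temperature and the sample frequencies are arbitrary example values.

import math

h = 6.626070040e-34   # Planck constant, J*s
c = 299792458.0       # speed of light, m/s
kB = 1.38064852e-23   # Boltzmann constant, J/K (2014 CODATA)

def planck(nu, T):
    # Planck's law: B = (2*h*nu^3/c^2) / (exp(h*nu/(kB*T)) - 1)
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (kB * T))

def rayleigh_jeans(nu, T):
    # Classical Rayleigh-Jeans law: B = 2*nu^2*kB*T/c^2
    return 2 * nu**2 * kB * T / c**2

T = 5000.0
for nu in (1e11, 1e13, 1e15):   # low, intermediate and high frequency, Hz
    print(nu, planck(nu, T), rayleigh_jeans(nu, T))
# The two laws agree at low frequency; at 1e15 Hz the classical law vastly overshoots.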
The black-body problem was revisited in 1905, when Rayleigh and Jeans (on the one hand) and Einstein (on the other hand) independently proved that classical electromagnetism could never account for the observed spectrum. These proofs are commonly known as the "ultraviolet catastrophe", a name coined by Paul Ehrenfest in 1911. They contributed greatly (along with Einstein's work on the photoelectric effect) in convincing physicists that Planck's postulate of quantized energy levels was more than a mere mathematical formalism. The very first Solvay Conference in 1911 was devoted to "the theory of radiation and quanta". Max Planck received the 1918 Nobel Prize in Physics "in recognition of the services he rendered to the advancement of Physics by his discovery of energy quanta".
Photoelectric effect
The photoelectric effect is the emission of electrons (called "photoelectrons") from a surface when light is shone on it. It was first observed by Alexandre Edmond Becquerel in 1839, although credit is usually reserved for Heinrich Hertz,See, e.g., who published the first thorough investigation in 1887. Another particularly thorough investigation was published by Philipp Lenard in 1902. Einstein's 1905 paper discussing the effect in terms of light quanta would earn him the Nobel Prize in 1921, when his predictions had been confirmed by the experimental work of Robert Andrews Millikan. The Nobel committee awarded the prize for his work on the photo-electric effect, rather than relativity, both because of a bias against purely theoretical physics not grounded in discovery or experiment, and dissent amongst its members as to the actual proof that relativity was real., pp. 309–314.
Prior to Einstein's paper, electromagnetic radiation such as visible light was considered to behave as a wave: hence the use of the terms "frequency" and "wavelength" to characterise different types of radiation. The energy transferred by a wave in a given time is called its intensity. The light from a theatre spotlight is more intense than the light from a domestic lightbulb; that is to say that the spotlight gives out more energy per unit time and per unit space (and hence consumes more electricity) than the ordinary bulb, even though the colour of the light might be very similar. Other waves, such as sound or the waves crashing against a seafront, also have their own intensity. However, the energy account of the photoelectric effect didn't seem to agree with the wave description of light.
The "photoelectrons" emitted as a result of the photoelectric effect have a certain kinetic energy, which can be measured. This kinetic energy (for each photoelectron) is independent of the intensity of the light, but depends linearly on the frequency; and if the frequency is too low (corresponding to a photon energy that is less than the work function of the material), no photoelectrons are emitted at all, unless a plurality of photons, whose energetic sum is greater than the energy of the photoelectrons, acts virtually simultaneously (multiphoton effect). Assuming the frequency is high enough to cause the photoelectric effect, a rise in intensity of the light source causes more photoelectrons to be emitted with the same kinetic energy, rather than the same number of photoelectrons to be emitted with higher kinetic energy.
Einstein's explanation for these observations was that light itself is quantized; that the energy of light is not transferred continuously as in a classical wave, but only in small "packets" or quanta. The size of these "packets" of energy, which would later be named photons, was to be the same as Planck's "energy element", giving the modern version of the Planck–Einstein relation:

E = hf
Einstein's postulate was later proven experimentally: the constant of proportionality between the frequency of incident light (f) and the kinetic energy of photoelectrons (E) was shown to be equal to the Planck constant (h).
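A brief illustrative calculation with Einstein's photoelectric equation, Ek = hf − φ, shows the linear dependence on frequency; the work function below (about 2.28 eV, a figure typical of sodium) and the 500 nm illumination are example inputs only.

h = 6.626070040e-34          # Planck constant, J*s
eV = 1.6021766208e-19        # joules per electronvolt (2014 CODATA)

work_function = 2.28 * eV    # example work function, roughly that of sodium
f = 6.0e14                   # incident frequency, Hz (500 nm light)
E_k = h * f - work_function  # Einstein's photoelectric equation, E_k = h*f - phi
if E_k > 0:
    print(E_k / eV)          # maximum kinetic energy, ~0.2 eV
else:
    print("below threshold: no photoelectrons")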
Atomic structure
thumb|right|A schematization of the Bohr model of the hydrogen atom. The transition shown from the level to the level gives rise to visible light of wavelength 656 nm (red), as the model predicts.
Niels Bohr introduced the first quantized model of the atom in 1913, in an attempt to overcome a major shortcoming of Rutherford's classical model. In classical electrodynamics, a charge moving in a circle should radiate electromagnetic radiation. If that charge were to be an electron orbiting a nucleus, the radiation would cause it to lose energy and spiral down into the nucleus. Bohr solved this paradox with explicit reference to Planck's work: an electron in a Bohr atom could only have certain defined energies En:

En = −hc0R∞/n²
where c0 is the speed of light in vacuum, R∞ is an experimentally determined constant (the Rydberg constant) and n is any integer (n = 1, 2, 3, …). Once the electron reached the lowest energy level (), it could not get any closer to the nucleus (lower energy). This approach also allowed Bohr to account for the Rydberg formula, an empirical description of the atomic spectrum of hydrogen, and to account for the value of the Rydberg constant R∞ in terms of other fundamental constants.
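As a numerical check, these energy levels reproduce the roughly 656 nm red line mentioned in the figure caption. A minimal Python sketch, using the CODATA Rydberg constant and ignoring the small reduced-mass correction:

h = 6.626070040e-34       # Planck constant, J*s
c0 = 299792458.0          # speed of light in vacuum, m/s
R_inf = 10973731.568508   # Rydberg constant, 1/m (2014 CODATA)

def E_n(n):
    # Bohr energy levels: E_n = -h*c0*R_inf / n^2 (joules)
    return -h * c0 * R_inf / n**2

delta_E = E_n(3) - E_n(2)      # energy of the photon emitted in the 3 -> 2 transition
wavelength = h * c0 / delta_E  # lambda = h*c0 / E
print(wavelength)              # ~6.56e-7 m, i.e. about 656 nm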
Bohr also introduced the quantity ħ = h/2π, now known as the reduced Planck constant, as the quantum of angular momentum. At first, Bohr thought that this was the angular momentum of each electron in an atom: this proved incorrect and, despite developments by Sommerfeld and others, an accurate description of the electron angular momentum proved beyond the Bohr model. The correct quantization rules for electrons – in which the energy reduces to the Bohr model equation in the case of the hydrogen atom – were given by Heisenberg's matrix mechanics in 1925 and the Schrödinger wave equation in 1926: the reduced Planck constant remains the fundamental quantum of angular momentum. In modern terms, if J is the total angular momentum of a system with rotational invariance, and Jz the angular momentum measured along any given direction, these quantities can only take on the values

J² = j(j + 1)ħ², j = 0, 1/2, 1, 3/2, …
Jz = mħ, m = −j, −j + 1, …, j
Uncertainty principle
The Planck constant also occurs in statements of Werner Heisenberg's uncertainty principle. Given a large number of particles prepared in the same state, the uncertainty in their position, Δx, and the uncertainty in their momentum (in the same direction), Δp, obey

Δx Δp ≥ ħ/2
where the uncertainty is given as the standard deviation of the measured value from its expected value. There are a number of other such pairs of physically measurable values which obey a similar rule; one example is time vs. energy. The either-or nature of uncertainty forces measurement attempts to choose between trade-offs; because the quantities involved come in conjugate pairs (as in Fourier analysis), the trade-offs are of an either-or kind, rather than the compromises and gray areas of time series analysis.
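For a sense of scale, the following illustrative sketch applies Δx·Δp ≥ ħ/2 to an electron confined to a region of atomic size; the 0.1 nm confinement length is an arbitrary example value.

hbar = 1.054571800e-34   # reduced Planck constant, J*s (2014 CODATA)
m_e = 9.10938356e-31     # electron mass, kg

delta_x = 1.0e-10                 # position uncertainty, ~0.1 nm
delta_p = hbar / (2 * delta_x)    # minimum momentum uncertainty from dx*dp >= hbar/2
delta_v = delta_p / m_e           # corresponding minimum velocity spread
print(delta_p, delta_v)           # ~5.3e-25 kg*m/s and ~5.8e5 m/s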
In addition to some assumptions underlying the interpretation of certain values in the quantum mechanical formulation, one of the fundamental cornerstones to the entire theory lies in the commutator relationship between the position operator x̂ and the momentum operator p̂:

[x̂i, p̂j] = iħδij
where δij is the Kronecker delta.
Dependent physical constants
There are several related constants for which more than 99% of the uncertainty in the 2014 CODATA values is due to the uncertainty in the value of the Planck constant, as indicated by the square of the correlation coefficient (). The Planck constant is (with one or two exceptions)The main exceptions are the Newtonian constant of gravitation G () and the gas constant R (). The uncertainty in the value of the gas constant also affects those physical constants which are related to it, such as the Boltzmann constant and the Loschmidt constant. the fundamental physical constant which is known to the lowest level of precision, with a 1σ relative uncertainty ur of 1.2×10⁻⁸.
Rest mass of the electron
The normal textbook derivation of the Rydberg constant R∞ defines it in terms of the electron mass me and a variety of other physical constants: R∞ = mee⁴/(8ε0²h³c0) = α²mec0/(2h).
However, the Rydberg constant can be determined very accurately () from the atomic spectrum of hydrogen, whereas there is no direct method to measure the mass of a stationary electron in SI units. Hence the equation for the computation of me becomes

me = 2R∞h/(c0α²)
where c0 is the speed of light and α is the fine-structure constant. The speed of light has an exactly defined value in SI units, and the fine-structure constant can be determined more accurately () than the Planck constant. Thus, the uncertainty in the value of the electron rest mass is due entirely to the uncertainty in the value of the Planck constant ().
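A minimal Python sketch of this computation, using the 2014 CODATA values of R∞, α and h (the result is only as precise as the input value of h):

h = 6.626070040e-34       # Planck constant, J*s
c0 = 299792458.0          # speed of light, m/s (exact)
R_inf = 10973731.568508   # Rydberg constant, 1/m
alpha = 7.2973525664e-3   # fine-structure constant

m_e = 2 * R_inf * h / (c0 * alpha**2)   # m_e = 2*R_inf*h / (c0*alpha^2)
print(m_e)                              # ~9.109e-31 kg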
Avogadro constant
The Avogadro constant NA is determined as the ratio of the mass of one mole of electrons to the mass of a single electron; the mass of one mole of electrons is the "relative atomic mass" of an electron Ar(e), which can be measured in a Penning trap (), multiplied by the molar mass constant Mu, which is defined as 0.001 kg/mol (1 g/mol). Hence NA = Ar(e)Mu/me.
The dependence of the Avogadro constant on the Planck constant () also holds for the physical constants which are related to amount of substance, such as the atomic mass constant. The uncertainty in the value of the Planck constant limits the knowledge of the masses of atoms and subatomic particles when expressed in SI units. It is possible to measure the masses more precisely in atomic mass units, but not to convert them more precisely into kilograms.
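The corresponding illustrative calculation for NA, again with 2014 CODATA inputs:

A_r_e = 5.48579909070e-4   # relative atomic mass of the electron (2014 CODATA)
M_u = 1.0e-3               # molar mass constant, kg/mol (defined value)
m_e = 9.10938356e-31       # electron mass, kg (2014 CODATA)

N_A = A_r_e * M_u / m_e    # N_A = A_r(e) * M_u / m_e
print(N_A)                 # ~6.022e23 per mole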
Elementary charge
Sommerfeld originally defined the fine-structure constant α as:

α = e²/(4πε0ħc0) = e²μ0c0/(2h)
where e is the elementary charge, ε0 is the electric constant (also called the permittivity of free space), and μ0 is the magnetic constant (also called the permeability of free space). The latter two constants have fixed values in the International System of Units. However, α can also be determined experimentally, notably by measuring the electron spin g-factor ge, then comparing the result with the value predicted by quantum electrodynamics.
At present, the most precise value for the elementary charge is obtained by rearranging the definition of α to obtain the following definition of e in terms of α and h:

e = √(2αh/(μ0c0))
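A numerical check of this relation, illustrative only; α and h are the 2014 CODATA values, and μ0 = 4π×10⁻⁷ N⋅A⁻² is exact in the SI as described here.

import math

h = 6.626070040e-34        # Planck constant, J*s
c0 = 299792458.0           # speed of light, m/s (exact)
mu0 = 4 * math.pi * 1e-7   # magnetic constant, exact in this version of the SI
alpha = 7.2973525664e-3    # fine-structure constant

e = math.sqrt(2 * alpha * h / (mu0 * c0))   # e = sqrt(2*alpha*h / (mu0*c0))
print(e)                                    # ~1.602e-19 C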
Bohr magneton and nuclear magneton
The Bohr magneton and the nuclear magneton are units which are used to describe the magnetic properties of the electron and atomic nuclei respectively. The Bohr magneton is the magnetic moment which would be expected for an electron if it behaved as a spinning charge according to classical electrodynamics. It is defined in terms of the reduced Planck constant, the elementary charge and the electron mass, all of which depend on the Planck constant: μB = eħ/(2me). The final dependence on h^(1/2) () can be found by expanding the variables.
The nuclear magneton has a similar definition, but corrected for the fact that the proton is much more massive than the electron. The ratio of the electron relative atomic mass to the proton relative atomic mass can be determined experimentally to a high level of precision ().
Determination
[Table: recent determinations of h, listing for each method – watt balance, X-ray crystal density, Josephson constant, magnetic resonance and Faraday constant – the measured value of h in J⋅s and its relative uncertainty, alongside the CODATA 2010 recommended value.] P.J. Mohr, B.N. Taylor, and D.B. Newell (2011), "The 2010 CODATA Recommended Values of the Fundamental Physical Constants" (Web Version 6.0). This database was developed by J. Baker, M. Douma, and S. Kotochigova. Available: http://physics.nist.gov. National Institute of Standards and Technology, Gaithersburg, MD 20899. The nine recent determinations of the Planck constant cover five separate methods. Where there is more than one recent determination for a given method, the value of h given here is a weighted mean of the results, as calculated by CODATA.
In principle, the Planck constant could be determined by examining the spectrum of a black-body radiator or the kinetic energy of photoelectrons, and this is how its value was first calculated in the early twentieth century. In practice, these are no longer the most accurate methods. The CODATA value quoted here is based on three watt-balance measurements of KJ²RK and one inter-laboratory determination of the molar volume of silicon, but is mostly determined by a 2007 watt-balance measurement made at the U.S. National Institute of Standards and Technology (NIST). Five other measurements by three different methods were initially considered, but not included in the final refinement as they were too imprecise to affect the result.
There are both practical and theoretical difficulties in determining h. The practical difficulties can be illustrated by the fact that the two most accurate methods, the watt balance and the X-ray crystal density method, do not appear to agree with one another. The most likely reason is that the measurement uncertainty for one (or both) of the methods has been estimated too low – it is (or they are) not as precise as is currently believed – but for the time being there is no indication which method is at fault.
The theoretical difficulties arise from the fact that all of the methods except the X-ray crystal density method rely on the theoretical basis of the Josephson effect and the quantum Hall effect. If these theories are slightly inaccurate – though there is no evidence at present to suggest they are – the methods would not give accurate values for the Planck constant. More importantly, the values of the Planck constant obtained in this way cannot be used as tests of the theories without falling into a circular argument. Fortunately, there are other statistical ways of testing the theories, and the theories have yet to be refuted.
Josephson constant
The Josephson constant KJ relates the potential difference U generated by the Josephson effect at a "Josephson junction" with the frequency ν of the microwave radiation. The theoretical treatment of the Josephson effect suggests very strongly that KJ = 2e/h.
The Josephson constant may be measured by comparing the potential difference generated by an array of Josephson junctions with a potential difference which is known in SI volts. The measurement of the potential difference in SI units is done by allowing an electrostatic force to cancel out a measurable gravitational force. Assuming the validity of the theoretical treatment of the Josephson effect, KJ is related to the Planck constant by

h = 8α/(μ0c0KJ²)
Watt balance
A watt balance is an instrument for comparing two powers, one of which is measured in SI watts and the other of which is measured in conventional electrical units. From the definition of the conventional watt W90, this gives a measure of the product KJ²RK in SI units, where RK is the von Klitzing constant which appears in the quantum Hall effect. If the theoretical treatments of the Josephson effect and the quantum Hall effect are valid, and in particular assuming that RK = h/e², the measurement of KJ²RK is a direct determination of the Planck constant, since KJ²RK = 4/h.
Magnetic resonance
The gyromagnetic ratio γ is the constant of proportionality between the frequency ν of nuclear magnetic resonance (or electron paramagnetic resonance for electrons) and the applied magnetic field B. It is difficult to measure gyromagnetic ratios precisely because of the difficulties in precisely measuring B, but the value for protons in water at 25 °C is known to better than one part per million. The protons are said to be "shielded" from the applied magnetic field by the electrons in the water molecule, the same effect that gives rise to chemical shift in NMR spectroscopy, and this is indicated by a prime on the symbol for the gyromagnetic ratio, γ′p. The gyromagnetic ratio is related to the shielded proton magnetic moment μ′p, the spin number I (I = 1/2 for protons) and the reduced Planck constant:

γ′p = μ′p/(Iħ) = 2μ′p/ħ
The ratio of the shielded proton magnetic moment μ′p to the electron magnetic moment μe can be measured separately and to high precision, as the imprecisely known value of the applied magnetic field cancels itself out in taking the ratio. The value of μe in Bohr magnetons is also known: it is half the electron g-factor ge. Hence

γ′p = (μ′p/μe) ge e/(2me)
A further complication is that the measurement of γ′p involves the measurement of an electric current: this is invariably measured in conventional amperes rather than in SI amperes, so a conversion factor is required. The symbol Γ′p-90 is used for the measured gyromagnetic ratio using conventional electrical units. In addition, there are two methods of measuring the value, a "low-field" method and a "high-field" method, and the conversion factors are different in the two cases. Only the high-field value Γ′p-90(hi) is of interest in determining the Planck constant.
Substitution gives the expression for the Planck constant in terms of Γ′p-90(hi):
Faraday constant
The Faraday constant F is the charge of one mole of electrons, equal to the Avogadro constant NA multiplied by the elementary charge e. It can be determined by careful electrolysis experiments, measuring the amount of silver dissolved from an electrode in a given time and for a given electric current. In practice, it is measured in conventional electrical units, and so given the symbol F90. Substituting the definitions of NA and e, and converting from conventional electrical units to SI units, gives the relation to the Planck constant.
X-ray crystal density
The X-ray crystal density method is primarily a method for determining the Avogadro constant NA but as the Avogadro constant is related to the Planck constant it also determines a value for h. The principle behind the method is to determine NA as the ratio between the volume of the unit cell of a crystal, measured by X-ray crystallography, and the molar volume of the substance. Crystals of silicon are used, as they are available in high quality and purity by the technology developed for the semiconductor industry. The unit cell volume is calculated from the spacing between two crystal planes referred to as d220. The molar volume Vm(Si) requires a knowledge of the density of the crystal and the atomic weight of the silicon used. The Planck constant is given by
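a relation that, writing Mu for the molar mass constant, Ar(e) for the relative atomic mass of the electron, R∞ for the Rydberg constant and α for the fine-structure constant (all assumed here rather than defined above), and using the fact that the cubic unit cell of silicon contains eight atoms with lattice parameter a = √8 d220 (so that NA = Vm(Si)/(2√2 d220³)), can plausibly be written as
\[ h = \frac{\sqrt{2}\,c\,M_\text{u}\,A_\text{r}(\text{e})\,\alpha^{2}\,d_{220}^{3}}{R_\infty\,V_\text{m}(\text{Si})} . \]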
Particle accelerator
An experimental measurement of the Planck constant at the Large Hadron Collider laboratory was carried out in 2011. The study, called PCC, used the giant particle accelerator to better understand the relationship between the Planck constant and the measurement of distances in space.
Fixation
As mentioned above, the numerical value of the Planck constant depends on the system of units used to describe it. Its value in SI units is known to 50 parts per billion but its value in atomic units is known exactly, because of the way the scale of atomic units is defined. The same is true of conventional electrical units, where the Planck constant (denoted h90 to distinguish it from its value in SI units) has an exactly known value,
with KJ–90 and RK–90 being exactly defined constants. Atomic units and conventional electrical units are very useful in their respective fields, because the uncertainty in the final result does not depend on an uncertain conversion factor, only on the uncertainty of the measurement itself.
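Given the definitions KJ = 2e/h and RK = h/e² quoted above, the conventional value is exactly
\[ h_{90} = \frac{4}{K_\text{J-90}^{2}\,R_\text{K-90}} . \]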
There are a number of proposals to redefine certain of the SI base units in terms of fundamental physical constants.94th Meeting of the International Committee for Weights and Measures (2005). Recommendation 1: Preparative steps towards new definitions of the kilogram, the ampere, the kelvin and the mole in terms of fundamental constants This has already been done for the metre, which is defined in terms of a fixed value of the speed of light. The most urgent unit on the list for redefinition is the kilogram, whose value has been fixed for all science (since 1889) by the mass of a small cylinder of platinum–iridium alloy kept in a vault just outside Paris. While nobody knows if the mass of the International Prototype Kilogram has changed since 1889 – its mass expressed in kilograms is by definition unchanged at exactly 1 kg, and therein lies one of the problems – it is known that over such a timescale the many similar Pt–Ir alloy cylinders kept in national laboratories around the world have changed their relative mass by several tens of parts per million, however carefully they are stored, and the more so the more they have been taken out and used as mass standards. A change of several tens of micrograms in one kilogram is equivalent to the current uncertainty in the value of the Planck constant in SI units.
The legal process to change the definition of the kilogram is already underway, but it was decided that no final decision would be made before the next meeting of the General Conference on Weights and Measures in 2011.23rd General Conference on Weights and Measures (2007). Resolution 12: On the possible redefinition of certain base units of the International System of Units (SI). (For more detailed information, see kilogram definitions.) The Planck constant is a leading contender to form the basis of the new definition, although not the only one. Possible new definitions include "the mass of a body at rest whose equivalent energy equals the energy of photons whose frequencies sum to ", or simply "the kilogram is defined so that the Planck constant equals ".
The BIPM provided Draft Resolution A in anticipation of the 24th General Conference on Weights and Measures meeting (2011-10-17 through 2011-10-21), detailing the considerations "On the possible future revision of the International System of Units, the SI".
Watt balances already measure mass in terms of the Planck constant: at present, standard mass is taken as fixed and the measurement is performed to determine the Planck constant but, were the Planck constant to be fixed in SI units, the same experiment would be a measurement of the mass. The relative uncertainty in the measurement would remain the same.
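To illustrate the principle only (not any particular apparatus), the moving phase of a watt balance equates electrical and mechanical power, UI = mgv, with the electrical quantities traceable to the Josephson and quantum Hall effects and hence to the Planck constant. A minimal sketch in Python, with entirely hypothetical numbers:

# Watt-balance principle: electrical power U*I balances mechanical power m*g*v,
# so with h fixed (and U, I traceable to it) the experiment yields the mass m.
U = 1.018      # volts across the moving coil, from a Josephson array (hypothetical value)
I = 0.010      # amperes through the coil, from a quantum Hall resistance (hypothetical value)
g = 9.80665    # m/s^2, local gravitational acceleration (standard value, assumed)
v = 0.002      # m/s, coil velocity during the moving phase (hypothetical value)
m = (U * I) / (g * v)          # inferred mass in kilograms
print(f"inferred mass: {m:.6f} kg")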
Mass standards could also be constructed from silicon crystals or by other atom-counting methods. Such methods require a knowledge of the Avogadro constant, which fixes the proportionality between atomic mass and macroscopic mass but, with a defined value of the Planck constant, NA would be known to the same level of uncertainty as (if not better than) current methods of comparing macroscopic mass.
See also
Basic concepts of quantum mechanics
Planck units
Wave–particle duality
Notes
References
External links
Quantum of Action and Quantum of Spin – Numericana
Child labour
right|frame|A succession of laws on child labour, the so-called Factory Acts, were passed in the UK in the 19th century. Children younger than nine were not allowed to work, those aged 9–16 could work 16 hours per day per Cotton Mills Act. In 1856, the law permitted child labour past age 9, for 60 hours per week, night or day. In 1901, the permissible child labour age was raised to 12."The Life of the Industrial Worker in Nineteenth-Century England". Laura Del Col, West Virginia University.
thumb|Early 20th century witnessed many home-based enterprises involving child labour. An example is shown above from New York in 1912.
Child labour refers to the employment of children in any work that deprives children of their childhood, interferes with their ability to attend regular school, and that is mentally, physically, socially or morally dangerous and harmful. This practice is considered exploitative by many international organisations. Legislation across the world prohibits child labour. These laws do not consider all work by children as child labour; exceptions include work by child artists, family duties, supervised training, certain categories of work such as those by Amish children, some forms of child work common among indigenous American children, and others.
Child labour has existed to varying extents through most of history. During the 19th and early 20th centuries, many children aged 5–14 from poorer families still worked in Europe, the United States and various colonies of European powers. These children mainly worked in agriculture, home-based assembly operations, factories, mining and in services, for example as newsboys. Some worked night shifts lasting 12 hours. With the rise of household income, availability of schools and passage of child labour laws, the incidence rates of child labour fell.
In developing countries, with high poverty and poor schooling opportunities, child labour is still prevalent. In 2010, sub-Saharan Africa had the highest incidence rates of child labour, with several African nations witnessing over 50 percent of children aged 5–14 working. Worldwide, agriculture is the largest employer of child labour. The vast majority of child labour is found in rural settings and the informal urban economy; children are predominantly employed by their parents, rather than factories. Poverty and lack of schools are considered the primary causes of child labour.
Globally the incidence of child labour decreased from 25% to 10% between 1960 and 2003, according to the World Bank.Norberg, Johan (2007), Världens välfärd (Stockholm: Government Offices of Sweden), p. 58 Nevertheless, the total number of child labourers remains high, with UNICEF and ILO acknowledging that an estimated 168 million children aged 5–17 worldwide were involved in child labour in 2013.
History
thumb|left|220px|Child labourers, Macon, Georgia, 1909
Child labour in preindustrial societies
Child labour forms an intrinsic part of pre-industrial economies.Thompson; Diamond, J., The World Until Yesterday In pre-industrial societies, there is rarely a concept of childhood in the modern sense. Children often begin to actively participate in activities such as child rearing, hunting and farming as soon as they are competent. In many societies, children as young as 13 are seen as adults and engage in the same activities as adults.
The work of children was important in pre-industrial societies, as children needed to provide their labour for their survival and that of their group. Pre-industrial societies were characterised by low productivity and short life expectancy; preventing children from participating in productive work would have been more harmful to their welfare and that of their group in the long run. In pre-industrial societies, there was little need for children to attend school. This is especially the case in non-literate societies. Most pre-industrial skill and knowledge were amenable to being passed down through direct mentoring or apprenticing by competent adults.
The Industrial Revolution
With the onset of the Industrial Revolution in Britain in the late 18th century, there was a rapid increase in the industrial exploitation of labour, including child labour. Industrial cities such as Birmingham, Manchester and Liverpool rapidly grew from small villages into large cities, with improving child mortality rates. These cities drew in the population that was rapidly growing due to increased agricultural output. This process was replicated in other industrialising countries.
thumb|180px|Children going to a 12-hour night shift in the United States (1908).
The Victorian era in particular became notorious for the conditions under which children were employed.Laura Del Col, West Virginia University, The Life of the Industrial Worker in Nineteenth-Century England Children as young as four were employed in production factories and mines working long hours in dangerous, often fatal, working conditions.E. P. Thompson The Making of the English Working Class, (Penguin, 1968), pp. 366–7 In coal mines, children would crawl through tunnels too narrow and low for adults.Jane Humphries, Childhood And Child Labour in the British Industrial Revolution (2010) p 33 Children also worked as errand boys, crossing sweepers, shoe blacks, or selling matches, flowers and other cheap goods. Some children undertook work as apprentices to respectable trades, such as building or as domestic servants (there were over 120,000 domestic servants in London in the mid-18th century). Working hours were long: builders worked 64 hours a week in summer and 52 in winter, while domestic servants worked 80-hour weeks.
Child labour played an important role in the Industrial Revolution from its outset, often brought about by economic hardship. The children of the poor were expected to contribute to their family income.Barbara Daniels, Poverty and Families in the Victorian Era In 19th-century Great Britain, one-third of poor families were without a breadwinner, as a result of death or abandonment, obliging many children to work from a young age. In England and Scotland in 1788, two-thirds of the workers in 143 water-powered cotton mills were described as children."Child Labour and the Division of Labour in the Early English Cotton Mills". A high number of children also worked as prostitutes. The author Charles Dickens worked at the age of 12 in a blacking factory, with his family in debtor's prison.
Child wages were often low; as little as 10–20% of an adult male's wage.Douglas A. Galbi. Centre for History and Economics, King's College, Cambridge CB2 1ST.
Karl Marx was an outspoken opponent of child labour,In The Communist Manifesto, Part II:Proletariats and Communist and Capital, Volume I, Part III saying British industries, "could but live by sucking blood, and children’s blood too," and that U.S. capital was financed by the "capitalized blood of children".
180px|thumb|Children working in home-based assembly operations in United States (1923).
right|thumb|180px|Two girls protesting child labour (by calling it child slavery) in the 1909 New York City Labor Day parade.
Throughout the second half of the 19th century, child labour began to decline in industrialised societies due to regulation and economic factors. The regulation of child labour began from the earliest days of the Industrial revolution. The first act to regulate child labour in Britain was passed in 1803. As early as 1802 and 1819 Factory Acts were passed to regulate the working hours of workhouse children in factories and cotton mills to 12 hours per day. These acts were largely ineffective and after radical agitation, by for example the "Short Time Committees" in 1831, a Royal Commission recommended in 1833 that children aged 11–18 should work a maximum of 12 hours per day, children aged 9–11 a maximum of eight hours, and children under the age of nine were no longer permitted to work. This act however only applied to the textile industry, and further agitation led to another act in 1847 limiting both adults and children to 10-hour working days. Lord Shaftesbury was an outspoken advocate of regulating child labour.
As technology improved and proliferated, there was a greater need for educated employees. This saw an increase in schooling, with the eventual introduction of compulsory schooling. Improved technology and automation also made child labour redundant.
Early 20th century
thumbnail|left|Arthur Rothstein, Child Labor, Cranberry Bog, 1939. Brooklyn Museum
In the early 20th century, thousands of boys were employed in glass making industries. Glass making was a dangerous and tough job especially without the current technologies. The process of making glass required intense heat to melt the glass (3133 °F). When the boys were at work, they were exposed to this heat, which could cause eye trouble, lung ailments, heat exhaustion, cuts, and burns. Since workers were paid by the piece, they had to work productively for hours without a break. Since furnaces had to be constantly burning, there were night shifts from 5:00 pm to 3:00 am. Many factory owners preferred boys under 16 years of age.
An estimated 1.7 million children under the age of fifteen were employed in American industry by 1900.
In 1910, over 2 million children in the same age group were employed in the United States."Photographs of Lewis Hine: Documentation of Child Labour". The U.S. National Archives and Records Administration. This included children who rolled cigarettes,Essay from UMBC on children working in the 1910s as cigarette rollers engaged in factory work, worked as bobbin doffers in textile mills, worked in coal mines and were employed in canneries.Child Labour in the South: Essays and Links to photographs from the Lewis Hines Collection at the University of Maryland, Baltimore County. Lewis Hine's photographs of child labourers in the 1910s powerfully evoked the plight of working children in the American south. Hine took these photographs between 1908 and 1917 as the staff photographer for the National Child Labor Committee.
Household enterprises
Factories and mines were not the only places where child labour was prevalent in the early 20th century. Home-based manufacturing across the United States and Europe employed children as well. Governments and reformers argued that labour in factories must be regulated and that the state had an obligation to provide welfare for the poor. Legislation that followed had the effect of moving work out of factories into urban homes. Families and women, in particular, preferred it because it allowed them to generate income while taking care of household duties.
Home-based manufacturing operations were active year-round. Families willingly deployed their children in these income generating home enterprises. In many cases, men worked from home. In France, over 58 percent of garment workers operated out of their homes; in Germany, the number of full-time home operations nearly doubled between 1882 and 1907; and in the United States, millions of families operated out of home seven days a week, year round to produce garments, shoes, artificial flowers, feathers, match boxes, toys, umbrellas and other products. Children aged 5–14 worked alongside the parents. Home-based operations and child labour in Australia, Britain, Austria and other parts of the world was common. Rural areas similarly saw families deploying their children in agriculture. In 1946, Frieda Miller - then Director of United States Department of Labour - told the International Labour Organisation that these home-based operations offered, "low wages, long hours, child labour, unhealthy and insanitary working conditions."
Percentage of children working in England and Wales
Census year – % boys aged 10–14 as child labour
1881 – 22.9
1891 – 26.0
1901 – 21.9
1911 – 18.3
Note: These are averages; child labour in Lancashire was 80%. Source: Census of England and Wales
21st century
thumb|upright=1.95|Incidence rates for child labour worldwide in 10-14 age group, in 2003, per World Bank data.Table 2.8, WDI 2005, The World Bank The data is incomplete, as many countries do not collect or report child labour data (coloured gray). The colour code is as follows: yellow (<10% of children working), green (10–20%), orange (20–30%), red (30–40%) and black (>40%). Some nations such as Guinea-Bissau, Mali and Ethiopia have more than half of all children aged 5–14 at work to help provide for their families.Percentage of children aged 5–14 engaged in child labour
Child labour is still common in many parts of the world. Estimates for child labour vary. It ranges between 250 and 304 million, if children aged 5–17 involved in any economic activity are counted. If light occasional work is excluded, ILO estimates there were 153 million child labourers aged 5–14 worldwide in 2008. This is about 20 million less than ILO estimate for child labourers in 2004. Some 60 percent of the child labour was involved in agricultural activities such as farming, dairy, fisheries and forestry. Another 25 percent of child labourers were in service activities such as retail, hawking goods, restaurants, load and transfer of goods, storage, picking and recycling trash, polishing shoes, domestic help, and other services. The remaining 15 percent laboured in assembly and manufacturing in informal economy, home-based enterprises, factories, mines, packaging salt, operating machinery, and such operations. Two out of three child workers work alongside their parents, in unpaid family work situations. Some children work as guides for tourists, sometimes combined with bringing in business for shops and restaurants. Child labour predominantly occurs in the rural areas (70%) and informal urban sector (26%).
Contrary to popular beliefs, most child labourers are employed by their parents rather than in manufacturing or the formal economy. Children who work for pay or in-kind compensation are usually found in rural settings rather than urban centres. Less than 3 percent of child labourers aged 5–14 across the world work outside their household, or away from their parents.
Child labour accounts for 22% of the workforce in Asia, 32% in Africa, 17% in Latin America, 1% in the US, Canada, Europe and other wealthy nations.Facts and figures on child labour The proportion of child labourers varies greatly among countries and even regions inside those countries. Africa has the highest percentage of children aged 5–17 employed as child labour, and a total of over 65 million. Asia, with its larger population, has the largest number of children employed as child labour at about 114 million. The Latin America and Caribbean region has lower overall population density, but at 14 million child labourers it has high incidence rates too.
thumb|left|180px|A boy repairing a tire in Gambia.
Accurate present day child labour information is difficult to obtain because of disagreements between data sources as to what constitutes child labour. In some countries, government policy contributes to this difficulty. For example, the overall extent of child labour in China is unclear due to the government categorizing child labour data as “highly secret”. China has enacted regulations to prevent child labour; still, the practice of child labour is reported to be a persistent problem within China, generally in agriculture and low-skill service sectors as well as small workshops and manufacturing enterprises.
In 2014, the U.S. Department of Labor issued a List of Goods Produced by Child Labor or Forced Labor where China was attributed 12 goods the majority of which were produced by both underage children and indentured labourers.List of Goods Produced by Child Labor or Forced Labor The report listed electronics, garments, toys and coal among other goods.
Maplecroft Child Labour Index 2012 survey reports 76 countries pose extreme child labour complicity risks for companies operating worldwide. The ten highest risk countries in 2012, ranked in decreasing order, were: Myanmar, North Korea, Somalia, Sudan, DR Congo, Zimbabwe, Afghanistan, Burundi, Pakistan and Ethiopia. Of the major growth economies, Maplecroft ranked Philippines 25th riskiest, India 27th, China 36th, Viet Nam 37th, Indonesia 46th, and Brazil 54th - all of them rated as posing extreme child labour risks to corporations seeking to invest in the developing world and import products from emerging markets.
Causes of child labour
Primary causes
International Labour Organisation (ILO) suggests poverty is the greatest single cause behind child labour. For impoverished households, income from a child's work is usually crucial for his or her own survival or for that of the household. Income from working children, even if small, may be between 25 and 40% of the household income. Other scholars such as Harsch on African child labour, and Edmonds and Pavcnik on global child labour have reached the same conclusion.Basu, Kaushik and Van, Phan Hoang, 1998. 'The Economics of Child Labour', American Economic Review, 88(3),412–427
Lack of meaningful alternatives, such as affordable schools and quality education, according to ILO, is another major factor driving children to harmful labour. Children work because they have nothing better to do. Many communities, particularly rural areas where between 60–70% of child labour is prevalent, do not possess adequate school facilities. Even when schools are sometimes available, they are too far away, difficult to reach, unaffordable or the quality of education is so poor that parents wonder if going to school is really worth it.
thumb|180px|Young girl working on a loom in Aït Benhaddou, Morocco in May 2008.
Cultural causes
In European history when child labour was common, as well as in contemporary child labour of the modern world, certain cultural beliefs have rationalised child labour and thereby encouraged it. Some view work as good for the character-building and skill development of children. In many cultures, particularly where the informal economy and small household businesses thrive, the cultural tradition is that children follow in their parents' footsteps; child labour then is a means to learn and practise that trade from a very early age. Similarly, in many cultures the education of girls is less valued or girls are simply not expected to need formal schooling, and these girls are pushed into child labour such as providing domestic services. thumb|180px|Agriculture deploys 70% of the world's child labour. Above, child worker on a rice farm in Vietnam.
Macroeconomic causes
Biggeri and Mehrotra have studied the macroeconomic factors that encourage child labour. They focus their study on five Asian nations including India, Pakistan, Indonesia, Thailand and the Philippines. They suggest that child labour is a serious problem in all five, but it is not a new problem. Macroeconomic causes encouraged widespread child labour across the world, over most of human history. They suggest that the causes for child labour include both the demand and the supply side. While poverty and unavailability of good schools explain the child labour supply side, they suggest that the growth of the low-paying informal economy rather than the higher paying formal economy is amongst the causes of the demand side. Other scholars also suggest that an inflexible labour market, the size of the informal economy, the inability of industries to scale up and the lack of modern manufacturing technologies are major macroeconomic factors affecting demand and acceptability of child labour.
Child labour by country
thumb|180px|Child Labour in a quarry, Ecuador.
Colonial empires
Systematic use of child labour was commonplace in the colonies of European powers between 1650 and 1950. In Africa, colonial administrators encouraged traditional kin-ordered modes of production, that is hiring a household for work, not just the adults. Millions of children worked in colonial agricultural plantations, mines and domestic service industries. Sophisticated schemes were promulgated where children in these colonies between the ages of 5 and 14 were hired as apprentices without pay in exchange for learning a craft. A system of Pauper Apprenticeship came into practice in the 19th century where the colonial master needed neither the native parents' nor the child's approval to assign a child to labour, away from parents, at a distant farm owned by a different colonial master. Other schemes included 'earn-and-learn' programs where children would work and thereby learn. Britain for example passed a law, the so-called Masters and Servants Act of 1899, followed by Tax and Pass Law, to encourage child labour in colonies, particularly in Africa. These laws offered the native people legal ownership of some of the native land in exchange for making the labour of wives and children available for the colonial government's needs, such as on farms and as picannins.
Beyond laws, new taxes were imposed on colonies. One of these taxes was the Head Tax in the British and French colonial empires. The tax was imposed on everyone older than 8 years, in some colonies. To pay these taxes and cover living expenses, children in colonial households had to work.
In southeast Asian colonies, such as Hong Kong, child labour such as the Mui Tsai (妹仔), was rationalised as a cultural tradition and ignored by British authorities. The Dutch East India Company officials rationalised their child labour abuses with, "it is a way to save these children from a worse fate." Christian mission schools in regions stretching from Zambia to Nigeria too required work from children, and in exchange provided religious education, not secular education. Elsewhere, the Canadian Dominion Statutes in form of so-called Breaches of Contract Act, stipulated jail terms for uncooperative child workers.
Proposals to regulate child labour began as early as 1786.Steve Charnovitz, "Child Labour: What to do?," Journal of Commerce, 15 August 1996.
Africa
thumb|Child Labour in Africa
Children working at a young age has been a consistent theme throughout Africa. Many children began first working in the home to help their parents run the family farm. Children in Africa today are often forced into exploitative labour due to family debt and other financial factors, leading to ongoing poverty. Other types of domestic child labour include working in commercial plantations, begging, and other sales such as boot shining. In total, there are an estimated five million children currently working in the field of agriculture, a number which steadily increases during the time of harvest. Along with the 30 percent of children who are picking coffee, there are an estimated 25,000 school-age children who work year round.
What industries children work in depends on whether they grew up in a rural area or an urban area. Children who were born in urban areas often found themselves working for street vendors, washing cars, helping in construction sites, weaving clothing, and sometimes even working as exotic dancers. Children who grew up in rural areas would work on farms doing physical labour, working with animals, and selling crops. Of all the child workers, the most serious cases involved street children and trafficked children due to the physical and emotional abuse they endured by their employers. To address the issue of child labour, the United Nations Conventions on the Rights of the Child Act was implemented in 1959. Yet due to poverty, lack of education and ignorance, the legal actions were not, and are not, wholly enforced or accepted in Africa.
Other legal factors that have been implemented to end and reduce child labour include the global response that came into force in 1979 with the declaration of the International Year of the Child. Along with the Human Rights Committee of the United Nations, these two declarations worked on many levels to eliminate child labour. Although many actions have been taken to end this epidemic, child labour in Africa is still an issue today due to the unclear definition of adolescence and how much time is needed for children to engage in activities that are crucial for their development. Another issue that often comes into play is the link between what constitutes child labour within the household due to the cultural acceptance of children helping run the family business. In the end, there is a consistent challenge for the national government to strengthen its grip politically on child labour, and to increase education and awareness on the issue of children working below the legal age limit. With children playing an important role in the African economy, child labour still plays an important role for many in the 20th century.
Australia
From European settlement in 1788, child convicts were occasionally sent to Australia where they were made to work. Child labour was not as excessive in Australia as in Britain. With a low population, agricultural productivity was higher and families did not face starvation as in established industrialised countries. Australia also did not have significant industry until the later part of the 20th century, when child labour laws and compulsory schooling had developed under the influence of Britain. From the 1870s, child labour was restricted by compulsory schooling.
Child labour laws in Australia differ from state to state. Generally, children are allowed to work at any age, but restrictions exist for children under 15 years of age. These restrictions apply to work hours and the type of work that children can perform. In all states, children are obliged to attend school until a minimum leaving age (15 years of age in all states except Tasmania and Queensland).https://www.fairwork.gov.au/find-help-for/young-workers-and-students/what-age-can-i-start-work
Brazil
thumb|180px|Child labour in Brazil, leaving after collecting recyclables from a landfill.
Child labour has been a consistent struggle for children in Brazil ever since the country was colonised on April 22, 1500 by Pedro Álvares Cabral. Work that many children took part in was not always visible, legal, or paid. Free or slave labour was a common occurrence for many youths and was a part of their everyday lives as they grew into adulthood. Yet due to there being no clear definition of how to classify what a child or youth is, there has been little historical documentation of child labour during the colonial period. Due to this lack of documentation, it is hard to determine just how many children were used for what kinds of work before the nineteenth century. The first documentation of child labour in Brazil occurred during the time of indigenous societies and slave labour, where it was found that children were forcibly working on tasks that exceeded their emotional and physical limits. Armando Dias, for example, died in November 1913 whilst still very young, a victim of an electric shock when entering the textile industry where he worked. Boys and girls were victims of industrial accidents on a daily basis.
In Brazil, the minimum working age has been identified as fourteen due to successive constitutional amendments that occurred in 1934, 1937, and 1946. Yet due to a change under the military dictatorship in the 1980s, the minimum age restriction was reduced to twelve, but this was reviewed due to reports of dangerous and hazardous working conditions in 1988. This led to the minimum age being raised once again to 14. Another set of restrictions was passed in 1998 that restricted the kinds of work youth could partake in, such as work that was considered hazardous, like running construction equipment or certain kinds of factory work. Although many steps were taken to reduce the risk and occurrence of child labour, there is still a high number of children and adolescents working under the age of fourteen in Brazil. It was not until the 1980s that it was discovered that almost nine million children in Brazil were working illegally and not partaking in traditional childhood activities that help to develop important life experiences.
Brazilian census data (PNAD, 1999) indicate that 2.55 million 10-14 year-olds were illegally holding jobs. They were joined by 3.7 million 15-17 year-olds and about 375,000 5-9 year-olds. Due to the raised age restriction of 14, at least half of the recorded young workers had been employed illegally, which led to many not being protected by important labour laws. Although substantial time has passed since the time of regulated child labour, there is still a large number of children working illegally in Brazil. Many children are used by drug cartels to sell and carry drugs, guns, and other illegal substances because of their perceived innocence. This type of work that youth are taking part in is very dangerous due to the physical and psychological implications that come with these jobs. Yet despite the hazards that come with working with drug dealers, there has been an increase in this area of employment throughout the country.
England
Many factors played a role in Britain’s long-term economic growth, such as the industrial revolution in the late 1700s and the prominent presence of child labour during the industrial age. Children who worked at an early age were often not forced, but did so because they needed to help their family survive financially. Due to poor employment opportunities for many parents, sending their children to work on farms and in factories was a way to help feed and support the family. Child labour first started to occur in England when household businesses were turned into local labour markets that mass-produced the once homemade goods. Because children often helped produce the goods out of their homes, working in a factory to make those same goods was a simple change for many of these youths. Although there are many accounts of children under the age of ten working for factories, the majority of child workers were between the ages of ten and fourteen. This age range was an important time for many youths as they were first helping to provide for their families, while also transitioning to save for their own future families.
Besides the obligation many children had to help support their families financially, another factor that influenced child labour was the demographic changes that occurred in the eighteenth century. By the end of the eighteenth century, 20 percent of the population was made up of children between the ages of 5 and 14. Due to this substantial shift in available workers, and the development of the industrial revolution, children began to work earlier in life in companies outside of the home. Yet, even though there was an increase of child labour in factories such as cotton textiles, there were consistently large numbers of children working in the field of agriculture and domestic production.
With such a high percentage of children working, rising illiteracy and the lack of a formal education became a widespread issue for many children who worked to provide for their families. Due to this problematic trend, many parents developed a change of opinion when deciding whether or not to send their children to work. Other factors that led to the decline of child labour included financial changes in the economy, changes in the development of technology, raised wages, and continuous regulations on factory legislation.
The first legal steps taken to end the occurrence of child labour were enacted more than fifty years ago. In 1966, the nation adopted the UN General Assembly's International Covenant on Economic, Social and Cultural Rights. This act legally limited the minimum age at which children could start work to 14. But 23 years later, in 1989, the Convention on the Rights of the Child was adopted and helped to reduce the exploitation of children and demanded safe working environments. They all worked towards the goal of ending the most problematic forms of child labour.
India
thumb|Working girl in India.
In 2015, the country of India is home to the largest number of children who are working illegally in various industrial sectors. Agriculture in India is the largest sector where many children work at early ages to help support their family. Many of these children are forced to work at young ages due to many family factors such as unemployment, a large number of family members, poverty, and lack of parental education. This is often the major cause of the high rate of child labour in India.
On 23 June 1757, the English East India Company defeated Siraj-ud-Daula, the Nawab of Bengal, in the Battle of Plassey. The British thus became masters of east India (Bengal, Bihar, Orissa) – a prosperous region with a flourishing agriculture, industry and trade. This led to a large number of children being forced into labour due to the increasing need of cheap labour to produce large numbers of goods. Many multinationals often employed children because they could be recruited for less pay, and had more endurance to utilise in factory environments. Another reason many Indian children were hired was that they lacked knowledge of their basic rights, they did not cause trouble or complain, and they were often more trustworthy. The innocence that comes with childhood was utilised to make a profit by many and was encouraged by the need for family income.
A variety of Indian social scientists as well as non-governmental organisations (NGOs) have done extensive research on the numeric figures of child labour found in India and determined that India contributes to one-third of Asia’s child labour and one-fourth of the world's child labour. Due to a large number of children being illegally employed, the Indian government began to take extensive actions to reduce the number of children working, and to focus on the importance of facilitating the proper growth and development of children. thumb|right|180px|An eight-year-old boy making his livelihood by showing a playful monkey in a running train in India in 2011.International influences helped to encourage legal actions to be taken in India, such as the Geneva Declaration of the Rights of the Child, which was passed in 1924. This act was followed by the Universal Declaration of Human Rights in 1948, which incorporated the basic human rights and needs of children for proper progression and growth in their younger years. These international acts encouraged major changes to the workforce in India, which occurred in 1986 when the Child Labour (Prohibition and Regulation) Act was put into place. This act prohibited hiring children younger than the age of 14, and from working in hazardous conditions.
Due to the increase of regulations and legal restrictions on child labour, there has been a 64 percent decline in child labour from 1993 to 2005. Although this is a great decrease in the country of India, there are still high numbers of children working in the rural areas of India. With 85 percent of the child labour occurring in rural areas, and 15 percent occurring in urban areas, there are still substantial areas of concern in the country of India.
India has had legislation since 1986 which allows work by children in non-hazardous industries. In 2013, the Punjab and Haryana High Court gave a landmark order directing that there shall be a total ban on the employment of children up to the age of 14 years, be it hazardous or non-hazardous industries. However, the Court ruled that a child can work with his or her family in family-based trades/occupations, for the purpose of learning a new trade/craftsmanship or vocation.
Ireland
In post-colonial Ireland, the rate of child exploitation was extremely high, as children were used as farm labourers once they were able to walk; these children were never paid for the labour that they carried out on the family farm. Children were wanted and desired in Ireland for the use of their labour on the family farm. Irish parents felt that it was the children's duty to carry out chores on the family farm.Curtin, Chris and Anthony Varley. 1984. Children and childhood in rural Ireland: a consideration of the ethnographic literature. Culture and ideology in Ireland.
Soviet Union and Russia
Although formally banned since 1922, child labour was widespread in the Soviet Union, mostly in the form of mandatory, unpaid work by schoolchildren on Saturdays and holidays. The students were used as a cheap, unqualified workforce on kolhoz (collective farms) as well as in industry and forestry. The practice was formally called "work education".Svetlana Stephenson, "Child Labour in the Russian Federation", 2002, University of North London
From the 1950s on, the students were also used for unpaid work at schools, where they cleaned and performed repairs.Евгений Жирнов, Среднее и высшее самообслуживание, 2007, Kommersant This practice has continued in the Russian Federation, where up to 21 days of the summer holidays is sometimes set aside for school work. By law, this is only allowed as part of specialised occupational training and with the students' and parents' permission, but those provisions are widely ignored.Школьников заставляют работать в школьном участке в каникулах. 21 дней. Не платят. Нарушаются ли права? In 2012 there was an accident near the city of Nalchik where a car killed several pupils cleaning up a highway shoulder during their "holiday work", as well as their teacher who was supervising them.
Of the former Soviet republics, Uzbekistan continued and expanded the programme of child labour on an industrial scale to increase profits from the main source of Islam Karimov's income, cotton harvesting. In September, when school normally starts, the classes are suspended and children are sent to cotton fields for work, where they are assigned daily quotas of 20 to 60 kg of raw cotton they have to collect. This process is repeated in spring, when the cotton crop needs to be hoed and weeded. In 2006 it was estimated that 2.7 million children were forced to work this way.
Switzerland
As in many other countries, child labour in Switzerland affected the so-called Kaminfegerkinder ("chimney sweep children") and children working, for example, in spinning mills, factories and agriculture in the 19th century. Until the 1960s, the so-called Verdingkinder (literally: "contract children" or "indentured child labourers") were children who were taken from their parents, often due to poverty or for moral reasons – usually because their mothers were unmarried, very poor citizens, or of Gypsy–Yeniche origin, the so-called Kinder der Landstrasse, etc. – and sent to live with new families, often poor farmers who needed cheap labour.
There were even Verdingkinder auctions where children were handed over to the farmer asking the least amount of money from the authorities, thus securing cheap labour for his farm and relieving the authority of the financial burden of looking after the children. In the 1930s, 20% of all agricultural labourers in the Canton of Bern were children below the age of 15. Municipal guardianship authorities acted in this way, commonly tolerated by federal authorities, until the 1960s – not all of them, of course, but usually communities in low-tax areas of some Swiss cantons. The Swiss historian Marco Leuenberger found that in 1930 there were some 35,000 indentured children, and that between 1920 and 1970 more than 100,000 are believed to have been placed with families or homes. 10,000 Verdingkinder are still alive. Therefore, the so-called Wiedergutmachungsinitiative was started in April 2014. In April 2014 the collection of at least 100,000 authenticated signatures of Swiss citizens began, and they still had to be collected by October 2015.
Child labour laws and initiatives
Almost every country in the world has laws relating to and aimed at preventing child labour. The International Labour Organisation has helped set international law, which most countries have signed and ratified.
According to ILO minimum age convention (C138) of 1973, child labour refers to any work performed by children under the age of 12, non-light work done by children aged 12–14, and hazardous work done by children aged 15–17. Light work was defined, under this Convention, as any work that does not harm a child's health and development, and that does not interfere with his or her attendance at school. This convention has been ratified by 135 countries.
The United Nations adopted the Convention on the Rights of the Child in 1990, which was subsequently ratified by 193 countries.United Nations Treaty Collection. Convention on the Rights of the Child. Retrieved 21 May 2009. Article 32 of the convention addressed child labour, as follows:...Parties recognise the right of the child to be protected from economic exploitation and from performing any work that is likely to be hazardous or to interfere with the child's education, or to be harmful to the child's health or physical, mental, spiritual, moral or social development.
Under Article 1 of the 1990 Convention, a child is defined as "... every human being below the age of eighteen years unless, under the law applicable to the child, a majority is attained earlier." Article 28 of this Convention requires States to, "make primary education compulsory and available free to all."
195 countries are party to the Convention; only two nations have not ratified the treaty, Somalia and the United States.United Nations Treaty Collection. Convention on the Rights of the Child. Retrieved 21 May 2009.
In 1999, ILO helped lead the Worst Forms Convention 182 (C182), which has so far been signed upon and domestically ratified by 151 countries including the United States. This international law prohibits worst forms of child labour, defined as all forms of slavery and slavery-like practices, such as child trafficking, debt bondage, and forced labour, including forced recruitment of children into armed conflict. The law also prohibits the use of a child for prostitution or the production of pornography, child labour in illicit activities such as drug production and trafficking; and in hazardous work. Both the Worst Forms Convention (C182) and the Minimum Age Convention (C138) are examples of international labour standards implemented through the ILO that deal with child labour.
thumb|180px|left|The United States has passed a law that allows Amish children older than 14 to work in traditional wood enterprises with proper supervision.
In addition to setting the international law, the United Nations initiated the International Program on the Elimination of Child Labour (IPEC) in 1992. This initiative aims to progressively eliminate child labour through strengthening national capacities to address some of the causes of child labour. Amongst the key initiatives is the so-called time-bound programme for countries where child labour is most prevalent and schooling opportunities are lacking. The initiative seeks to achieve, amongst other things, universal primary school availability. The IPEC has expanded to at least the following target countries: Bangladesh, Brazil, China, Egypt, India, Indonesia, Mexico, Nigeria, Pakistan, Democratic Republic of Congo, El Salvador, Nepal, Tanzania, Dominican Republic, Costa Rica, Philippines, Senegal, South Africa and Turkey.
Targeted child labour campaigns were initiated by the International Programme on the Elimination of Child Labour (IPEC) in order to advocate for prevention and elimination of all forms of child labour. The global Music against Child Labour Initiative was launched in 2013 in order to involve socially excluded children in structured musical activity and education in efforts to help protect them from child labour.
Exceptions granted
In 2004, the United States passed an amendment to the Fair Labour Standards Act of 1938. The amendment allows certain children aged 14–18 to work in or outside a business where machinery is used to process wood. The law aims to respect the religious and cultural needs of the Amish community of the United States. The Amish believe that one effective way to educate children is on the job. The new law allows Amish children the ability to work with their families, once they have passed the eighth grade in school.
Similarly, in 1996, member countries of the European Union, per Directive 94/33/EC, agreed to a number of exceptions for young people in its child labour laws. Under these rules, children of various ages may work in cultural, artistic, sporting or advertising activities if authorised by the competent authority. Children above the age of 13 may perform light work for a limited number of hours per week in other economic activities as defined at the discretion of each country. Additionally, the European law exception allows children aged 14 years or over to work as part of a work/training scheme. The EU Directive clarified that these exceptions do not allow child labour where the children may experience harmful exposure to dangerous substances. Nonetheless, many children under the age of 13 do work, even in the most developed countries of the EU. For instance, a recent study showed over a third of Dutch twelve-year-old kids had a job, the most common being babysitting.Eenderde van de 12-jarigen heeft bijbaan (RTL News, 14 February 2012)
More laws vs. more freedom
Scholars disagree on the best legal course forward to address child labour. Some suggest the need for laws that place a blanket ban on any work by children less than 18 years old. Others suggest the current international laws are enough, and that a more engaging approach is needed to achieve the ultimate goals.
Some scholars suggest any labour by children aged 18 years or less is wrong since this encourages illiteracy, inhumane work and lower investment in human capital. Child labour, claim these activists, also leads to poor labour standards for adults, depresses the wages of adults in developing countries as well as the developed countries, and dooms the third world economies to low-skill jobs only capable of producing poor quality cheap exports. The more children work in poor countries, the fewer and worse-paid are the jobs for adults in these countries. In other words, there are moral and economic reasons that justify a blanket ban on labour from children aged 18 years or less, everywhere in the world.
thumb|180px|Child labour in Bangladesh.
Other scholars suggest that these arguments are flawed, ignore history, and that more laws will do more harm than good. According to them, child labour is merely the symptom of a greater disease named poverty. If laws ban all lawful work that enables the poor to survive, informal economy, illicit operations and underground businesses will thrive. These will increase abuse of the children. In poor countries with very high incidence rates of child labour - such as Ethiopia, Chad, Niger and Nepal - schools are not available, and the few schools that exist offer poor quality education or are unaffordable. The alternatives for children who currently work, claim these studies, are worse: grinding subsistence farming, militia or prostitution. Child labour is not a choice, it is a necessity, the only option for survival. It is currently the least undesirable of a set of very bad choices.
thumb|180px|Nepali girls working in brick factory.
These scholars suggest, from their studies of economic and social data, that early 20th-century child labour in Europe and the United States ended in large part as a result of the economic development of the formal regulated economy, technology development and general prosperity. Child labour laws and ILO conventions came later. Edmonds suggests, even in contemporary times, the incidence of child labour in Vietnam has rapidly reduced following economic reforms and GDP growth. These scholars suggest economic engagement, emphasis on opening quality schools rather than more laws and expanding economically relevant skill development opportunities in the third world. International legal actions, such as trade sanctions increase child labour.
"The Incredible Bread Machine" a book published by "World Research, Inc." in 1974 stated:
Child labour incidents
Cocoa production
In 1998, UNICEF reported that Ivory Coast farmers used enslaved children – many from surrounding countries. In late 2000 a BBC documentary reported the use of enslaved children in the production of cocoa—the main ingredient in chocolate— in West Africa. Other media followed by reporting widespread child slavery and child trafficking in the production of cocoa. In 2001, the US State Department estimated there were 15,000 child slaves on cocoa, cotton and coffee farms in the Ivory Coast, and the Chocolate Manufacturers Association acknowledged that child slavery is used in the cocoa harvest.
Malian migrants have long worked on cocoa farms in the Ivory Coast, but in 2000 cocoa prices had dropped to a 10-year low and some farmers stopped paying their employees. The Malian counsel had to rescue some boys who had not been paid for five years and who were beaten if they tried to run away. Malian officials believed that 15,000 children, some as young as 11 years old, were working in the Ivory Coast in 2001. These children were often from poor families or the slums and were sold to work in other countries. Parents were told the children would find work and send money home, but once the children left home, they often worked in conditions resembling slavery. In other cases, children begging for food were lured from bus stations and sold as slaves. In 2002, the Ivory Coast had 12,000 children with no relatives nearby, which suggested they were trafficked, likely from neighboring Mali, Burkina Faso and Togo.
The cocoa industry was accused of profiting from child slavery and trafficking. The European Cocoa Association dismissed these accusations as "false and excessive" and the industry said the reports were not representative of all areas. Later the industry acknowledged the working conditions for children were unsatisfactory and children's rights were sometimes violated and acknowledged the claims could not be ignored. In a BBC interview, the ambassador for Ivory Coast to the United Kingdom called these reports of widespread use of slave child labour by 700,000 cocoa farmers as absurd and inaccurate.
In 2001, a voluntary agreement called the Harkin-Engel Protocol, was accepted by the international cocoa and chocolate industry to eliminate the worst forms of child labour, as defined by ILO's Convention 182, in West Africa. This agreement created a foundation named International Cocoa Initiative in 2002. The foundation claims it has, as of 2011, active programs in 290 cocoa growing communities in Côte d'Ivoire and Ghana, reaching a total population of 689,000 people to help eliminate the worst forms of child labour in cocoa industry. Other organisations claim progress has been made, but the protocol's 2005 deadlines have not yet been met.
Mining in Africa
thumb|200px|Children engaged in diamond mining in Sierra Leone.
In 2008, Bloomberg claimed child labour was used in copper and cobalt mines that supplied Chinese companies in Congo. The children are creuseurs, that is they dig the ore by hand, carry sacks of ore on their backs, and these are then purchased by these companies. Over 60 of Katanga's 75 processing plants are owned by Chinese companies and 90 percent of the region's minerals go to China. An African NGO report claimed 80,000 child labourers under the age of 15, or about 40% of all miners, were supplying ore to Chinese companies in this African region.
Amnesty International alleged in 2016 that some cobalt sold by Congo Dongfang Mining was produced by child labour, and that it was being used in lithium-ion batteries powering electric cars and mobile devices worldwide.https://www.amnesty.org/en/latest/news/2016/01/child-labour-behind-smart-phone-and-electric-car-batteries/
The BBC, in 2012, accused Glencore of using child labour in its mining and smelting operations in Africa. Glencore denied using child labour, saying it had a strict policy against it, under which all copper was mined correctly, placed in bags with numbered seals and then sent to the smelter. Glencore said it was aware of child miners who were part of a group of artisanal miners who had, since 2010, raided the concession awarded to the company without authorisation; Glencore had been pleading with the government to remove the artisanal miners from the concession.
Small-scale artisanal mining of gold is another source of dangerous child labour in poor rural areas in certain parts of the world. This form of mining uses labour-intensive and low-tech methods and belongs to the informal sector of the economy. The Human Rights Watch group estimates that about 12 percent of global gold production comes from artisanal mines. In West Africa, in countries such as Mali, the third largest exporter of gold in Africa, between 20,000 and 40,000 children work in artisanal mining. In this form of mining, locally known as orpaillage, children as young as 6 years old work with their families. These children and families suffer chronic exposure to toxic chemicals including mercury, and do hazardous work such as digging shafts, working underground, and pulling up, carrying and crushing the ore. The poor work practices harm the long-term health of the children and release hundreds of tons of mercury every year into local rivers, groundwater and lakes. Gold is important to the economies of Mali and Ghana. For Mali, it is the second largest earner of export revenue. For many poor families with children, it is the primary and sometimes the only source of income.
Meatpacking
In early August 2008, Iowa Labour Commissioner David Neil announced that his department had found that Agriprocessors, a kosher meatpacking company in Postville which had recently been raided by Immigration and Customs Enforcement, had employed 57 minors, some as young as 14, in violation of state law prohibiting anyone under 18 from working in a meatpacking plant. Neil announced that he was turning the case over to the state Attorney General for prosecution, claiming that his department's inquiry had discovered "egregious violations of virtually every aspect of Iowa's child labour laws."Inquiry Finds Under-Age Workers at Meat Plant. The New York Times. Agriprocessors claimed that it was at a loss to understand the allegations. Agriprocessors' CEO went to trial on these charges in state court on 4 May 2010. After a five-week trial he was found not guilty of all 57 charges of child labour violations by the Black Hawk County District Court jury in Waterloo, Iowa, on 7 June 2010.Julia Preston (7 June 2010). "Former Manager of Iowa Slaughterhouse Is Acquitted of Labour Charges". The New York Times. Retrieved 29 November 2010.
GAP
thumbnail|Working child in Ooty, India
A 2007 report claimed some GAP products had been produced by child labourers. GAP acknowledged the problem and announced it was pulling the products from its shelves.Child sweatshop shame threatens Gap's ethical image The report found Gap had operated rigorous social audit systems since 2004 to eliminate child labour in its supply chain. However, the report concluded that the system was being abused by unscrupulous subcontractors.
GAP's policy, the report claimed, is that if it discovers child labour was used by a supplier on its branded clothes, the contractor must remove the child from the workplace, provide the child with access to schooling and a wage, and guarantee the opportunity of work on reaching legal working age.
In 2007, The New York Times reported that GAP, after the child labour discovery, created a $200,000 grant to improve working conditions in the supplier community.Gap moves to recover from child labour scandal
H&M and Zara
In December 2009, campaigners in the UK called on two leading high street retailers to stop selling clothes made with cotton which may have been picked by children. Anti-Slavery International and the Environmental Justice Foundation (EJF) accused H&M and Zara of using cotton suppliers in Bangladesh. It is also suspected that many of their raw materials originate from Uzbekistan, where children aged 10 are forced to work in the fields. The activists called for a ban on the use of Uzbek cotton and the implementation of a "track and trace" system to guarantee an ethically responsible source of the material.
H&M said it "does not accept" child labour and "seeks to avoid" using Uzbek cotton, but admitted it did "not have any reliable methods" to ensure Uzbek cotton did not end up in any of its products. Inditex, the owner of Zara, said its code of conduct banned child labour."Stores urged to stop using child labour cotton"
Silk weaving
A 2003 Human Rights Watch report claimed children as young as five years old were employed and worked for up to 12 hours a day, six to seven days a week, in the silk industry. These children, HRW claimed, were bonded child labourers in India, easy to find in Karnataka, Uttar Pradesh and Tamil Nadu.
In 2010, a German investigative news report claimed that, in the silk weaving industry, non-governmental organisations (NGOs) had found up to 10,000 children working in the 1,000 silk factories in 1998. In other places, thousands of bonded child labourers were present in 1994. After UNICEF and NGOs became involved, the child labour figure dropped drastically after 2005, with the total estimated to be fewer than a thousand child labourers. The released children were back in school, the report claims.
Primark
In 2008, the BBC reportedBBC News that the company Primark was using child labour in the manufacture of clothing. In particular, a £4 hand-embroidered shirt was the starting point of a documentary produced by BBC's Panorama programme. The programme asks consumers to ask themselves, "Why am I only paying £4 for a hand embroidered top? This item looks handmade. Who made it for such little cost?", in addition to exposing the violent side of the child labour industry in countries where child exploitation is prevalent.
As a result of the BBC report, the Royal Television Society awarded it a prize, and Primark took immediate action, firing three Indian suppliers in 2008.Primark fires child worker firms
Primark continued to investigate the allegations for three years,Primark's Investigation findings of BBC's fake reporting on child labour, 2011 concluding that the BBC report was a fake. In 2011, following an investigation by the BBC Trust's Editorial Standards Committee, the BBC announced, "Having carefully scrutinised all of the relevant evidence, the committee concluded that, on the balance of probabilities, it was more likely than not that the Bangalore footage was not authentic." The BBC subsequently apologised for faking footage, and returned the television award for investigative reporting.Channel 4 - BBC's apology over child labour footageTelegraph - BBC to apologise over 'faked footage' in Panorama report about PrimarkBBC hands back RTS award for Panorama programme on Primark
Eliminating child labour
thumb|240px|Child labour in a coal mine, United States, c. 1912. Photograph by Lewis Hine.
thumb|240px|Different forms of child labour in Central America, 1999.
Concerns have often been raised over the buying public's moral complicity in purchasing products assembled or otherwise manufactured in developing countries with child labour. However, others have raised concerns that boycotting products manufactured through child labour may force these children to turn to more dangerous or strenuous professions, such as prostitution or agriculture. For example, a UNICEF study found that after the Child Labour Deterrence Act was introduced in the US, an estimated 50,000 children were dismissed from their garment industry jobs in Bangladesh, leaving many to resort to jobs such as "stone-crushing, street hustling, and prostitution", jobs that are "more hazardous and exploitative than garment production". The study suggests that boycotts are "blunt instruments with long-term consequences, that can actually harm rather than help the children involved."
According to Milton Friedman, before the Industrial Revolution virtually all children worked in agriculture."Some authors such as conservative Nobel economist Milton Friedman claim that child labour actually decreased during the industrial revolution. He argues that before the industrial revolution almost all children were working in agriculture(...)" During the Industrial Revolution many of these children moved from farm work to factory work. Over time, as real wages rose, parents became able to afford to send their children to school instead of work and as a result child labour declined, both before and after legislation.Hugh Cunningham, "The Employment and Unemployment of Children in England c.1680–1851." Past and Present. Feb. 1990. Austrian School economist Murray Rothbard said that British and American children of the pre- and post-Industrial Revolution lived and suffered in infinitely worse conditions where jobs were not available for them and went "voluntarily and gladly" to work in factories.Murray Rothbard, Down With Primitivism: A Thorough Critique of Polanyi Ludwig Von Mises Institute, reprint of June 1961 article.
British historian and socialist E. P. Thompson in The Making of the English Working Class draws a qualitative distinction between child domestic work and participation in the wider (waged) labour market. Further, the usefulness of the experience of the industrial revolution in making predictions about current trends has been disputed. Social historian Hugh Cunningham, author of Children and Childhood in Western Society Since 1500, notes that:
"Fifty years ago it might have been assumed that, just as child labour had declined in the developed world in the late nineteenth and early twentieth centuries, so it would also, in a trickle-down fashion, in the rest of the world. Its failure to do that, and its re-emergence in the developed world, raise questions about its role in any economy, whether national or global."Hugh Cunningham, "The decline of child labour: labour markets and family economies in Europe and North America since 1830", Economic History Review, 2000.
According to Thomas DeGregori, an economics professor at the University of Houston, in an article published by the Cato Institute, a libertarian think-tank operating in Washington D.C., "it is clear that technological and economic change are vital ingredients in getting children out of the workplace and into schools. Then they can grow to become productive adults and live longer, healthier lives. However, in poor countries like Bangladesh, working children are essential for survival in many families, as they were in our own heritage until the late 19th century. So, while the struggle to end child labour is necessary, getting there often requires taking different routes—and, sadly, there are many political obstacles."DeGregori, Thomas R., "Child Labour or Child Prostitution?" Cato Institute.
The International Programme on the Elimination of Child Labour (IPEC), founded in 1992, aims to eliminate child labour. It operates in 88 countries and is the largest program of its kind in the world.IPEC IPEC works with international and government agencies, NGOs, the media, and children and their families to end child labour and provide children with education and assistance.
From 2008 to 2013, the ILO operated a program through the International Programme on the Elimination of Child Labour (IPEC) titled "Combating Abusive Child Labour (CACL-II)". The project, funded by the European Union, assisted the Government of Pakistan by providing alternative opportunities for vocational training and education to children withdrawn from the worst forms of child labour.
Statistics
Number of children involved in ILO categories of work, by age and gender in 2002
Group | All Children (2002) | Economically Active Children | Economically Active Children (%) | Child Labour | Child Labour (%) | Children In Hazardous Work | Children In Hazardous Work (%)
Ages 5–11 | 838,800,000 | 109,700,000 | 13.1 | 109,700,000 | 13.1 | 60,500,000 | 7.2
Ages 12–14 | 360,600,000 | 101,100,000 | 28.0 | 76,000,000 | 21.1 | 50,800,000 | 14.1
Ages 5–14 | 1,199,400,000 | 210,800,000 | 17.6 | 186,300,000 | 15.5 | 111,300,000 | 9.3
Ages 15–17 | 332,100,000 | 140,900,000 | 42.4 | 59,200,000 | 17.8 | 59,200,000 | 17.8
Boys | 786,600,000 | 184,100,000 | 23.4 | 132,200,000 | 16.8 | 95,700,000 | 12.2
Girls | 744,900,000 | 167,600,000 | 22.5 | 113,300,000 | 15.2 | 74,800,000 | 10.5
Total | 1,531,500,000 | 351,700,000 | 23.0 | 245,500,000 | 16.0 | 170,500,000 | 11.1
Source: ILO (2002a), "Every child counts: new global estimates on child labour", Geneva: International Labour Office.
Potential positives of children working
The term child labour can be misleading when it confuses harmful work with employment that may be beneficial to children. It can also ignore harmful work outside employment and any benefits children normally derive from their work.For examples see Bourdillon et al pp 1-6, 180-194. Pp 195-200 offer an alternative and more effective approach to protecting working children. Domestic work is an example: all families but the rich must work at cleaning, cooking, caring, and more to maintain their homes. In most families in the world, this process extends to productive activities, especially herding and various types of agriculture, and to a variety of small family businesses. Where trading is a significant feature of social life, children can start trading in small items at an early age, often in the company of family members or of peers.
Work is undertaken from an early age by vast numbers of children in the world and may have a natural place in growing up.
Work can contribute to the well-being of children in a variety of ways; children often choose to work to improve their lives, both in the short- and long-term. At the material level, children’s work often contributes to producing food or earning income that benefits themselves and their families; and such income is especially important when the families are poor. Work can provide an escape from debilitating poverty, sometimes by allowing a young person to move away from an impoverished environment. Young people often enjoy their work, especially paid work, or when work involves the company of peers. Even when work is intensive and enforced, children often find ways to combine their work with play.
While full-time work hinders schooling, empirical evidence is varied on the relationship between part-time work and school. Sometimes even part-time work may hinder school attendance or performance. On the other hand, many poor children work for resources to attend school. Children who are not doing well at school sometimes seek a more satisfactory experience in work. Good relations with a supervisor at work can provide relief from tensions that children feel at school and home. In the modern world, school education has become so central to society that schoolwork has become the dominant work for most children, often replacing participation in productive work. If school curricula or quality do not provide children with appropriate skills for available jobs or if children do not have the aptitude for schoolwork, school may impede the learning of skills, such as agriculture, which will become necessary for future livelihood.
See also
thumb|upright|Lewis Hine used photography to help bring attention to child labour in America. He created this poster in 1914 with an appeal about child labour.
Child abuse
Child labour in Africa
Child labour in Bangladesh
Child labour in India
Child migration
Child prostitution
Child slavery
Child soldiers
Child work in indigenous American cultures
Children in cocoa production
Children's rights movement
Concerned for Working Children
Guaranteed minimum income
History of childhood
International Programme on the Elimination of Child Labour, IPEC
International Research on Working Children
Kinder der Landstrasse, Switzerland
Labour law
Unfree labour
Human trafficking
Debt bondage
Trafficking of children
Rochdale sex trafficking gang
Exploitation
Sweatshop
Legal working age
London matchgirls strike of 1888
Newsboys strike of 1899
Street children
International conventions and other instruments:
ILO Forced Labour Convention, 1930 (No. 29)
ILO Abolition of Forced Labour Convention, 1957 (No. 105)
ILO Minimum Age Convention, 1973 (No. 138)
ILO Worst Forms of Child Labour Convention, 1999 (No. 182)
Notes
References
ILO Minimum Estimate of Forced Labour in the World. (2005)
The Cost of Coercion ILO 2009
International Labour Office. (2005). A global alliance against forced labour
Operational Indicators of Trafficking in Human Beings 2009 ILO/SAP-FL
Lists of indicators of Trafficking in Human Beings 2009 ILO/SAP-FL
Eradication of forced labour - General Survey concerning the Forced Labour Convention, 1930 (No. 29), and the Abolition of Forced Labour Convention, 1957 (No. 105) - ILO 2007
Forced Labour: Definition, Indicators and Measurement 2004 - ILO
Stopping Forced Labour 2001 - ILO
Further reading
Baland, Jean-Marie and James A. Robinson (2000) 'Is child labour inefficient?' Journal of Political Economy 108, 663–679
Basu, Kaushik, and Homa Zarghamee (2009) 'Is product boycott a good idea for controlling child labour? A theoretical investigation' Journal of Development Economics 88, 217–220
Bhukuth, Augendra. "Defining child labour: a controversial debate" Development in Practice (2008) 18, 385–394
Emerson, Patrick M., and André Portela Souza. "Is Child Labour Harmful? The Impact of Working Earlier in Life on Adult Earnings" Economic Development and Cultural Change 59:345–385, January 2011 uses data from Brazil to show very strong negative effects—boys who work before age 14 earn much less as adults
Humbert, Franziska. The Challenge of Child Labour in International Law (2009)
Humphries, Jane. Childhood and Child Labour in the British Industrial Revolution (2010)
ILO, Investing in every child: An economic Study of the Costs and Benefits of Eliminating Child Labour
Mayer, Gerald. Child Labor in America: History, Policy, and Legislative Issues. Washington, D.C.: Congressional Research Service, 2013.
Ravallion, Martin, and Quentin Wodon (2000) 'Does child labour displace schooling? Evidence on behavioural responses to an enrollment subsidy' Economic Journal 110, C158-C175
History
"Child Employing Industries," Annals of the American Academy of Political and Social Science Vol. 35, Mar. 1910 in JSTOR, articles by experts in 1910
Goldberg, Ellis. Trade, Reputation, and Child Labour in Twentieth-Century Egypt (2004) excerpt and text search
Grier, Beverly. Invisible Hands: Child Labour and the State in Colonial Zimbabwe (2005)
Hindman, Hugh D. Child Labour: An American History (2002)
Humphries, Jane. Childhood and Child Labour in the British Industrial Revolution (Cambridge Studies in Economic History) (2011) excerpt and text search
Kirby, Peter. Child Labour in Britain, 1750-1870 (2003) excerpt and text search
McIntosh, Robert. Boys in the pits: Child labour in coal mines (McGill-Queen's Press-MQUP, 2000), Canadian mines
Meerkerk, Elise van Naderveen; Schmidt, Ariadne. "Between Wage Labor and Vocation: Child Labor in Dutch Urban Industry, 1600-1800," Journal of Social History (2008) 41#3 pp 717–736 in Project MUSE
Mofford, Juliet. Child Labour in America (1970)
Tuttle, Carolyn. Hard At Work In Factories And Mines: The Economics Of Child Labour During The British Industrial Revolution (1999)
External links
Combating Child Labor — Bureau of International Labor Affairs, U.S. Department of Labor
A UNICEF web resource with tables of % children who work for a living, by country and gender
Rare child labour photos from the U.S. Library of Congress
History Place Photographs from 1908–1912
International Research on Child Labour
International Program on the Elimination of Child Labour International Labour Organisation (UN)
World Day Against Child Labour 12 June
Concerned for Working Children An India-based non-profit organisation working towards elimination of child labour
The OneWorld guide to child labour
The State of the World's Children – a UNICEF study
"United States Child Labour, 1908–1920: As Seen Through the Lens of Sociologist and Photographer Lewis W. Hine" (video)
Child Labour in Chile, 1880–1950 download complete text, in Spanish
12 to 12 community portal ILO sponsored website on the elimination of child labour
The ILO Special Action Programme to combat Forced Labour (SAP-FL)
Category:Childhood
Category:History of youth
Category:Children's rights
Category:Labor rights
Category:Human trafficking
Category:Ethically disputed working conditions | 101,942 | 2017-01 |
Buckingham Palace | thumb|300px|Buckingham Palace. This is the principal façade, the East Front; originally constructed by Edward Blore and completed in 1850. It acquired its present appearance following a remodelling, in 1913, by Sir Aston Webb.
thumb|upright|Queen Victoria, the first monarch to reside at Buckingham Palace, moved into the newly completed palace in 1837.
Buckingham Palace is the London residence and administrative headquarters of the reigning monarch of the United Kingdom.By tradition, the British Royal Court is officially resident at St James's Palace, which means that, while foreign ambassadors assuming their new position are received by the British sovereign at Buckingham Palace, they are accredited to the "Court of St James's Palace". This anomaly continues for the sake of tradition, as Buckingham Palace is to all intents and purposes the official residence. See History of St James's Palace (Official website of the British Monarchy). Located in the City of Westminster, the palace is often at the centre of state occasions and royal hospitality. It has been a focal point for the British people at times of national rejoicing and mourning.
Originally known as Buckingham House, the building at the core of today's palace was a large townhouse built for the Duke of Buckingham in 1703 on a site that had been in private ownership for at least 150 years. It was acquired by King George III in 1761Robinson, p. 14. as a private residence for Queen Charlotte and became known as The Queen's House. During the 19th century it was enlarged, principally by architects John Nash and Edward Blore, who constructed three wings around a central courtyard. Buckingham Palace became the London residence of the British monarch on the accession of Queen Victoria in 1837.
The last major structural additions were made in the late 19th and early 20th centuries, including the East front, which contains the well-known balcony on which the royal family traditionally congregates to greet crowds. The palace chapel was destroyed by a German bomb during World War II; the Queen's Gallery was built on the site and opened to the public in 1962 to exhibit works of art from the Royal Collection.
The original early 19th-century interior designs, many of which survive, include widespread use of brightly coloured scagliola and blue and pink lapis, on the advice of Sir Charles Long. King Edward VII oversaw a partial redecoration in a Belle Époque cream and gold colour scheme. Many smaller reception rooms are furnished in the Chinese regency style with furniture and fittings brought from the Royal Pavilion at Brighton and from Carlton House. The palace has 775 rooms, and the garden is the largest private garden in London. The state rooms, used for official and state entertaining, are open to the public each year for most of August and September, and on some days in winter and spring.
History
Site
thumb|Buckingham House, c. 1710, was designed by William Winde for the 1st Duke of Buckingham and Normanby. This façade evolved into today's Grand Entrance on the west (inner) side of the quadrangle, with the Green Drawing Room above.
In the Middle Ages, the site of the future palace formed part of the Manor of Ebury (also called Eia). The marshy ground was watered by the river Tyburn, which still flows below the courtyard and south wing of the palace.Goring, p. 15. Where the river was fordable (at Cow Ford), the village of Eye Cross grew. Ownership of the site changed hands many times; owners included Edward the Confessor and his queen consort Edith of Wessex in late Saxon times, and, after the Norman Conquest, William the Conqueror. William gave the site to Geoffrey de Mandeville, who bequeathed it to the monks of Westminster Abbey.The topography of the site and its ownership are dealt with in Wright, chapters 1–4
In 1531, King Henry VIII acquired the Hospital of St James (later St James's Palace)Goring, p. 28. from Eton College, and in 1536 he took the Manor of Ebury from Westminster Abbey.Goring, p. 18. These transfers brought the site of Buckingham Palace back into royal hands for the first time since William the Conqueror had given it away almost 500 years earlier.
Various owners leased it from royal landlords and the freehold was the subject of frenzied speculation during the 17th century. By then, the old village of Eye Cross had long since fallen into decay, and the area was mostly wasteland.Wright, pp. 76–78. Needing money, James I sold off part of the Crown freehold but retained part of the site on which he established a mulberry garden for the production of silk. (This is at the northwest corner of today's palace.)Goring, pp. 31, 36. Clement Walker in Anarchia Anglicana (1649) refers to "new-erected sodoms and spintries at the Mulberry Garden at S. James's"; this suggests it may have been a place of debauchery. Eventually, in the late 17th century, the freehold was inherited from the property tycoon Sir Hugh Audley by the great heiress Mary Davies.Audley and Davies were key figures in the development of Ebury Manor and also the Grosvenor Estate (see Dukes of Westminster), which still exists today. (They are remembered in the streetnames North Audley Street, South Audley Street, and Davies Street, all in Mayfair.)
First houses on the site
Goring House
Possibly the first house erected within the site was that of a Sir William Blake, around 1624.Wright, p. 83. The next owner was Lord Goring, who from 1633 extended Blake's house and developed much of today's garden, then known as Goring Great Garden.Goring, Chapter VHarris, p. 21. He did not, however, obtain the freehold interest in the mulberry garden. Unbeknown to Goring, in 1640 the document "failed to pass the Great Seal before King Charles I fled London, which it needed to do for legal execution".Wright, p. 96. It was this critical omission that helped the British royal family regain the freehold under King George III.Goring, p. 62.
Arlington House
The improvident Goring defaulted on his rents;Goring, p. 58. Henry Bennet, 1st Earl of Arlington obtained the mansion and was occupying it, now known as Goring House, when it burned down in 1674. Arlington House rose on the site—the location of the southern wing of today's palace—the next year. In 1698, John Sheffield, later the first Duke of Buckingham and Normanby, acquired the lease.
Buckingham House
thumb|The palace c. 1837, depicting the Marble Arch, which served as the ceremonial entrance to the Palace precincts. It was moved to make way for the east wing, built in 1847, which enclosed the quadrangle.
The house which forms the architectural core of the palace was built for the first Duke of Buckingham and Normanby in 1703 to the design of William Winde. The style chosen was of a large, three-floored central block with two smaller flanking service wings.Harris, p. 22. Buckingham House was eventually sold by Buckingham's descendant, Sir Charles Sheffield, in 1761 to George III for £21,000.Mackenzie, p. 12 and Nash, p. 18; although the purchase price is given by Wright p. 142 as £28,000 Sheffield's leasehold on the mulberry garden site, the freehold of which was still owned by the royal family, was due to expire in 1774.Mackenzie, p. 12
From Queen's House to palace
Under the new crown ownership, the building was originally intended as a private retreat for King George III's wife, Queen Charlotte, and was accordingly known as The Queen's House. Remodelling of the structure began in 1762.Harris, p. 24. In 1775, an Act of Parliament settled the property on Queen Charlotte, in exchange for her rights to Somerset House (see Old and New London, below), and 14 of their 15 children were born there. Some furnishings were transferred from Carlton House, and others had been bought in France after the French RevolutionJones, p. 42. of 1789. While St James's Palace remained the official and ceremonial royal residence,Westminster: Buckingham Palace, Old and New London: Volume 4 (1878), pp. 61–74. Date accessed: 3 February 2009. The tradition persists of foreign ambassadors being formally accredited to "the Court of St James's", even though it is at Buckingham Palace that they present their credentials and staff to the Monarch upon their appointment. the name "Buckingham-palace" was in use from at least 1791.
After his accession to the throne in 1820, King George IV continued the renovation with the idea in mind of a small, comfortable home. While the work was in progress, in 1826, the King decided to modify the house into a palace with the help of his architect John Nash.Harris, pp. 30–31. The external façade was designed keeping in mind the French neo-classical influence preferred by George IV. The cost of the renovations grew dramatically, and by 1829 the extravagance of Nash's designs resulted in his removal as architect. On the death of George IV in 1830, his younger brother King William IV hired Edward Blore to finish the work.Harris, p. 33. After the destruction of the Palace of Westminster by fire in 1834, William considered converting the palace into the new Houses of Parliament.
Home of the monarch
Buckingham Palace finally became the principal royal residence in 1837, on the accession of Queen Victoria, who was the first monarch to reside there; her predecessor William IV had died before its completion.Hedley, p. 10. While the state rooms were a riot of gilt and colour, the necessities of the new palace were somewhat less luxurious. For one thing, it was reported the chimneys smoked so much that the fires had to be allowed to die down, and consequently the court shivered in icy magnificence.Woodham-Smith, p. 249. Ventilation was so bad that the interior smelled, and when it was decided to install gas lamps, there was a serious worry about the build-up of gas on the lower floors. It was also said that staff were lax and lazy and the palace was dirty. Following the Queen's marriage in 1840, her husband, Prince Albert, concerned himself with a reorganisation of the household offices and staff, and with the design faults of the palace. The problems were all rectified by the close of 1840. However, the builders were to return within the decade.
By 1847, the couple had found the palace too small for court life and their growing family,Harris, de Bellaigue & Miller, p. 33. and consequently the new wing, designed by Edward Blore, was built by Thomas Cubitt,Holland & Hannen and Cubitts – The Inception and Development of a Great Building Firm, published 1920, p. 35. enclosing the central quadrangle. The large East Front, facing The Mall, is today the "public face" of Buckingham Palace, and contains the balcony from which the royal family acknowledge the crowds on momentous occasions and after the annual Trooping the Colour. The ballroom wing and a further suite of state rooms were also built in this period, designed by Nash's student Sir James Pennethorne.
Before Prince Albert's death, the palace was frequently the scene of musical entertainments,Hedley, p. 19. and the greatest contemporary musicians entertained at Buckingham Palace. The composer Felix Mendelssohn is known to have played there on three occasions.Healey, pp. 137–138. Johann Strauss II and his orchestra played there when in England.Healey, p. 122. Strauss's "Alice Polka" was first performed at the palace in 1849 in honour of the queen's daughter, Princess Alice. Under Victoria, Buckingham Palace was frequently the scene of lavish costume balls, in addition to the usual royal ceremonies, investitures and presentations.
Widowed in 1861, the grief-stricken Queen withdrew from public life and left Buckingham Palace to live at Windsor Castle, Balmoral Castle and Osborne House. For many years the palace was seldom used, even neglected. In 1864, a note was found pinned to the fence of Buckingham Palace, saying: "These commanding premises to be let or sold, in consequence of the late occupant's declining business." Eventually, public opinion persuaded the Queen to return to London, though even then she preferred to live elsewhere whenever possible. Court functions were still held at Windsor Castle, presided over by the sombre Queen habitually dressed in mourning black, while Buckingham Palace remained shuttered for most of the year.Robinson, p. 9.
Interior
thumb|left|Piano nobile of Buckingham Palace. A: State Dining Room; B: Blue Drawing Room; C: Music Room; D: White Drawing Room; E: Royal Closet; F: Throne Room; G: Green Drawing Room; H: Cross Gallery; J: Ballroom; K: East Gallery; L: Yellow Drawing Room; M: Centre/Balcony Room; N: Chinese Luncheon Room; O: Principal Corridor; P: Private Apartments; Q: Service Areas; W: The Grand staircase. On the ground floor: R: Ambassador's Entrance; T: Grand Entrance. The areas defined by shaded walls represent lower minor wings. Note: This is an unscaled sketch plan for reference only. Proportions of some rooms may slightly differ in reality.
The palace measures by , is high and contains over of floorspace. The floor area is smaller than the Royal Palace of Madrid, the Papal Palace and Quirinal Palace in Rome, the Louvre in Paris, the Hofburg Palace in Vienna, and the Forbidden City.Robinson, p. 11. There are 775 rooms, including 19 state rooms, 52 principal bedrooms, 188 staff bedrooms, 92 offices, and 78 bathrooms. The palace also has its own post office, cinema, swimming pool, doctor's surgery, and jeweller's workshop.
The principal rooms are contained on the piano nobile behind the west-facing garden façade at the rear of the palace. The centre of this ornate suite of state rooms is the Music Room, its large bow the dominant feature of the façade. Flanking the Music Room are the Blue and the White Drawing Rooms. At the centre of the suite, serving as a corridor to link the state rooms, is the Picture Gallery, which is top-lit and long.Harris, p. 41. The Gallery is hung with numerous works including some by Rembrandt, van Dyck, Rubens and Vermeer;Harris, pp. 78–79.Healey, pp. 387–388. other rooms leading from the Picture Gallery are the Throne Room and the Green Drawing Room. The Green Drawing Room serves as a huge anteroom to the Throne Room, and is part of the ceremonial route to the throne from the Guard Room at the top of the Grand Staircase. The Guard Room contains white marble statues of Queen Victoria and Prince Albert, in Roman costume, set in a tribune lined with tapestries. These very formal rooms are used only for ceremonial and official entertaining, but are open to the public every summer.
thumb|The Duke of Edinburgh seated in the Chinese Luncheon Room, one of a series of Chinese themed rooms on the piano nobile of the east wing. The fireplace, theoretically more Indian than Chinese, was designed by Robert Jones and sculpted by Richard Westmacott. It was formerly in the Music Room at the Brighton Pavilion. The ornate clock, known as the Kylin Clock was made in Jingdezhen, Jiangxi Province, China, in the second half of the 18th century; it has a later movement by Benjamin Vulliamy circa 1820.Harris, de Bellaigue & Miller, p. 135.
Directly underneath the State Apartments is a suite of slightly less grand rooms known as the semi-state apartments. Opening from the Marble Hall, these rooms are used for less formal entertaining, such as luncheon parties and private audiences. Some of the rooms are named and decorated for particular visitors, such as the 1844 Room, decorated in that year for the State visit of Tsar Nicholas I of Russia, and, on the other side of the Bow Room, the 1855 Room, in honour of the visit of Emperor Napoleon III of France.Harris, p. 81. At the centre of this suite is the Bow Room, through which thousands of guests pass annually to the Queen's Garden Parties in the Gardens.Harris, p. 40. The Queen and Prince Philip use a smaller suite of rooms in the north wing.
Between 1847 and 1850, when Blore was building the new east wing, the Brighton Pavilion was once again plundered of its fittings. As a result, many of the rooms in the new wing have a distinctly oriental atmosphere. The red and blue Chinese Luncheon Room is made up from parts of the Brighton Banqueting and Music Rooms with a large oriental chimney piece sculpted by Richard Westmacott.Harris, de Bellaigue & Miller, p. 87. The Yellow Drawing Room has wallpaper supplied in 1817 for the Brighton Saloon, and a chimney piece which is a European vision of how the Chinese chimney piece may appear. It has nodding mandarins in niches and fearsome winged dragons, designed by Robert Jones.Healey, pp. 159–160.
At the centre of this wing is the famous balcony with the Centre Room behind its glass doors. This is a Chinese-style saloon enhanced by Queen Mary, who, working with the designer Sir Charles Allom, created a more "binding"Harris, de Bellaigue & Miller, p. 93. Chinese theme in the late 1920s, although the lacquer doors were brought from Brighton in 1873. Running the length of the piano nobile of the east wing is the great gallery, modestly known as the Principal Corridor, which runs the length of the eastern side of the quadrangle.Harris, de Bellaigue & Miller, p. 91. It has mirrored doors, and mirrored cross walls reflecting porcelain pagodas and other oriental furniture from Brighton. The Chinese Luncheon Room and Yellow Drawing Room are situated at each end of this gallery, with the Centre Room obviously placed in the centre.
The original early 19th-century interior designs, many of which still survive, included widespread use of brightly coloured scagliola and blue and pink lapis, on the advice of Sir Charles Long. King Edward VII oversaw a partial redecoration in a Belle époque cream and gold colour scheme.Jones, p. 43.
When paying a state visit to Britain, foreign heads of state are usually entertained by the Queen at Buckingham Palace. They are allocated a large suite of rooms known as the Belgian Suite, situated at the foot of the Minister's Staircase, on the ground floor of the north-facing Garden Wing. The rooms of the suite are linked by narrow corridors; one of them is given extra height and perspective by saucer domes designed by Nash in the style of Soane.Harris, p. 82. A second corridor in the suite has Gothic-influenced crossover vaulting. The Belgian Rooms themselves were decorated in their present style and named after Prince Albert's uncle Léopold I, first King of the Belgians. In 1936, the suite briefly became the private apartments of the palace when they were occupied by King Edward VIII.
Court ceremonies
Investitures
Investitures, which include the conferring of knighthoods by dubbing with a sword, and other awards take place in the palace's Ballroom, built in 1854. At long, wide and high, it is the largest room in the palace. It has replaced the throne room in importance and use. During investitures, the Queen stands on the throne dais beneath a giant, domed velvet canopy, known as a shamiana or a baldachin, that was used at the Delhi Durbar in 1911.Harris, p. 72. A military band plays in the musicians' gallery as award recipients approach the Queen and receive their honours, watched by their families and friends.Healey, p. 364.
State banquets
State banquets also take place in the Ballroom; these formal dinners are held on the first evening of a state visit by a foreign head of state. On these occasions, for up to 170 guests in formal "white tie and decorations", including tiaras, the dining table is laid with the Grand Service, a collection of silver-gilt plate made in 1811 for the Prince of Wales, later George IV. The largest and most formal reception at Buckingham Palace takes place every November when the Queen entertains members of the diplomatic corps.Healey, p. 362. On this grand occasion, all the state rooms are in use, as the royal family proceed through them,Hedley, p. 16. beginning at the great north doors of the Picture Gallery. As Nash had envisaged, all the large, double-mirrored doors stand open, reflecting the numerous crystal chandeliers and sconces, creating a deliberate optical illusion of space and light.Robinson, p. 18.
Other ceremonies and functions
Smaller ceremonies such as the reception of new ambassadors take place in the "1844 Room". Here too, the Queen holds small lunch parties, and often meetings of the Privy Council. Larger lunch parties often take place in the curved and domed Music Room, or the State Dining Room. On all formal occasions, the ceremonies are attended by the Yeomen of the Guard in their historic uniforms, and other officers of the court such as the Lord Chamberlain.Healey, pp. 363–365.
Since the bombing of the palace chapel in World War II, royal christenings have sometimes taken place in the Music Room. The Queen's first three children were all baptised there.Robinson, p. 49.
The largest functions of the year are the Queen's Garden Parties for up to 8,000 invitees in the Garden.
Former ceremonial at the Palace
Court dress
thumb|right|The 1844 Room, a sitting room of the Belgian Suite, also serves as an audience room and is often used for personal investitures.
thumb|The Queen with President Nixon in the ground floor Marble Hall
Formerly, men not wearing military uniform wore knee breeches of an 18th-century design. Women's evening dress included trains and tiaras or feathers in their hair (or both). The dress code governing formal court uniform and dress has progressively relaxed. After World War I, when Queen Mary wished to follow fashion by raising her skirts a few inches from the ground, she requested a lady-in-waiting to shorten her own skirt first to gauge the king's reaction. King George V was horrified, so the queen kept her hemline unfashionably low.Healey, p. 233, quoting The Memoirs of Mabell, Countess of Airlie, edited and arranged by Jennifer Ellis, London: Hutchinson, 1962. Following their accession in 1936, King George VI and his consort, Queen Elizabeth, allowed the hemline of daytime skirts to rise. Today, there is no official dress code. Most men invited to Buckingham Palace in the daytime choose to wear service uniform or lounge suits; a minority wear morning coats, and in the evening, depending on the formality of the occasion, black tie or white tie.
"Coming Out" - Court presentation of débutantes
Débutantes were aristocratic young ladies making their first entrée into society through presentation to the monarch at court. These occasions, known as "coming out", took place at the palace from the reign of Edward VII. Wearing full court dress, with three ostrich feathers in their hair, débutantes entered, curtsied, and performed a backwards walk and a further curtsey, while manoeuvring a dress train of prescribed length. (The ceremony, known as an evening court, corresponded to the "court drawing rooms" of Victoria's reign.)Peacocke, pp. 178–179, 244–247. After World War II, the ceremony was replaced by less formal afternoon receptions, usually omitting curtsies and court dress.Peacocke, pp. 264–265. In 1958, the Queen abolished the presentation parties for débutantes, replacing them with Garden Parties.Princess Margaret is reputed to have remarked of the débutante presentations: "We had to put a stop to it, every tart in London was getting in." See Blaikie, Thomas (2002). You look awfully like the Queen: Wit and Wisdom from the House of Windsor. London: Harper Collins. ISBN 0-00-714874-7
Security breaches
The boy Jones was an intruder who gained entry to the palace on three occasions between 1838 and 1841 as recorded by Charles Dickens some 40 years later.Dickens, Charles (5 July 1885) "The boy Jones", All the Year Round, pp. 234–37. In 1982, Michael Fagan broke into the palace twice but, contrary to media reports of the time, did not speak to the Queen. It was only in 2007 that trespassing on the palace grounds became a criminal offence.
Garden, Royal Mews and The Mall
thumb|The west façade of Buckingham Palace, faced in Bath stone, seen from the palace garden
At the rear of the palace is the large and park-like garden, which together with its lake is the largest private garden in London. (Museum of London.) Retrieved 2 May 2009. There, the Queen hosts her annual garden parties each summer, and also holds large functions to celebrate royal milestones, such as jubilees. It covers , and includes a helicopter landing area, a lake, and a tennis court.
Adjacent to the palace is the Royal Mews, also designed by Nash, where the royal carriages, including the Gold State Coach, are housed. This rococo gilt coach, designed by Sir William Chambers in 1760, has painted panels by G. B. Cipriani. It was first used for the State Opening of Parliament by George III in 1762 and has been used by the monarch for every coronation since George IV. It was last used for the Golden Jubilee of Elizabeth II. Also housed in the mews are the coach horses used at royal ceremonial processions.
The Mall, a ceremonial approach route to the palace, was designed by Sir Aston Webb and completed in 1911 as part of a grand memorial to Queen Victoria. It extends from Admiralty Arch, across St James's Park to the Victoria Memorial. This route is used by the cavalcades and motorcades of visiting heads of state, and by the royal family on state occasions such as the annual Trooping the Colour.
Modern history
thumb|Visiting heads of state are received by the Queen at either Buckingham Palace or Windsor Castle. Here, United States President Barack Obama and Michelle Obama are greeted in the first-floor audience chamber in the private apartments in the north wing.
In 1901 the accession of Edward VII saw new life breathed into the palace. The new King and his wife Queen Alexandra had always been at the forefront of London high society, and their friends, known as "the Marlborough House Set", were considered to be the most eminent and fashionable of the age. Buckingham Palace—the Ballroom, Grand Entrance, Marble Hall, Grand Staircase, vestibules and galleries redecorated in the Belle époque cream and gold colour scheme they retain today—once again became a setting for entertaining on a majestic scale, though some felt King Edward's heavy redecorations were at odds with Nash's original work.Robinson (Page 9) asserts that the decorations, including plaster swags and other decorative motifs, are "finicky" and "at odds with Nash's original detailing".
The last major building work took place during the reign of King George V when, in 1913, Sir Aston Webb redesigned Blore's 1850 East Front to resemble in part Giacomo Leoni's Lyme Park in Cheshire. This new, refaced principal façade (of Portland stone) was designed to be the backdrop to the Victoria Memorial, a large memorial statue of Queen Victoria, placed outside the main gates.Harris, p. 34. George V, who had succeeded Edward VII in 1910, had a more serious personality than his father; greater emphasis was now placed on official entertaining and royal duties than on lavish parties.Healey, p. 185. He arranged a series of command performances featuring jazz musicians such as the Original Dixieland Jazz Band (1919) – the first jazz performance for a head of state, Sidney Bechet, and Louis Armstrong (1932), which earned the palace a nomination in 2009 for a (Kind of) Blue Plaque by the Brecon Jazz Festival as one of the venues making the greatest contribution to jazz music in the United Kingdom. George V's wife Queen Mary was a connoisseur of the arts, and took a keen interest in the Royal Collection of furniture and art, both restoring and adding to it. Queen Mary also had many new fixtures and fittings installed, such as the pair of marble Empire-style chimneypieces by Benjamin Vulliamy, dating from 1810, which the Queen had installed in the ground floor Bow Room, the huge low room at the centre of the garden façade. Queen Mary was also responsible for the decoration of the Blue Drawing Room.Healey pp. 221–222. This room, long, previously known as the South Drawing Room, has a ceiling designed specially by Nash, coffered with huge gilt console brackets.Harris, p. 63.
thumb|The Victoria Memorial was created by sculptor Sir Thomas Brock in 1911 and erected in front of the main gates at the palace on a surround constructed by architect Sir Aston Webb.
During World War I, the palace, then the home of King George V and Queen Mary, escaped unscathed. Its more valuable contents were evacuated to Windsor but the royal family remained in situ. The King imposed rationing at the palace, much to the dismay of his guests and household. To the King's later regret, David Lloyd George persuaded him to go further by ostentatiously locking the wine cellars and refraining from alcohol, to set a good example to the supposedly inebriated working class. The workers continued to imbibe and the King was left unhappy at his enforced abstinence.Rose, pp. 178–179. In 1938, the north-west pavilion, designed by Nash as a conservatory, was converted into a swimming pool.Allison and Riddell, p. 69.
During World War II, the palace was bombed nine times, the most serious and publicised of which resulted in the destruction of the palace chapel in 1940. Coverage of this event was played in cinemas all over the UK to show the common suffering of rich and poor. One bomb fell in the palace quadrangle while King George VI and Queen Elizabeth were in residence, and many windows were blown in and the chapel destroyed. War-time coverage of such incidents was severely restricted, however. The King and Queen were filmed inspecting their bombed home, the smiling Queen, as always, immaculately dressed in a hat and matching coat seemingly unbothered by the damage around her. It was at this time the Queen famously declared: "I'm glad we have been bombed. Now I can look the East End in the face". The royal family were seen as sharing their subjects' hardship, as The Sunday Graphic reported:
On 15 September 1940, known as the Battle of Britain Day, an RAF pilot, Ray Holmes of No. 504 Squadron RAF, rammed a German bomber he believed was going to bomb the Palace. Holmes had run out of ammunition, made the quick decision to ram the bomber, and bailed out; both aircraft crashed. In fact the Dornier Do 17 bomber was empty: it had already been damaged, two of its crew had been killed and the remainder had bailed out. Its pilot, Feldwebel Robert Zehbe, landed, only to die later of wounds suffered during the attack. During the Dornier's descent, it somehow unloaded its bombs, one of which hit the Palace. It then crashed into the forecourt of London Victoria station.Price, Alfred. The Battle of Britain Day, Greenhill Books, London, 1990, pp. 49–50 and Stephen Bungay, The Most Dangerous Enemy: A History of the Battle of Britain. Aurum Press, London, 2000, p. 325. The bomber's engine was later exhibited at the Imperial War Museum in London. The British pilot became a King's Messenger after the war, and died at the age of 90 in 2005.
On VE Day—8 May 1945—the palace was the centre of British celebrations. The King, Queen, Princess Elizabeth (the future Queen), and Princess Margaret appeared on the balcony, with the palace's blacked-out windows behind them, to the cheers from a vast crowd in the Mall.1945: Rejoicing at end of war in Europe (BBC On this day.) Retrieved 3 February 2009. The damaged Palace was carefully restored after the War by John Mowlem & Co. It was designated a Grade I listed building in 1970.
21st century: Royal use and public access
Every year some 50,000 invited guests are entertained at garden parties, receptions, audiences and banquets. Three Garden Parties are held in the summer, usually in July. The Forecourt of Buckingham Palace is used for Changing of the Guard, a major ceremony and tourist attraction (daily from April to July; every other day in other months).
thumb|The Queen's Gallery
The palace, like Windsor Castle, is owned by the reigning monarch in right of the Crown. It is not the monarch's personal property, unlike Sandringham House and Balmoral Castle. Many of the contents from Buckingham Palace, Windsor Castle, Kensington Palace, and St James's Palace are part of the Royal Collection, held in trust by the Sovereign; they can, on occasion, be viewed by the public at the Queen's Gallery, near the Royal Mews. Unlike the palace and the castle, the purpose-built gallery is open continually and displays a changing selection of items from the collection. It occupies the site of the chapel destroyed by an air raid in World War II. The palace's state rooms have been open to the public during August and September and on selected dates throughout the year since 1993. The money raised in entry fees was originally put towards the rebuilding of Windsor Castle after the 1992 fire devastated many of its state rooms. In the year to 31 March 2016, 519,000 people visited the palace.
Her Majesty's Government is responsible for maintaining the palace in exchange for the profits made by the Crown Estate. In November 2015, the State Dining Room was closed for six months because its ceiling had become potentially dangerous. A 10-year schedule of maintenance work, including new plumbing, wiring, boilers and radiators, and the installation of solar panels on the roof, has been estimated to cost £369 million and was approved by the prime minister in November 2016. It will be funded by a temporary increase in the Sovereign Grant paid from the income of the Crown Estate and is intended to extend the building's working life by at least 50 years.
Thus, Buckingham Palace is a symbol and home of the British monarchy, an art gallery and a tourist attraction. Behind the gilded railings and gates which were completed by the Bromsgrove Guild in 1911 and Webb's famous façade, which has been described in a book published by the Royal Collection as looking "like everybody's idea of a palace", is not only a weekday home of the Queen and Prince Philip but also the London residence of the Duke of York and the Earl and Countess of Wessex. The palace also houses the offices of the Queen, Prince Philip, the Duke of York, the Earl and Countess of Wessex, the Princess Royal, and Princess Alexandra, and is the workplace of more than 800 people.
See also
Flags at Buckingham Palace
List of British royal residences
Queen's Guard
Notes
References
Allison, Ronald; Riddell, Sarah (1991). The Royal Encyclopedia. London: Macmillan. ISBN 0-333-53810-2
Blaikie, Thomas (2002). You Look Awfully Like the Queen: Wit and Wisdom from the House of Windsor. London: Harper Collins. ISBN 0-00-714874-7.
Goring, O. G. (1937). From Goring House to Buckingham Palace. London: Ivor Nicholson & Watson.
Harris, John; de Bellaigue, Geoffrey; & Miller, Oliver (1968). Buckingham Palace. London: Nelson. ISBN 0-17-141011-4.
Healey, Edna (1997). The Queen's House: A Social History of Buckingham Palace. London: Penguin Group. ISBN 0-7181-4089-3.
Hedley, Olwen (1971) The Pictorial History of Buckingham Palace. Pitkin, ISBN 0-85372-086-X.
Mackenzie, Compton (1953). The Queen's House. London: Hutchinson.
Nash, Roy (1980). Buckingham Palace: The Place and the People. London: Macdonald Futura. ISBN 0-354-04529-6.
Robinson, John Martin (1999). Buckingham Palace. Published by The Royal Collection, St James's Palace, London ISBN 1-902163-36-2.
Williams, Neville (1971). Royal Homes. Lutterworth Press. ISBN 0-7188-0803-7.
Woodham-Smith, Cecil (1973). Queen Victoria (vol 1) Hamish Hamilton Ltd.
Wright, Patricia (1999; first published 1996). The Strange History of Buckingham Palace. Stroud, Gloucs.: Sutton Publishing Ltd. ISBN 0-7509-1283-9.
External links
Buckingham Palace at the Royal Family website
Account of Buckingham Palace, with prints of Arlington House and Buckingham House from Old and New London (1878)
Account of the acquisition of the Manor of Ebury from Survey of London (1977)
The State Rooms, Buckingham Palace at the Royal Collection Trust
Age of Enlightenment
The Enlightenment (also known as the Age of Enlightenment or the Age of Reason; in French, le Siècle des Lumières; and in German, Aufklärung, 'Enlightenment') was an intellectual movement which dominated the world of ideas in Europe in the 18th century. The Enlightenment included a range of ideas centered on reason as the primary source of authority and legitimacy, and came to advance ideals like liberty, progress, tolerance, fraternity, constitutional government, and separation of church and state. In France, the central doctrines of les Lumières were individual liberty and religious tolerance, in opposition to an absolute monarchy and the fixed dogmas of the Roman Catholic Church. The Enlightenment was marked by an emphasis on the scientific method and reductionism along with increased questioning of religious orthodoxy – an attitude captured by the phrase Sapere aude, "Dare to know".
French historians traditionally place the Enlightenment between 1715, the year that Louis XIV died, and 1789, the beginning of the French Revolution. Some recent historians begin the period in the 1620s, with the start of the scientific revolution. The philosophes of the period widely circulated their ideas through meetings at scientific academies, Masonic lodges, literary salons, coffee houses, and through printed books and pamphlets. The ideas of the Enlightenment undermined the authority of the monarchy and the Church, and paved the way for the political revolutions of the 18th and 19th centuries. A variety of 19th-century movements, including liberalism and neo-classicism, trace their intellectual heritage back to the Enlightenment.Eugen Weber, Movements, Currents, Trends: Aspects of European Thought in the Nineteenth and Twentieth Centuries (1992)
The Age of Enlightenment was preceded by and closely associated with the scientific revolution.I. Bernard Cohen, "Scientific Revolution and Creativity in the Enlightenment." Eighteenth-Century Life 7.2 (1982): 41–54. Earlier philosophers whose work influenced the Enlightenment included Francis Bacon, René Descartes, John Locke, and Baruch Spinoza.Sootin, Harry. "Isaac Newton." New York, Messner (1955) The major figures of the Enlightenment included Cesare Beccaria, Voltaire, Denis Diderot, Jean-Jacques Rousseau, David Hume, Adam Smith, and Immanuel Kant. Some European rulers, including Catherine II of Russia, Joseph II of Austria and Frederick II of Prussia, tried to apply Enlightenment thought on religious and political tolerance, which became known as enlightened absolutism.Jeremy Black, "Ancien Regime and Enlightenment. Some Recent Writing on Seventeenth-and Eighteenth-Century Europe," European History Quarterly 22.2 (1992): 247–55. Benjamin Franklin visited Europe repeatedly and contributed actively to the scientific and political debates there and brought the newest ideas back to Philadelphia. Thomas Jefferson closely followed European ideas and later incorporated some of the ideals of the Enlightenment into the Declaration of Independence (1776). Others like James Madison incorporated them into the Constitution in 1787.Robert A. Ferguson, The American Enlightenment, 1750–1820 (1994).
The most influential publication of the Enlightenment was the Encyclopédie (Encyclopaedia). Published between 1751 and 1772 in thirty-five volumes, it was compiled by Denis Diderot, Jean le Rond d'Alembert (until 1759), and a team of 150 scientists and philosophers, and it helped spread the ideas of the Enlightenment across Europe and beyond.Robert Darnton, The Business of Enlightenment: a publishing history of the Encyclopédie, 1775–1800 (2009).
Other landmark publications were Voltaire's Dictionnaire philosophique (Philosophical Dictionary; 1764) and Letters on the English (1733); Rousseau's Discourse on Inequality (1754) and The Social Contract (1762); Adam Smith's The Wealth of Nations (1776); and Montesquieu's Spirit of the Laws (1748). The ideas of the Enlightenment played a major role in inspiring the French Revolution, which began in 1789. After the Revolution, the Enlightenment was followed by an opposing intellectual movement known as Romanticism.
Philosophy
René Descartes' rationalist philosophy laid the foundation for Enlightenment thinking. His attempt to ground the sciences on a secure metaphysical foundation was less successful than his method of doubt, which, applied to philosophical questions, led to a dualistic doctrine of mind and matter. His skepticism was refined by John Locke's 1690 Essay Concerning Human Understanding and David Hume's writings in the 1740s. His dualism was challenged by Spinoza's uncompromising assertion of the unity of matter in his Tractatus (1670) and Ethics (1677).
These laid down two distinct lines of Enlightenment thought: the moderate variety, following Descartes, Locke and Christian Wolff, which sought accommodation between reform and the traditional systems of power and faith, and the radical Enlightenment, inspired by the philosophy of Spinoza, which advocated democracy, individual liberty, freedom of expression, and the eradication of religious authority. The moderate variety tended to be deistic, whereas the radical tendency separated the basis of morality entirely from theology. Both lines of thought were eventually opposed by a conservative Counter-Enlightenment, which sought a return to faith.
In the mid-18th century, Paris became the center of an explosion of philosophic and scientific activity challenging traditional doctrines and dogmas. The philosophic movement was led by Voltaire and Jean-Jacques Rousseau, who argued for a society based upon reason rather than faith and Catholic doctrine, for a new civil order based on natural law, and for science based on experiments and observation. The political philosopher Montesquieu introduced the idea of a separation of powers in a government, a concept which was enthusiastically adopted by the authors of the United States Constitution. While the Philosophes of the French Enlightenment were not revolutionaries, and many were members of the nobility, their ideas played an important part in undermining the legitimacy of the Old Regime and shaping the French Revolution.
thumb|upright|German philosopher Immanuel Kant
Francis Hutcheson, a moral philosopher, described the utilitarian and consequentialist principle that virtue is that which provides, in his words, "the greatest happiness for the greatest numbers". Much of what is incorporated in the scientific method (the nature of knowledge, evidence, experience, and causation) and some modern attitudes towards the relationship between science and religion were developed by his protégés David Hume and Adam Smith. Hume became a major figure in the skeptical philosophical and empiricist traditions of philosophy.
Immanuel Kant (1724–1804) tried to reconcile rationalism and religious belief, individual freedom and political authority, as well as map out a view of the public sphere through private and public reason. Kant's work continued to shape German thought, and indeed all of European philosophy, well into the 20th century.Manfred Kuehn, Kant: A Biography (2001). Mary Wollstonecraft was one of England's earliest feminist philosophers. She argued for a society based on reason, and that women, as well as men, should be treated as rational beings. She is best known for her work A Vindication of the Rights of Woman (1792).Mary Wollstonecraft, A Vindication of the Rights of Woman (Renascence Editions, 2000) online
Science
Science played an important role in Enlightenment discourse and thought. Many Enlightenment writers and thinkers had backgrounds in the sciences and associated scientific advancement with the overthrow of religion and traditional authority in favour of the development of free speech and thought. Scientific progress during the Enlightenment included the discovery of carbon dioxide (fixed air) by the chemist Joseph Black, the argument for deep time by the geologist James Hutton, and the improvement of the steam engine by James Watt.Bruce P. Lenman, Integration and Enlightenment: Scotland, 1746–1832 (1993) excerpt and text search The experiments of Lavoisier were used to create the first modern chemical plants in Paris, and the experiments of the Montgolfier Brothers enabled them to launch the first manned flight in a hot-air balloon on 21 November 1783, from the Château de la Muette, near the Bois de Boulogne.Sarmant, Thierry, Histoire de Paris, p. 120.
Broadly speaking, Enlightenment science greatly valued empiricism and rational thought, and was embedded with the Enlightenment ideal of advancement and progress. The study of science, under the heading of natural philosophy, was divided into physics and a conglomerate grouping of chemistry and natural history, which included anatomy, biology, geology, mineralogy, and zoology.Porter (2003), 79–80. As with most Enlightenment views, the benefits of science were not seen universally; Rousseau criticized the sciences for distancing man from nature and not operating to make people happier.Burns (2003), entry: 7,103. Science during the Enlightenment was dominated by scientific societies and academies, which had largely replaced universities as centres of scientific research and development. Societies and academies were also the backbone of the maturation of the scientific profession. Another important development was the popularization of science among an increasingly literate population. Philosophes introduced the public to many scientific theories, most notably through the Encyclopédie and the popularization of Newtonianism by Voltaire and Émilie du Châtelet. Some historians have marked the 18th century as a drab period in the history of science;see Hall (1954), iii; Mason (1956), 223. however, the century saw significant advancements in the practice of medicine, mathematics, and physics; the development of biological taxonomy; a new understanding of magnetism and electricity; and the maturation of chemistry as a discipline, which established the foundations of modern chemistry.
Scientific academies and societies grew out of the Scientific Revolution as the creators of scientific knowledge in contrast to the scholasticism of the university.Gillispie, (1980), p. xix. During the Enlightenment, some societies created or retained links to universities. However, contemporary sources distinguished universities from scientific societies by claiming that the university's utility was in the transmission of knowledge, while societies functioned to create knowledge.James E. McClellan III, "Learned Societies," in Encyclopedia of the Enlightenment, ed. Alan Charles Kors (Oxford: Oxford University Press, 2003) http://www.oup.com/us/catalog/general/subject/HistoryWorld/Modern/?view=usa&ci=9780195104301 (accessed on June 8, 2008). As the role of universities in institutionalized science began to diminish, learned societies became the cornerstone of organized science. Official scientific societies were chartered by the state in order to provide technical expertise.Porter, (2003), p. 91. Most societies were granted permission to oversee their own publications, control the election of new members, and the administration of the society.See Gillispie, (1980), "Conclusion." After 1700, a tremendous number of official academies and societies were founded in Europe, and by 1789 there were over seventy official scientific societies. In reference to this growth, Bernard de Fontenelle coined the term "the Age of Academies" to describe the 18th century.Porter, (2003), p. 90.
The influence of science also began appearing more commonly in poetry and literature during the Enlightenment. Some poetry became infused with scientific metaphor and imagery, while other poems were written directly about scientific topics. Sir Richard Blackmore committed the Newtonian system to verse in Creation, a Philosophical Poem in Seven Books (1712). After Newton's death in 1727, poems were composed in his honour for decades.Burns, (2003), entry: 158. James Thomson (1700–1748) penned his "Poem to the Memory of Newton," which mourned the loss of Newton, but also praised his science and legacy.Thomson, (1786), p. 203.
Sociology, economics and law
thumb|upright|Cesare Beccaria, father of classical criminal theory (1738–1794)
Hume and other Scottish Enlightenment thinkers developed a 'science of man', which was expressed historically in works by authors including James Burnett, Adam Ferguson, John Millar, and William Robertson, all of whom merged a scientific study of how humans behaved in ancient and primitive cultures with a strong awareness of the determining forces of modernity. Modern sociology largely originated from this movement,A. Swingewood, "Origins of Sociology: The Case of the Scottish Enlightenment", The British Journal of Sociology, vol. 21, no. 2 (June 1970), pp. 164–80 in JSTOR. and Hume's philosophical concepts, which directly influenced James Madison (and thus the U.S. Constitution) and were popularised by Dugald Stewart, would be the basis of classical liberalism.D. Daiches, P. Jones and J. Jones, A Hotbed of Genius: The Scottish Enlightenment, 1730–1790 (1986).
Adam Smith published The Wealth of Nations, often considered the first work on modern economics, in 1776. It had an immediate impact on British economic policy that continues into the 21st century.M. Fry, Adam Smith's Legacy: His Place in the Development of Modern Economics (Routledge, 1992). It was immediately preceded and influenced by Anne-Robert-Jacques Turgot, Baron de Laune's drafts of Reflections on the Formation and Distribution of Wealth (Paris, 1766). (Smith acknowledged indebtedness and possibly was the original English translator.)The Illusion of Free Markets, Bernard E. Harcourt, p. 260, notes 11–14.
Cesare Beccaria, a jurist, criminologist, philosopher, and politician and one of the great Enlightenment writers, became famous for his masterpiece Of Crimes and Punishments (1764), later translated into 22 languages, which condemned torture and the death penalty, and was a founding work in the field of penology and the Classical School of criminology by promoting criminal justice. Another prominent intellectual was Francesco Mario Pagano, who wrote important studies such as Saggi Politici (Political Essays, 1783), one of the major works of the Enlightenment in Naples, and Considerazioni sul processo criminale (Considerations on the criminal trial, 1787), which established him as an international authority on criminal law.Roland Sarti, Italy: A Reference Guide from the Renaissance to the Present, Infobase Publishing, 2009, p. 457
Politics
thumb|left|upright|Like other Enlightenment philosophers, Rousseau was critical of the Atlantic slave trade."The Abolition of The Slave Trade"
The Enlightenment has long been hailed as the foundation of modern Western political and intellectual culture.Daniel Brewer, The Enlightenment Past: reconstructing eighteenth-century French thought (2008), p. 1 The Enlightenment brought political modernization to the West, in terms of introducing democratic values and institutions and the creation of modern, liberal democracies. This thesis has been widely accepted by Anglophone scholars and has been reinforced by the large-scale studies by Robert Darnton, Roy Porter and most recently by Jonathan Israel.
Theories of government
thumb|upright|Denmark's minister Johann Struensee, a social reformer ahead of his time, was publicly executed in 1772
John Locke, one of the most influential Enlightenment thinkers, based his governance philosophy in social contract theory, a subject that permeated Enlightenment political thought. The English philosopher Thomas Hobbes ushered in this new debate with his work Leviathan in 1651. Hobbes also developed some of the fundamentals of European liberal thought: the right of the individual; the natural equality of all men; the artificial character of the political order (which led to the later distinction between civil society and the state); the view that all legitimate political power must be "representative" and based on the consent of the people; and a liberal interpretation of law which leaves people free to do whatever the law does not explicitly forbid.Pierre Manent, An Intellectual History of Liberalism (1994) pp. 20–38
Both Locke and Rousseau developed social contract theories in Two Treatises of Government and Discourse on Inequality, respectively. Though their accounts differ considerably, Locke, Hobbes, and Rousseau agreed that a social contract, in which the government's authority lies in the consent of the governed,Lessnoff, Michael H. Social Contract Theory. New York: New York U, 1990. Print. is necessary for man to live in civil society. Locke defines the state of nature as a condition in which humans are rational and follow natural law, and in which all men are born equal and with the right to life, liberty and property. However, when one citizen breaks the Law of Nature, both the transgressor and the victim enter into a state of war, from which it is virtually impossible to break free. Therefore, Locke said that individuals enter into civil society to protect their natural rights via an "unbiased judge" or common authority, such as courts, to appeal to. Contrastingly, Rousseau's conception relies on the supposition that "civil man" is corrupted, while "natural man" has no want he cannot fulfill himself. Natural man is only taken out of the state of nature when the inequality associated with private property is established.Discourse on the Origin of Inequality Rousseau said that people join into civil society via the social contract to achieve unity while preserving individual freedom. This is embodied in the sovereignty of the general will, the moral and collective legislative body constituted by citizens.
Locke is known for his statement that individuals have a right to "Life, Liberty and Property", and his belief that the natural right to property is derived from labor. Tutored by Locke, Anthony Ashley-Cooper, 3rd Earl of Shaftesbury wrote in 1706: "There is a mighty Light which spreads its self over the world especially in those two free Nations of England and Holland; on whom the Affairs of Europe now turn". Locke's theory of natural rights has influenced many political documents, including the United States Declaration of Independence and the French National Constituent Assembly's Declaration of the Rights of Man and of the Citizen.
The philosophes argued that the establishment of a contractual basis of rights would lead to the market mechanism and capitalism, the scientific method, religious tolerance, and the organization of states into self-governing republics through democratic means. In this view, the tendency of the philosophes in particular to apply rationality to every problem is considered the essential change.Lorraine Y. Landry, Marx and the postmodernism debates: an agenda for critical theory (2000) p. 7
Though much of Enlightenment political thought was dominated by social contract theorists, both David Hume and Adam Ferguson criticized this camp. Hume's essay Of the Original Contract argues that governments derived from consent are rarely seen, and civil government is grounded in a ruler's habitual authority and force. It is precisely because of the ruler's authority over-and-against the subject, that the subject tacitly consents; Hume says that the subjects would "never imagine that their consent made him sovereign", rather the authority did so.Of the Original Contract Similarly, Ferguson did not believe citizens built the state, rather polities grew out of social development. In his 1767 An Essay on the History of Civil Society, Ferguson uses the four stages of progress, a theory that was very popular in Scotland at the time, to explain how humans advance from a hunting and gathering society to a commercial and civil society without "signing" a social contract.
Both Rousseau and Locke's social contract theories rest on the presupposition of natural rights, which are not a result of law or custom, but are things that all men have in pre-political societies, and are therefore universal and inalienable. The most famous natural right formulation comes from John Locke in his Second Treatise, when he introduces the state of nature. For Locke the law of nature is grounded on mutual security, or the idea that one cannot infringe on another's natural rights, as every man is equal and has the same inalienable rights. These natural rights include perfect equality and freedom, and the right to preserve life and property. Locke also argued against slavery on the basis that enslaving yourself goes against the law of nature; you cannot surrender your own rights, your freedom is absolute and no one can take it from you. Additionally, Locke argues that one person cannot enslave another because it is morally reprehensible, although he introduces a caveat by saying that enslavement of a lawful captive in time of war would not go against one's natural rights.
Enlightened absolutism
In several nations, rulers welcomed leaders of the Enlightenment at court and asked them to help design laws and programs to reform the system, typically to build stronger national states. These rulers are called "enlightened despots" by historians.Stephen J. Lee, Aspects of European history, 1494–1789 (1990) pp. 258–66 They included Frederick the Great of Prussia, Catherine the Great of Russia, Leopold II of Tuscany, and Joseph II of Austria. Joseph was over-enthusiastic, announcing many reforms that had little support, so that revolts broke out and his regime became a comedy of errors, and nearly all his programs were reversed.Nicholas Henderson, "Joseph II", History Today (March 1991) 41:21–27 Senior ministers Pombal in Portugal and Struensee in Denmark also governed according to Enlightenment ideals. In Poland, the model constitution of 1791 expressed Enlightenment ideals, but was in effect for only one year before the nation was partitioned among its neighbors. More enduring were the cultural achievements, which created a nationalist spirit in Poland.John Stanley, "Towards A New Nation: The Enlightenment and National Revival in Poland", Canadian Review of Studies in Nationalism, 1983, Vol. 10 Issue 2, pp. 83–110
Frederick the Great, the king of Prussia from 1740 to 1786, saw himself as a leader of the Enlightenment and patronized philosophers and scientists at his court in Berlin. Voltaire, who had been imprisoned and maltreated by the French government, was eager to accept Frederick's invitation to live at his palace. Frederick explained, "My principal occupation is to combat ignorance and prejudice ... to enlighten minds, cultivate morality, and to make people as happy as it suits human nature, and as the means at my disposal permit."Giles MacDonogh, Frederick the Great: A Life in Deed and Letters (2001) p. 341
The French Revolution
The Enlightenment has been frequently linked to the French Revolution of 1789. One view of the political changes that occurred during the Enlightenment is that the "consent of the governed" philosophy as delineated by Locke in Two Treatises of Government (1689) represented a paradigm shift from the old governance paradigm under feudalism known as the "divine right of kings." In this view, the revolutions of the late 1700s and early 1800s were caused by the fact that this governance paradigm shift often could not be resolved peacefully, and therefore violent revolution was the result. Clearly a governance philosophy where the king was never wrong was in direct conflict with one whereby citizens by natural law had to consent to the acts and rulings of their government.
Alexis de Tocqueville described the French Revolution as the inevitable result of the radical opposition created in the 18th century between the monarchy and the men of letters of the Enlightenment. These men of letters constituted a sort of "substitute aristocracy that was both all-powerful and without real power." This illusory power came from the rise of "public opinion," born when absolutist centralization removed the nobility and the bourgeoisie from the political sphere. The "literary politics" that resulted promoted a discourse of equality and was hence in fundamental opposition to the monarchical regime.Chartier, 8. See also Alexis de Tocqueville, L'Ancien Régime et la Révolution, 1850, Book Three, Chapter One. De Tocqueville "clearly designates ... the cultural effects of transformation in the forms of the exercise of power".Chartier, 13. Nevertheless, it took another century before cultural approach became central to the historiography, as typified by Robert Darnton, The Business of Enlightenment: A Publishing History of the Encyclopédie, 1775–1800 (1979).
Jonathan Israel asserts that "The prevailing view about the French Revolution not being caused by books and ideas in the first place may be very widely influential, but it is also, on the basis of the detailed evidence, totally indefensible. Indeed, without referring to Radical Enlightenment nothing about the French Revolution makes the slightest sense or can even begin to be provisionally explained."
Religion
Enlightenment era religious commentary was a response to the preceding century of religious conflict in Europe, especially the Thirty Years' War.Margaret C. Jacob, ed. The Enlightenment: Brief History with Documents, Boston: Bedford/St. Martin's, 2001, Introduction, pp. 1–72. Theologians of the Enlightenment wanted to reform their faith to its generally non-confrontational roots and to limit the capacity for religious controversy to spill over into politics and warfare while still maintaining a true faith in God. For moderate Christians, this meant a return to simple Scripture. John Locke abandoned the corpus of theological commentary in favor of an "unprejudiced examination" of the Word of God alone. He determined the essence of Christianity to be a belief in Christ the redeemer and recommended avoiding more detailed debate. Thomas Jefferson in the Jefferson Bible went further; he dropped any passages dealing with miracles, visitations of angels, and the resurrection of Jesus after his death. He tried to extract the practical Christian moral code of the New Testament.
Enlightenment scholars sought to curtail the political power of organized religion and thereby prevent another age of intolerant religious war. Spinoza determined to remove politics from contemporary and historical theology (e.g., disregarding Judaic law).Baruch Spinoza, Theologico-Political Treatise, "Preface," 1677, gutenberg.com Moses Mendelssohn advised affording no political weight to any organized religion, but instead recommended that each person follow what they found most convincing. A good religion based in instinctive morals and a belief in God should not theoretically need force to maintain order in its believers, and both Mendelssohn and Spinoza judged religion on its moral fruits, not the logic of its theology.
A number of novel ideas about religion developed with the Enlightenment, including Deism and talk of atheism. Deism, according to Thomas Paine, is the simple belief in God the Creator, with no reference to the Bible or any other miraculous source. Instead, the Deist relies solely on personal reason to guide his creed,Thomas Paine, Of the Religion of Deism Compared with the Christian Religion, 1804, Internet History Sourcebook which was eminently agreeable to many thinkers of the time. Atheism was much discussed, but there were few proponents. Wilson and Reill note that, "In fact, very few enlightened intellectuals, even when they were vocal critics of Christianity, were true atheists. Rather, they were critics of orthodox belief, wedded rather to skepticism, deism, vitalism, or perhaps pantheism." Some followed Pierre Bayle and argued that atheists could indeed be moral men. Many others like Voltaire held that without belief in a God who punishes evil, the moral order of society was undermined. That is, since atheists gave themselves to no Supreme Authority and no law, and had no fear of eternal consequences, they were far more likely to disrupt society. Bayle (1647–1706) observed that in his day, "prudent persons will always maintain an appearance of [religion]." He believed that even atheists could hold concepts of honor and go beyond their own self-interest to create and interact in society. Locke said that if there were no God and no divine law, the result would be moral anarchy: every individual "could have no law but his own will, no end but himself. He would be a god to himself, and the satisfaction of his own will the sole measure and end of all his actions".
Separation of church and state
The "Radical Enlightenment" promoted the concept of separating church and state, an idea that is often credited to English philosopher John Locke (1632–1704).Feldman, Noah (2005). Divided by God. Farrar, Straus and Giroux, p. 29 ("It took John Locke to translate the demand for liberty of conscience into a systematic argument for distinguishing the realm of government from the realm of religion.") According to his principle of the social contract, Locke said that the government lacked authority in the realm of individual conscience, as this was something rational people could not cede to the government for it or others to control. For Locke, this created a natural right in the liberty of conscience, which he said must therefore remain protected from any government authority.
These views on religious tolerance and the importance of individual conscience, along with the social contract, became particularly influential in the American colonies and the drafting of the United States Constitution.Feldman, Noah (2005). Divided by God. Farrar, Straus and Giroux, p. 29 Thomas Jefferson called for a "wall of separation between church and state" at the federal level. He previously had supported successful efforts to disestablish the Church of England in Virginia,Ferling, 2000, p. 158 and authored the Virginia Statute for Religious Freedom.Mayer, 1994 p. 76 Jefferson's political ideals were greatly influenced by the writings of John Locke, Francis Bacon, and Isaac NewtonHayes, 2008, p. 10 whom he considered the three greatest men that ever lived.Cogliano, 2003, p. 14
National variations
thumb|right|upright=2|Europe at the beginning of the War of the Spanish Succession, 1700
The Enlightenment took hold in most European countries, often with a specific local emphasis. For example, in France it became associated with anti-government and anti-Church radicalism, while in Germany it reached deep into the middle classes, where it expressed a spiritualistic and nationalistic tone without threatening governments or established churches.David N. Livingstone and Charles W. J. Withers, Geography and Enlightenment (1999) Government responses varied widely. In France, the government was hostile, and the philosophes fought against its censorship, sometimes being imprisoned or hounded into exile. The British government for the most part ignored the Enlightenment's leaders in England and Scotland, although it did give Isaac Newton a knighthood and a very lucrative government office.
thumb|left|upright|One leader of the Scottish Enlightenment was Adam Smith, the father of modern economic science.
In the Scottish Enlightenment, Scotland's major cities created an intellectual infrastructure of mutually supporting institutions such as universities, reading societies, libraries, periodicals, museums and masonic lodges. The Scottish network was "predominantly liberal Calvinist, Newtonian, and 'design' oriented in character which played a major role in the further development of the transatlantic Enlightenment".A. Herman, How the Scots Invented the Modern World (Crown Publishing Group, 2001). In France, Voltaire said "we look to Scotland for all our ideas of civilization." The focus of the Scottish Enlightenment ranged from intellectual and economic matters to the specifically scientific as in the work of William Cullen, physician and chemist; James Anderson, an agronomist; Joseph Black, physicist and chemist; and James Hutton, the first modern geologist.J. Repcheck, The Man Who Found Time: James Hutton and the Discovery of the Earth's Antiquity (Basic Books, 2003), pp. 117–43.
In Italy, parts of society also dramatically changed during the Enlightenment, with rulers such as Leopold II of Tuscany abolishing the death penalty in Tuscany. The significant reduction in the Church's power led to a period of great thought and invention, with scientists such as Alessandro Volta and Luigi Galvani making new discoveries and greatly contributing to science.
In Russia, the government began to actively encourage the proliferation of arts and sciences in the mid-18th century. This era produced the first Russian university, library, theatre, public museum, and independent press. Like other enlightened despots, Catherine the Great played a key role in fostering the arts, sciences, and education. She used her own interpretation of Enlightenment ideals, assisted by notable international experts such as Voltaire (by correspondence) and, in residence, world-class scientists such as Leonhard Euler and Peter Simon Pallas. The national Enlightenment differed from its Western European counterpart in that it promoted further modernization of all aspects of Russian life and was concerned with attacking the institution of serfdom in Russia. The Russian Enlightenment centered on the individual instead of societal enlightenment and encouraged the living of an enlightened life.Elise Kimerling Wirtschafter, "Thoughts on the Enlightenment and Enlightenment in Russia", Modern Russian History & Historiography, 2009, Vol. 2 Issue 2, pp. 1–26
thumb|left|John Trumbull's Declaration of Independence shows the drafting committee presenting its work to the Congress
Several Americans, especially Benjamin Franklin and Thomas Jefferson, played a major role in bringing Enlightenment ideas to the New World and in influencing British and French thinkers.Henry F. May, The Enlightenment in America (1978) Franklin was influential for his political activism and for his advances in physics.Michael Atiyah, "Benjamin Franklin and the Edinburgh Enlightenment," Proceedings of the American Philosophical Society (Dec 2006) 150#4 pp. 591–606.Jack Fruchtman, Jr., Atlantic Cousins: Benjamin Franklin and His Visionary Friends (2007) The cultural exchange during the Age of Enlightenment ran in both directions across the Atlantic. Thinkers such as Paine, Locke, and Rousseau all took Native American cultural practices as examples of natural freedom.Charles C. Mann, 1491 (2005) The Americans closely followed English and Scottish political ideas, as well as some French thinkers such as Montesquieu.Paul M. Spurlin, Montesquieu in America, 1760–1801 (1941) As deists, they were influenced by ideas of John Toland (1670–1722) and Matthew Tindal (1656–1733). During the Enlightenment there was a great emphasis upon liberty, democracy, republicanism, and religious tolerance. Attempts to reconcile science and religion resulted in a widespread rejection of prophecy, miracle, and revealed religion in preference for Deism – especially by Thomas Paine in The Age of Reason and by Thomas Jefferson in his short Jefferson Bible – from which all supernatural aspects were removed.
Historiography
The Enlightenment has always been contested territory. Its supporters "hail it as the source of everything that is progressive about the modern world. For them, it stands for freedom of thought, rational inquiry, critical thinking, religious tolerance, political liberty, scientific achievement, the pursuit of happiness, and hope for the future."Keith Thomas, "The Great Fight Over the Enlightenment," The New York Review April 3, 2014 However, its detractors accuse it of 'shallow' rationalism, naïve optimism, unrealistic universalism, and moral darkness. From the start, conservative and clerical defenders of traditional religion attacked materialism and skepticism as evil forces that encouraged immorality. By 1794, they pointed to the Terror during the French Revolution as confirmation of their predictions. As the Enlightenment was ending, Romantic philosophers argued that excessive dependence on reason was a mistake perpetuated by the Enlightenment, because it disregarded the bonds of history, myth, faith and tradition that were necessary to hold society together.Thomas, 2014
Definition
The term "Enlightenment" emerged in English in the later part of the 19th century,Oxford English Dictionary, 3rd Edn (revised) with particular reference to French philosophy, as the equivalent of the French term 'Lumières' (used first by Dubos in 1733 and already well established by 1751). From Immanuel Kant's 1784 essay "Beantwortung der Frage: Was ist Aufklärung?" ("Answering the Question: What is Enlightenment?") the German term became 'Aufklärung' (aufklären = to illuminate; sich aufklären = to clear up). However, scholars have never agreed on a definition of the Enlightenment, or on its chronological or geographical extent. Terms like "les Lumières" (French), "illuminismo" (Italian), "ilustración" (Spanish) and "Aufklärung" (German) referred to partly overlapping movements. Not until the late nineteenth century did English scholars agree they were talking about "the Enlightenment."
thumb|If there is something you know, communicate it. If there is something you don't know, search for it.— An engraving from the 1772 edition of the Encyclopédie; Truth, in the top center, is surrounded by light and unveiled by the figures to the right, Philosophy and Reason.
Enlightenment historiography began in the period itself, from what Enlightenment figures said about their work. A dominant element was the intellectual angle they took. D'Alembert's Preliminary Discourse of l'Encyclopédie provides a history of the Enlightenment which comprises a chronological list of developments in the realm of knowledge – of which the Encyclopédie forms the pinnacle.Jean le Rond d'Alembert, Discours préliminaire de l'Encyclopédie In 1783, Jewish philosopher Moses Mendelssohn referred to Enlightenment as a process by which man was educated in the use of reason.Outram, 1. The past tense is used deliberately, as whether man would educate himself or be educated by certain exemplary figures was a common issue at the time. D'Alembert's introduction to l'Encyclopédie, for example, along with Immanuel Kant's essay response (the "independent thinkers"), both support the latter model. Immanuel Kant called Enlightenment "man's release from his self-incurred tutelage", tutelage being "man's inability to make use of his understanding without direction from another".Immanuel Kant, "What is Enlightenment?", 1. "For Kant, Enlightenment was mankind's final coming of age, the emancipation of the human consciousness from an immature state of ignorance." The German scholar Ernst Cassirer called the Enlightenment "a part and a special phase of that whole intellectual development through which modern philosophic thought gained its characteristic self-confidence and self-consciousness".Ernst Cassirer, The Philosophy of the Enlightenment, (1951), p. vi According to historian Roy Porter, the liberation of the human mind from a dogmatic state of ignorance is the epitome of what the Age of Enlightenment was trying to capture.
Bertrand Russell saw the Enlightenment as a phase in a progressive development, which began in antiquity, and that reason and challenges to the established order were constant ideals throughout that time.Russell, Bertrand. A History of Western Philosophy. pp. 492–94 Russell said that the Enlightenment was ultimately born out of the Protestant reaction against the Catholic counter-reformation, and that philosophical views such as affinity for democracy against monarchy originated among 16th-century Protestants to justify their desire to break away from the Catholic Church. Though many of these philosophical ideals were picked up by Catholics, Russell argues, by the 18th century the Enlightenment was the principal manifestation of the schism that began with Martin Luther.
Jonathan Israel rejects the attempts of postmodern and Marxian historians to understand the revolutionary ideas of the period purely as by-products of social and economic transformations. He instead focuses on the history of ideas in the period from 1650 to the end of the 18th century, and claims that it was the ideas themselves that caused the change that eventually led to the revolutions of the latter half of the 18th century and the early 19th century. Israel argues that until the 1650s Western civilization "was based on a largely shared core of faith, tradition and authority".
Time span
There is little consensus on the precise beginning of the Age of Enlightenment; the beginning of the 18th century (1701) or the middle of the 17th century (1650) are often used as starting points. French historians usually place the period, called the Siècle des Lumières (Century of Lights), between 1715 and 1789, from the beginning of the reign of Louis XV until the French Revolution. If taken back to the mid-17th century, the Enlightenment would trace its origins to Descartes' Discourse on Method, published in 1637. In France, many cited the publication of Isaac Newton's Principia Mathematica in 1687.J. B. Shank, The Newton Wars and the Beginning of the French Enlightenment (2008), "Introduction" It is argued by several historians and philosophers that the beginning of the Enlightenment is when Descartes shifted the epistemological basis from external authority to internal certainty by his cogito ergo sum published in 1637.Martin Heidegger [1938] (2002) The Age of the World Picture quotation: Ingraffia, Brian D. (1995) Postmodern theory and biblical theology: vanquishing God's shadow p. 126Norman K. Swazo (2002) Crisis theory and world order: Heideggerian reflections pp. 97–99 As to its end, most scholars use the last years of the century, often choosing the French Revolution of 1789 or the beginning of the Napoleonic Wars (1804–15) as a convenient point in time with which to date the end of the Enlightenment.
Modern study
In the 1944 book Dialectic of Enlightenment Frankfurt School philosophers Max Horkheimer and Theodor W. Adorno argued that:
Enlightenment, understood in the widest sense as the advance of thought, has always aimed at liberating human beings from fear and installing them as masters. Yet the wholly enlightened earth radiates under the sign of disaster triumphant.
In the 1970s, study of the Enlightenment expanded to include the ways Enlightenment ideas spread to European colonies and how they interacted with indigenous cultures, and how the Enlightenment took place in formerly unstudied areas such as Italy, Greece, the Balkans, Poland, Hungary, and Russia.Outram, 6. See also, A. Owen Alridge (ed.), The Ibero-American Enlightenment (1971)., Franco Venturi, The End of the Old Regime in Europe 1768–1776: The First Crisis.
Intellectuals such as Robert Darnton and Jürgen Habermas have focused on the social conditions of the Enlightenment. Habermas described the creation of the "bourgeois public sphere" in 18th-century Europe, containing the new venues and modes of communication allowing for rational exchange. Habermas said that the public sphere was bourgeois, egalitarian, rational, and independent from the state, making it the ideal venue for intellectuals to critically examine contemporary politics and society, away from the interference of established authority. While the public sphere is generally an integral component of the social study of the Enlightenment, other historians have questioned whether the public sphere had these characteristics.For example, Robert Darnton, Roger Chartier, Brian Cowan, Donna T. Andrew.
thumb|A medal minted during the reign of Joseph II, Holy Roman Emperor, commemorating his grant of religious liberty to Jews and Protestants in Hungary. Another important reform of Joseph II was the abolition of serfdom.
Society and culture
In contrast to the intellectual historiographical approach of the Enlightenment, which examines the various currents or discourses of intellectual thought within the European context during the 17th and 18th centuries, the cultural (or social) approach examines the changes that occurred in European society and culture. This approach studies the process of changing sociabilities and cultural practices during the Enlightenment.
One of the primary elements of the culture of the Enlightenment was the rise of the public sphere, a "realm of communication marked by new arenas of debate, more open and accessible forms of urban public space and sociability, and an explosion of print culture," in the late 17th century and 18th century.James Van Horn Melton, The Rise of the Public in Enlightenment Europe (2001), p. 4. The public sphere was characterized by several features: it was egalitarian, it discussed the domain of "common concern," and argument was founded on reason.Jürgen Habermas, The Structural Transformation of the Public Sphere, (1989), pp. 36, 37. Habermas uses the term "common concern" to describe those areas of political/social knowledge and discussion that were previously the exclusive territory of the state and religious authorities, now open to critical examination by the public sphere. The values of this bourgeois public sphere included holding reason to be supreme, considering everything to be open to criticism (the public sphere is critical), and the opposition of secrecy of all sorts.Melton, 8.
thumb|left|German explorer Alexander von Humboldt showed his disgust for slavery and often criticized the colonial policies. He always acted out of a deeply humanistic conviction, borne by the ideas of the Enlightenment.Nicolaas A. Rupke (2008). "Alexander Von Humboldt: A Metabiography". University of Chicago Press. p. 138 ISBN 0-226-73149-9
The creation of the public sphere has been associated with two long-term historical trends: the rise of the modern nation state and the rise of capitalism. The modern nation state, in its consolidation of public power, created by counterpoint a private realm of society independent of the state, which allowed for the public sphere. Capitalism also increased society's autonomy and self-awareness, and an increasing need for the exchange of information. As the nascent public sphere expanded, it embraced a large variety of institutions; the most commonly cited were coffee houses and cafés, salons and the literary public sphere, figuratively localized in the Republic of Letters.Melton, 4, 5. Habermas, 14–26. In France, the creation of the public sphere was helped by the aristocracy's move from the King's palace at Versailles to Paris in about 1720, since their rich spending stimulated the trade in luxuries and artistic creations, especially fine paintings.
The context for the rise of the public sphere was the economic and social change commonly associated with the Industrial Revolution: "economic expansion, increasing urbanization, rising population and improving communications in comparison to the stagnation of the previous century"."Outram, Dorinda. The Enlightenment (2nd ed.). Cambridge University Press, 2005, p. 12. Rising efficiency in production techniques and communication lowered the prices of consumer goods and increased the amount and variety of goods available to consumers (including the literature essential to the public sphere). Meanwhile, the colonial experience (most European states had colonial empires in the 18th century) began to expose European society to extremely heterogeneous cultures, leading to the breaking down of "barriers between cultural systems, religious divides, gender differences and geographical areas".Outram 2005, p. 13.
The word "public" implies the highest level of inclusivity – the public sphere by definition should be open to all. However, this sphere was only public to relative degrees. Enlightenment thinkers frequently contrasted their conception of the "public" with that of the people: Condorcet contrasted "opinion" with populace, Marmontel "the opinion of men of letters" with "the opinion of the multitude," and d'Alembert the "truly enlightened public" with "the blind and noisy multitude".Chartier, 27. Additionally, most institutions of the public sphere excluded both women and the lower classes.Mona Ozouf, "'Public Opinion' at the End of the Old Regime Cross-class influences occurred through noble and lower class participation in areas such as the coffeehouses and the Masonic lodges.
Social and cultural implications in the arts
Because of the focus on reason over superstition, the Enlightenment cultivated the arts.David Beard and Kenneth Gloag, Musicology, The Key Concepts (New York: Routledge, 2005), 58. Emphasis on learning, art and music became more widespread, especially with the growing middle class. Areas of study such as literature, philosophy, science, and the fine arts increasingly explored subject matter that the general public in addition to the previously more segregated professionals and patrons could relate to.J. Peter Burkholder, Donald J. Grout and Claude V. Palisca, A History of Western Music, Seventh Edition, (New York: W.W. Norton & Company, Inc., 2006), 475.
thumb|left|George Frideric Handel
As musicians depended more and more on public support, public concerts became increasingly popular and helped supplement performers' and composers' incomes. The concerts also helped them to reach a wider audience. Handel, for example, epitomized this with his highly public musical activities in London. He gained considerable fame there with performances of his operas and oratorios. The music of Haydn and Mozart, with their Viennese Classical styles, is usually regarded as being the most in line with the Enlightenment ideals.Beard and Gloag, Musicology, 59.
The desire to explore, record and systematize knowledge had a meaningful impact on music publications. Jean-Jacques Rousseau's Dictionnaire de musique (published 1767 in Geneva and 1768 in Paris) was a leading text in the late 18th century. This widely available dictionary gave short definitions of words like genius and taste, and was clearly influenced by the Enlightenment movement. Another text influenced by Enlightenment values was Charles Burney's A General History of Music: From the Earliest Ages to the Present Period (1776), which was a historical survey and an attempt to rationalize elements in music systematically over time.Beard and Gloag, Musicology, 60. Recently, musicologists have shown renewed interest in the ideas and consequences of the Enlightenment. For example, Rose Rosengard Subotnik's Deconstructive Variations (subtitled Music and Reason in Western Society) compares Mozart's Die Zauberflöte (1791) using the Enlightenment and Romantic perspectives, and concludes that the work is "an ideal musical representation of the Enlightenment".
As the economy and the middle class expanded, there was an increasing number of amateur musicians. One manifestation of this involved women, who became more involved with music on a social level. Women were already engaged in professional roles as singers, and increased their presence in the amateur performers' scene, especially with keyboard music.Burkholder, Grout and Palisca, A History of Western Music, 475. Music publishers began to print music that amateurs could understand and play. The majority of the works that were published were for keyboard, voice and keyboard, and chamber ensemble. After these initial genres were popularized, from the mid-century on, amateur groups sang choral music, which then became a new trend for publishers to capitalize on. The increasing study of the fine arts, as well as access to amateur-friendly published works, led to more people becoming interested in reading and discussing music. Music magazines, reviews, and critical works which suited amateurs as well as connoisseurs began to surface.
Dissemination of ideas
The philosophes spent a great deal of energy disseminating their ideas among educated men and women in cosmopolitan cities. They used many venues, some of them quite new.
The Republic of Letters
thumb|left|upright|French philosopher Pierre Bayle
The term "Republic of Letters" was coined by Pierre Bayle in 1664, in his journal Nouvelles de la Republique des Lettres. Towards the end of the 18th century, the editor of Histoire de la République des Lettres en France, a literary survey, described the Republic of Letters as being:
In the midst of all the governments that decide the fate of men; in the bosom of so many states, the majority of them despotic ... there exists a certain realm which holds sway only over the mind ... that we honour with the name Republic, because it preserves a measure of independence, and because it is almost its essence to be free. It is the realm of talent and of thought.
The Republic of Letters was the sum of a number of Enlightenment ideals: an egalitarian realm governed by knowledge that could act across political boundaries and rival state power. It was a forum that supported "free public examination of questions regarding religion or legislation".Chartier, 26. Immanuel Kant considered written communication essential to his conception of the public sphere; once everyone was a part of the "reading public", then society could be said to be enlightened.Chartier, 26, 26. Kant, "What is Enlightenment?" The people who participated in the Republic of Letters, such as Diderot and Voltaire, are frequently known today as important Enlightenment figures. Indeed, the men who wrote Diderot's Encyclopédie arguably formed a microcosm of the larger "republic".Outram, 23.
Many women played an essential part in the French Enlightenment, due to the role they played as salonnières in Parisian salons, as the contrast to the male philosophes. The salon was the principal social institution of the republic,Goodman, 3. and "became the civil working spaces of the project of Enlightenment." Women, as salonnières, were "the legitimate governors of [the] potentially unruly discourse" that took place within.Dena Goodman, The Republic of Letters: A Cultural History of the French Enlightenment (1994), 53. While women were marginalized in the public culture of the Ancien Régime, the French Revolution destroyed the old cultural and economic restraints of patronage and corporatism (guilds), opening French society to female participation, particularly in the literary sphere.Carla Hesse, The Other Enlightenment: How French Women Became Modern (2001), 42.
thumb|Front page of The Gentleman's Magazine, January 1731
In France, the established men of letters (gens de lettres) had fused with the elites (les grands) of French society by the mid-18th century. This led to the creation of an oppositional literary sphere, Grub Street, the domain of a "multitude of versifiers and would-be authors".Crébillon fils, quoted from Darnton, The Literary Underground, 17. These men came to Paris to become authors, only to discover that the literary market simply could not support large numbers of writers, who, in any case, were very poorly remunerated by the publishing-bookselling guilds.Darnton, The Literary Underground, 19, 20.
The writers of Grub Street, the Grub Street Hacks, were left feeling bitter about the relative success of the men of letters,Darnton, "The Literary Underground", 21, 23. and found an outlet for their literature which was typified by the libelle. Written mostly in the form of pamphlets, the libelles "slandered the court, the Church, the aristocracy, the academies, the salons, everything elevated and respectable, including the monarchy itself".Darnton, The Literary Underground, 29 Le Gazetier cuirassé by Charles Théveneau de Morande was a prototype of the genre. It was Grub Street literature that was most read by the public during the Enlightenment.Outram, 22. More importantly, according to Darnton, the Grub Street hacks inherited the "revolutionary spirit" once displayed by the philosophes, and paved the way for the French Revolution by desacralizing figures of political, moral and religious authority in France.Darnton, The Literary Underground, 35–40.
The book industry
thumb|ESTC data 1477–1799 by decade given with a regional differentiation.
The increased consumption of reading materials of all sorts was one of the key features of the "social" Enlightenment. Developments in the Industrial Revolution allowed consumer goods to be produced in greater quantities at lower prices, encouraging the spread of books, pamphlets, newspapers and journals – "media of the transmission of ideas and attitudes". Commercial development likewise increased the demand for information, along with rising populations and increased urbanisation.Outram, 17, 20. However, demand for reading material extended outside of the realm of the commercial, and outside the realm of the upper and middle classes, as evidenced by the Bibliothèque Bleue. Literacy rates are difficult to gauge, but in France at least, the rates doubled over the course of the 18th century.Darnton, "The Literary Underground", 16. Reflecting the decreasing influence of religion, the number of books about science and art published in Paris doubled from 1720 to 1780, while the number of books about religion dropped to just one-tenth of the total.
Reading underwent serious changes in the 18th century. In particular, Rolf Engelsing has argued for the existence of a Reading Revolution. Until 1750, reading was done "intensively": people tended to own a small number of books and read them repeatedly, often to a small audience. After 1750, people began to read "extensively", finding as many books as they could, increasingly reading them alone.from Outram, 19. See Rolf Engelsing, "Die Perioden der Lesergeschichte in der Neuzeit. Das statistische Ausmass und die soziokulturelle Bedeutung der Lektüre", Archiv für Geschichte des Buchwesens, 10 (1969), cols. 944–1002 and Der Bürger als Leser: Lesergeschichte in Deutschland, 1500–1800 (Stuttgart, 1974). This is supported by increasing literacy rates, particularly among women.
The vast majority of the reading public could not afford to own a private library, and while most of the state-run "universal libraries" set up in the 17th and 18th centuries were open to the public, they were not the only sources of reading material. On one end of the spectrum was the Bibliothèque Bleue, a collection of cheaply produced books published in Troyes, France. Intended for a largely rural and semi-literate audience these books included almanacs, retellings of medieval romances and condensed versions of popular novels, among other things. While some historians have argued against the Enlightenment's penetration into the lower classes, the Bibliothèque Bleue represents at least a desire to participate in Enlightenment sociability.Outram, 27–29 Moving up the classes, a variety of institutions offered readers access to material without needing to buy anything. Libraries that lent out their material for a small price started to appear, and occasionally bookstores would offer a small lending library to their patrons. Coffee houses commonly offered books, journals and sometimes even popular novels to their customers. The Tatler and The Spectator, two influential periodicals sold from 1709 to 1714, were closely associated with coffee house culture in London, being both read and produced in various establishments in the city.Erin Mackie, The Commerce of Everyday Life: Selections from The Tatler and The Spectator (Boston: Bedford/St. Martin's, 1998), 16. This is an example of the triple or even quadruple function of the coffee house: reading material was often obtained, read, discussed and even produced on the premises.See Mackie, Darnton, An Early Information Society
thumb|upright|Denis Diderot is best known as the editor of the Encyclopédie.
It is extremely difficult to determine what people actually read during the Enlightenment. For example, examining the catalogs of private libraries gives an image skewed in favor of the classes wealthy enough to afford libraries, and also ignores censored works unlikely to be publicly acknowledged. For this reason, a study of publishing would be much more fruitful for discerning reading habits.In particular, see Chapter 6, "Reading, Writing and Publishing"
Across continental Europe, but in France especially, booksellers and publishers had to negotiate censorship laws of varying strictness. The Encyclopédie, for example, narrowly escaped seizure and had to be saved by Malesherbes, the man in charge of French censorship. Indeed, many publishing companies were conveniently located outside France so as to avoid overzealous French censors. They would smuggle their merchandise across the border, where it would then be transported to clandestine booksellers or small-time peddlers.See Darnton, The Literary Underground, 184. The records of clandestine booksellers may give a better representation of what literate Frenchmen might have truly read, since their clandestine nature provided a less restrictive product choice.Darnton, The Literary Underground, 135–47. In one case, political books were the most popular category, primarily libels and pamphlets. Readers were more interested in sensationalist stories about criminals and political corruption than they were in political theory itself. The second most popular category, "general works" (those books "that did not have a dominant motif and that contained something to offend almost everyone in authority") demonstrated a high demand for generally low-brow subversive literature. However, these works never became part of the literary canon, and are largely forgotten today as a result.
A healthy, and legal, publishing industry existed throughout Europe, although established publishers and booksellers occasionally ran afoul of the law. The Encyclopédie, for example, condemned not only by the King but also by Pope Clement XIII, nevertheless found its way into print with the help of the aforementioned Malesherbes and creative use of French censorship law.Darnton, The Business of Enlightenment, 12, 13. For a more detailed description of French censorship laws, see Darnton, The Literary Underground. But many works were sold without running into any legal trouble at all. Borrowing records from libraries in England, Germany and North America indicate that more than 70 percent of books borrowed were novels. Less than 1 percent of the books were of a religious nature, indicating the general trend of declining religiosity.Outram, 21.
Natural history
thumb|upright|left|Georges Buffon is best remembered for his Histoire naturelle, a 44-volume encyclopedia describing everything known about the natural world.
A genre that greatly rose in importance was that of scientific literature. Natural history in particular became increasingly popular among the upper classes. Works of natural history include René-Antoine Ferchault de Réaumur's Histoire naturelle des insectes and Jacques Gautier d'Agoty's La Myologie complète, ou description de tous les muscles du corps humain (1746). Outside ancien régime France, natural history was an important part of medicine and industry, encompassing the fields of botany, zoology, meteorology, hydrology and mineralogy. Students in Enlightenment universities and academies were taught these subjects to prepare them for careers as diverse as medicine and theology. As shown by M D Eddy, natural history in this context was a very middle class pursuit and operated as a fertile trading zone for the interdisciplinary exchange of diverse scientific ideas.
The target audience of natural history was French polite society, evidenced more by the specific discourse of the genre than by the generally high prices of its works. Naturalists catered to polite society's desire for erudition – many texts had an explicit instructive purpose. However, natural history was often a political affair. As E. C. Spary writes, the classifications used by naturalists "slipped between the natural world and the social ... to establish not only the expertise of the naturalists over the natural, but also the dominance of the natural over the social".Emma Spary, "The 'Nature' of Enlightenment" in The Sciences in Enlightened Europe, William Clark, Jan Golinski, and Steven Schaffer, eds. (Chicago: University of Chicago Press, 1999), 281–82. The idea of taste (le goût) was a social indicator: to truly be able to categorize nature, one had to have the proper taste, an ability of discretion shared by all members of polite society. In this way natural history spread many of the scientific developments of the time, but also provided a new source of legitimacy for the dominant class.Spary, 289–93. From this basis, naturalists could then develop their own social ideals based on their scientific works.See Thomas Laqueur, Making sex: body and gender from the Greeks to Freud (1990).
Scientific and literary journals
thumb|upright|Journal des sçavans was the earliest academic journal published in Europe
The first scientific and literary journals were established during the Enlightenment. The first journal, the Parisian Journal des Sçavans, appeared in 1665. However, it was not until 1682 that periodicals began to be more widely produced. French and Latin were the dominant languages of publication, but there was also a steady demand for material in German and Dutch. There was generally low demand for English publications on the Continent, which was echoed by England's similar lack of desire for French works. Languages commanding less of an international market—such as Danish, Spanish and Portuguese—found journal success more difficult, and more often than not, a more international language was used instead. French slowly took over Latin's status as the lingua franca of learned circles. This in turn gave precedence to the publishing industry in Holland, where the vast majority of these French language periodicals were produced.
Jonathan Israel called the journals the most influential cultural innovation of European intellectual culture. They shifted the attention of the "cultivated public" away from established authorities to novelty and innovation, and promoted the "enlightened" ideals of toleration and intellectual objectivity. Being a source of knowledge derived from science and reason, they were an implicit critique of existing notions of universal truth monopolized by monarchies, parliaments, and religious authorities. They also advanced Christian enlightenment that upheld "the legitimacy of God-ordained authority"—the Bible—in which there had to be agreement between the biblical and natural theories.
Encyclopedias and dictionaries
thumb|upright|First page of the Encyclopédie, published between 1751 and 1766
Although dictionaries and encyclopedias have existed since ancient times, the texts changed from simply defining words in a long running list to far more detailed discussions of those words in 18th-century encyclopedic dictionaries.Headrick, (2000), p. 144. The works were part of an Enlightenment movement to systematize knowledge and provide education to a wider audience than the elite. As the 18th century progressed, the content of encyclopedias also changed according to readers' tastes. Volumes tended to focus more strongly on secular affairs, particularly science and technology, rather than matters of theology.
Along with secular matters, readers also favoured an alphabetical ordering scheme over cumbersome works arranged along thematic lines.Headrick, (2000), p. 172. The historian Charles Porset, commenting on alphabetization, has said that "as the zero degree of taxonomy, alphabetical order authorizes all reading strategies; in this respect it could be considered an emblem of the Enlightenment." For Porset, the avoidance of thematic and hierarchical systems thus allows free interpretation of the works and becomes an example of egalitarianism.Porter, (2003), pp. 249–50. Encyclopedias and dictionaries also became more popular during the Age of Enlightenment as the number of educated consumers who could afford such texts began to multiply. In the latter half of the 18th century, the number of dictionaries and encyclopedias published by decade increased from 63 between 1760 and 1769 to approximately 148 in the decade preceding the French Revolution (1780–1789).Headrick, (2000), p. 168. Along with growth in numbers, dictionaries and encyclopedias also grew in length, often having multiple print runs that sometimes included supplemented editions.
The first technical dictionary was drafted by John Harris and entitled Lexicon Technicum: Or, An Universal English Dictionary of Arts and Sciences. Harris' book avoided theological and biographical entries; instead it concentrated on science and technology. Published in 1704, the Lexicon Technicum was the first book to be written in English that took a methodical approach to describing mathematics and commercial arithmetic along with the physical sciences and navigation. Other technical dictionaries followed Harris' model, including Ephraim Chambers' Cyclopaedia (1728), which went through five editions and was a substantially larger work than Harris'. The folio edition of the work even included foldout engravings. The Cyclopaedia emphasized Newtonian theories, Lockean philosophy, and contained thorough examinations of technologies, such as engraving, brewing, and dyeing.
upright|thumb|left|"Figurative system of human knowledge", the structure into which the Encyclopédie organised knowledge. It had three main branches: memory, reason, and imagination.
In Germany, practical reference works intended for the uneducated majority became popular in the 18th century. The Marperger Curieuses Natur-, Kunst-, Berg-, Gewerck- und Handlungs-Lexicon (1712) explained terms that usefully described the trades and scientific and commercial education. Jablonski's Allgemeines Lexicon (1721) was better known than the Handlungs-Lexicon, and underscored technical subjects rather than scientific theory. For example, over five columns of text were dedicated to wine, while geometry and logic were allocated only twenty-two and seventeen lines, respectively. The first edition of the Encyclopædia Britannica (1771) was modelled along the same lines as the German lexicons.Headrick, (2000), pp. 150–52.
However, the prime examples of reference works that systematized scientific knowledge in the Age of Enlightenment were universal encyclopedias rather than technical dictionaries. It was the goal of universal encyclopedias to record all human knowledge in a comprehensive reference work.Headrick, (2000), p. 153. The most well-known of these works is Denis Diderot and Jean le Rond d'Alembert's Encyclopédie, ou dictionnaire raisonné des sciences, des arts et des métiers. The work, which began publication in 1751, was composed of thirty-five volumes and over 71 000 separate entries. A great number of the entries were dedicated to describing the sciences and crafts in detail, and provided intellectuals across Europe with a high-quality survey of human knowledge. D'Alembert's Preliminary Discourse to the Encyclopedia of Diderot outlines the work's goal of recording the extent of human knowledge in the arts and sciences.
The massive work was arranged according to a "tree of knowledge." The tree reflected the marked division between the arts and sciences, which was largely a result of the rise of empiricism. Both areas of knowledge were united by philosophy, or the trunk of the tree of knowledge. The Enlightenment's desacralization of religion was pronounced in the tree's design, particularly where theology accounted for a peripheral branch, with black magic as a close neighbour.Darnton, (1979), p. 7. As the Encyclopédie gained popularity, it was published in quarto and octavo editions after 1777. The quarto and octavo editions were much less expensive than previous editions, making the Encyclopédie more accessible to the non-elite. Robert Darnton estimates that there were approximately 25 000 copies of the Encyclopédie in circulation throughout France and Europe before the French Revolution.Darnton, (1979), p. 37. The extensive, yet affordable encyclopedia came to represent the transmission of Enlightenment and scientific education to an expanding audience.Darnton, (1979), p. 6.
Popularization of science
One of the most important developments that the Enlightenment era brought to the discipline of science was its popularization. An increasingly literate population seeking knowledge and education in both the arts and the sciences drove the expansion of print culture and the dissemination of scientific learning. The new literate population was due to a sharp rise in the availability of food. This enabled many people to rise out of poverty, and instead of paying more for food, they had money for education.Jacob, (1988), p. 191; Melton, (2001), pp. 82–83 Popularization was generally part of an overarching Enlightenment ideal that endeavoured "to make information available to the greatest number of people."Headrick, (2000), p. 15 As public interest in natural philosophy grew during the 18th century, public lecture courses and the publication of popular texts opened up new roads to money and fame for amateurs and scientists who remained on the periphery of universities and academies.Headrick, (2000), p. 19. More formal works included explanations of scientific theories for individuals lacking the educational background to comprehend the original scientific text. Sir Isaac Newton's celebrated Philosophiae Naturalis Principia Mathematica was published in Latin and remained inaccessible to readers without education in the classics until Enlightenment writers began to translate and analyze the text in the vernacular.
thumb|left|A portrait of Bernard de Fontenelle.
The first significant work that expressed scientific theory and knowledge expressly for the laity, in the vernacular, and with the entertainment of readers in mind, was Bernard de Fontenelle's Conversations on the Plurality of Worlds (1686). The book was produced specifically for women with an interest in scientific writing and inspired a variety of similar works.Phillips, (1991), pp. 85, 90 These popular works were written in a discursive style, which was laid out much more clearly for the reader than the complicated articles, treatises, and books published by the academies and scientists. Charles Leadbetter's Astronomy (1727) was advertised as "a Work entirely New" that would include "short and easie Rules and Astronomical Tables."Phillips, (1991), p. 90. The first French introduction to Newtonianism and the Principia was Eléments de la philosophie de Newton, published by Voltaire in 1738.Porter, (2003), p. 300. Émilie du Châtelet's translation of the Principia, published after her death in 1756, also helped to spread Newton's theories beyond scientific academies and the university.Porter, (2003), p. 101. Francesco Algarotti, writing for a growing female audience, published Il Newtonianismo per le dame, which was a tremendously popular work and was translated from Italian into English by Elizabeth Carter. A similar introduction to Newtonianism for women was produced by Henry Pemberton. His A View of Sir Isaac Newton's Philosophy was published by subscription. Extant records of subscribers show that women from a wide range of social standings purchased the book, indicating the growing number of scientifically inclined female readers among the middling class.Phillips, (1991), p. 92. During the Enlightenment, women also began producing popular scientific works themselves. Sarah Trimmer wrote a successful natural history textbook for children titled The Easy Introduction to the Knowledge of Nature (1782), which went on to appear in eleven editions over the following years.Phillips, (1991), p. 107.
Schools and universities
Most work on the Enlightenment emphasizes the ideals discussed by intellectuals, rather than the actual state of education at the time. Leading educational theorists like England's John Locke and Switzerland's Jean Jacques Rousseau both emphasized the importance of shaping young minds early. By the late Enlightenment, there was a rising demand for a more universal approach to education, particularly after the American and French Revolutions.
The predominant educational psychology from the 1750s onward, especially in northern European countries, was associationism, the notion that the mind associates or dissociates ideas through repeated routines. In addition to being conducive to Enlightenment ideologies of liberty, self-determination and personal responsibility, it offered a practical theory of the mind that allowed teachers to transform longstanding forms of print and manuscript culture into effective graphic tools of learning for the lower and middle orders of society. Children were taught to memorize facts through oral and graphic methods that originated during the Renaissance.
Many of the leading universities associated with Enlightenment progressive principles were located in northern Europe, with the most renowned being the universities of Leiden, Göttingen, Halle, Montpellier, Uppsala and Edinburgh. These universities, especially Edinburgh, produced professors whose ideas had a significant impact on Britain's North American colonies and, later, the American Republic. Within the natural sciences, Edinburgh's medical school also led the way in chemistry, anatomy and pharmacology. Elsewhere, the universities and schools of France and most of the rest of Europe were bastions of traditionalism and were not hospitable to the Enlightenment. In France, the major exception was the medical university at Montpellier.Elizabeth Williams, A Cultural History of Medical Vitalism in Enlightenment Montpellier (2003) p. 50
Learned academies
thumb|Louis XIV visiting the Académie des sciences in 1671. "It is widely accepted that 'modern science' arose in the Europe of the 17th century, introducing a new understanding of the natural world." —Peter BarrettPeter Barrett (2004), Science and Theology Since Copernicus: The Search for Understanding, p. 14, Continuum International Publishing Group, ISBN 0-567-08969-X
The history of Academies in France during the Enlightenment begins with the Academy of Sciences, founded in 1666 in Paris. It was closely tied to the French state, acting as an extension of a government seriously lacking in scientists. It helped promote and organize new disciplines, and it trained new scientists. It also contributed to the enhancement of scientists' social status, considering them to be the "most useful of all citizens". Academies demonstrate the rising interest in science along with its increasing secularization, as evidenced by the small number of clerics who were members (13 percent).Daniel Roche, France in the Enlightenment, (1998), 420. The presence of the French academies in the public sphere cannot be attributed to their membership; although the majority of their members were bourgeois, the exclusive institution was only open to elite Parisian scholars. They perceived themselves as "interpreters of the sciences for the people". For example, it was with this in mind that academicians took it upon themselves to disprove the popular pseudo-science of mesmerism.Roche, 515, 516.
The strongest contribution of the French Academies to the public sphere comes from the concours académiques (roughly translated as 'academic contests') they sponsored throughout France. These academic contests were perhaps the most public of any institution during the Enlightenment.Caradonna JL. Annales, "Prendre part au siècle des Lumières: Le concours académique et la culture intellectuelle au XVIIIe siècle" The practice of contests dated back to the Middle Ages, and was revived in the mid-17th century. The subject matter had previously been generally religious and/or monarchical, featuring essays, poetry, and painting. By roughly 1725, however, this subject matter had radically expanded and diversified, including "royal propaganda, philosophical battles, and critical ruminations on the social and political institutions of the Old Regime." Topics of public controversy were also discussed such as the theories of Newton and Descartes, the slave trade, women's education, and justice in France.Jeremy L. Caradonna, "Prendre part au siècle des Lumières: Le concours académique et la culture intellectuelle au XVIIIe siècle", Annales. Histoire, Sciences sociales, vol.64 (mai-juin 2009), n.3, 633–62.
thumb|Antoine Lavoisier conducting an experiment related to combustion generated by amplified sunlight.
More importantly, the contests were open to all, and the enforced anonymity of each submission guaranteed that neither gender nor social rank would determine the judging. Indeed, although the "vast majority" of participants belonged to the wealthier strata of society ("the liberal arts, the clergy, the judiciary, and the medical profession"), there were some cases of the popular classes submitting essays, and even winning.Caradonna, 634–36. Similarly, a significant number of women participated in—and won—the competitions. Of a total of 2300 prize competitions offered in France, women won 49—perhaps a small number by modern standards, but very significant in an age in which most women did not have any academic training. Indeed, the majority of the winning entries were for poetry competitions, a genre commonly stressed in women's education.Caradonna, 653–54.
In England, the Royal Society of London also played a significant role in the public sphere and the spread of Enlightenment ideas. It was founded by a group of independent scientists and given a royal charter in 1662. The Society played a large role in spreading Robert Boyle's experimental philosophy around Europe, and acted as a clearinghouse for intellectual correspondence and exchange.Steven Shapin, A Social History of Truth: Civility and Science in Seventeenth-Century England, Chicago; London: University of Chicago Press, 1994. Boyle was "a founder of the experimental world in which scientists now live and operate," and his method based knowledge on experimentation, which had to be witnessed to provide proper empirical legitimacy. This is where the Royal Society came into play: witnessing had to be a "collective act", and the Royal Society's assembly rooms were ideal locations for relatively public demonstrations.Steven Shapin and Simon Schaffer, Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life (Princeton: Princeton University Press, 1985), 5, 56, 57. This same desire for multiple witnesses led to attempts at replication in other locations and a complex iconography and literary technology developed to provide visual and written proof of experimentation. See pp. 59–65. However, not just any witness was considered to be credible; "Oxford professors were accounted more reliable witnesses than Oxfordshire peasants." Two factors were taken into account: a witness's knowledge in the area; and a witness's "moral constitution". In other words, only members of civil society were considered credible witnesses for Boyle's public.Shapin and Schaffer, 58, 59.
Salons
Coffeehouses
Coffeehouses were especially important to the spread of knowledge during the Enlightenment because they created a unique environment in which people from many different walks of life gathered and shared ideas. They were frequently criticized by nobles who feared the possibility of an environment in which class and its accompanying titles and privileges were disregarded. Such an environment was especially intimidating to monarchs who derived much of their power from the disparity between classes of people. If classes were to join together under the influence of Enlightenment thinking, they might recognize the all-encompassing oppression and abuses of their monarchs and, because of their size, might be able to carry out successful revolts. Monarchs also resented the idea of their subjects convening as one to discuss political matters, especially those concerning foreign affairs; rulers thought political affairs to be their business only, a result of their supposed divine right to rule.Klein, Lawrence E. "Coffeehouse Civility, 1660–1714: An Aspect of Post-Courtly Culture in England." Huntington Library Quarterly 59.1 (1996): 30–51.
Coffeehouses represent a turning point in history during which people discovered that they could have enjoyable social lives within their communities. Coffeeshops became homes away from home for many who sought, for the first time, to engage in discourse with their neighbors and discuss intriguing and thought-provoking matters, ranging from philosophy to politics. Coffeehouses were essential to the Enlightenment, for they were centers of free-thinking and self-discovery. Although many coffeehouse patrons were scholars, a great many were not. Coffeehouses attracted a diverse set of people, including not only the educated wealthy but also members of the bourgeoisie and the lower class. While it may seem positive that patrons, who included doctors, lawyers, and merchants, represented almost all classes, the coffeeshop environment sparked fear in those who sought to preserve class distinction. One of the most popular critiques of the coffeehouse claimed that it "allowed promiscuous association among people from different rungs of the social ladder, from the artisan to the aristocrat" and was therefore compared to Noah's Ark, receiving all types of animals, clean or unclean.Klein, 35. This unique culture served as a catalyst for journalism when Joseph Addison and Richard Steele recognized its potential as an audience. Together, Steele and Addison published The Spectator (1711), a daily publication which aimed, through fictional narrator Mr. Spectator, both to entertain and to provoke discussion regarding serious philosophical matters.
The first English coffeehouse opened in Oxford in 1650. Brian Cowan said that Oxford coffeehouses developed into "penny universities", offering a locus of learning that was less formal than structured institutions. These penny universities occupied a significant position in Oxford academic life, as they were frequented by those consequently referred to as the "virtuosi", who conducted some of their research on the premises. According to Cowan, "the coffeehouse was a place for like-minded scholars to congregate, to read, as well as learn from and to debate with each other, but was emphatically not a university institution, and the discourse there was of a far different order than any university tutorial."Cowan, 90, 91.
The Café Procope was established in Paris in 1686; by the 1720s there were around 400 cafés in the city. The Café Procope in particular became a center of Enlightenment, welcoming such celebrities as Voltaire and Rousseau. The Café Procope was where Diderot and D'Alembert decided to create the Encyclopédie.Colin Jones, Paris: Biography of a City (New York: Viking, 2004), 188, 189. The cafés were one of the various "nerve centers" for bruits publics, public noise or rumour. These bruits were allegedly a much better source of information than were the actual newspapers available at the time.
Debating societies
The debating societies are an example of the public sphere during the Enlightenment.Donna T. Andrew, "Popular Culture and Public Debate: London 1780", The Historical Journal, Vol. 39, No. 2. (June 1996), pp. 405–423. Their origins include:
Clubs of fifty or more men who, at the beginning of the 18th century, met in pubs to discuss religious issues and affairs of state.
Mooting clubs, set up by law students to practice rhetoric.
Spouting clubs, established to help actors train for theatrical roles.
John Henley's Oratory, which mixed outrageous sermons with even more absurd questions, like "Whether Scotland be anywhere in the world?"Andrew, 406. Andrew gives the name as "William Henley", which must be a lapse of writing.
thumb|An example of a French salon
In the late 1770s, popular debating societies began to move into more "genteel" rooms, a change which helped establish a new standard of sociability.Andrew, 408. The backdrop to these developments was "an explosion of interest in the theory and practice of public elocution". The debating societies were commercial enterprises that responded to this demand, sometimes very successfully. Some societies welcomed from 800 to 1200 spectators a night.Andrew, 406–08, 411.
The debating societies discussed an extremely wide range of topics. Before the Enlightenment, most intellectual debates revolved around "confessional" – that is, Catholic, Lutheran, Reformed (Calvinist), or Anglican – issues, and the main aim of these debates was to establish which bloc of faith ought to have the "monopoly of truth and a God-given title to authority". With the Enlightenment, everything previously rooted in tradition was questioned and often replaced by new concepts in the light of philosophical reason. After the second half of the 17th century and during the 18th century, a "general process of rationalization and secularization set in," and confessional disputes were reduced to a secondary status in favor of the "escalating contest between faith and incredulity".
In addition to debates on religion, societies discussed issues such as politics and the role of women. It is important to note, however, that the critical subject matter of these debates did not necessarily translate into opposition to the government. In other words, the results of the debate quite frequently upheld the status quo.Andrew, 412–15. From a historical standpoint, one of the most important features of the debating societies was their openness to the public; women attended and even participated in almost every debating society, and the societies were likewise open to all classes provided they could pay the entrance fee. Once inside, spectators were able to participate in a largely egalitarian form of sociability that helped spread Enlightenment ideas.Andrew, 422.
Masonic lodges
thumb|Masonic initiation ceremony
Historians have long debated the extent to which the secret network of Freemasonry was a main factor in the Enlightenment. The leaders of the Enlightenment included Freemasons such as Diderot, Montesquieu, Voltaire, Lessing, Pope,Maynard Mack, Alexander Pope: A Life, Yale University Press, 1985 pages 437-440. Pope, a Catholic, was a Freemason in 1730, eight years before membership was prohibited by the Catholic Church (1738). Pope's name is on the membership list of the Goat Tavern Lodge (p. 439). Pope's name appears on a 1723 list and a 1730 list. Horace Walpole, Sir Robert Walpole, Mozart, Goethe, Frederick the Great, Benjamin Franklin, and George Washington. Norman Davies said that Freemasonry was a powerful force on behalf of Liberalism in Europe, from about 1700 to the twentieth century. It expanded rapidly during the Age of Enlightenment, reaching practically every country in Europe. It was especially attractive to powerful aristocrats and politicians as well as intellectuals, artists and political activists.Norman Davies, Europe: A History (1996) pp. 634–35
During the Age of Enlightenment, Freemasons comprised an international network of like-minded men, often meeting in secret in ritualistic programs at their lodges. They promoted the ideals of the Enlightenment, and helped diffuse these values across Britain and France and other places. Freemasonry as a systematic creed with its own myths, values and set of rituals originated in Scotland around 1600 and spread first to England and then across the Continent in the eighteenth century. They fostered new codes of conduct—including a communal understanding of liberty and equality inherited from guild sociability—"liberty, fraternity, and equality"Margaret C. Jacob's seminal work on Enlightenment freemasonry, Margaret C. Jacob, Living the Enlightenment: Freemasonry and Politics in Eighteenth-Century Europe (Oxford University Press, 1991) p. 49. Scottish soldiers and Jacobite Scots brought to the Continent ideals of fraternity which reflected not the local system of Scottish customs but the institutions and ideals originating in the English Revolution against royal absolutism.Margaret C. Jacob, "Polite worlds of Enlightenment," in Martin Fitzpatrick and Peter Jones, eds. The Enlightenment World (Routledge, 2004) pp. 272–87. Freemasonry was particularly prevalent in France—by 1789, there were perhaps as many as 100,000 French Masons, making Freemasonry the most popular of all Enlightenment associations.Roche, 436. The Freemasons displayed a passion for secrecy and created new degrees and ceremonies. Similar societies, partially imitating Freemasonry, emerged in France, Germany, Sweden and Russia. One example was the "Illuminati" founded in Bavaria in 1776, which was modeled on the Freemasons but was never part of the movement. The Illuminati was an overtly political group, which most Masonic lodges decidedly were not.Fitzpatrick and Jones, eds. The Enlightenment World p. 281
Masonic lodges created a private model for public affairs. They "reconstituted the polity and established a constitutional form of self-government, complete with constitutions and laws, elections and representatives." In other words, the micro-society set up within the lodges constituted a normative model for society as a whole. This was especially true on the Continent: when the first lodges began to appear in the 1730s, their embodiment of British values was often seen as threatening by state authorities. For example, the Parisian lodge that met in the mid 1720s was composed of English Jacobite exiles.Jacob, pp. 20, 73, 89. Furthermore, freemasons all across Europe explicitly linked themselves to the Enlightenment as a whole. In French lodges, for example, the line "As the means to be enlightened I search for the enlightened" was a part of their initiation rites. British lodges assigned themselves the duty to "initiate the unenlightened". This did not necessarily link lodges to the irreligious, but neither did this exclude them from the occasional heresy. In fact, many lodges praised the Grand Architect, the masonic terminology for the deistic divine being who created a scientifically ordered universe.Jacob, 145–47.
German historian Reinhart Koselleck claimed that "On the Continent there were two social structures that left a decisive imprint on the Age of Enlightenment: the Republic of Letters and the Masonic lodges."Reinhart Koselleck, Critique and Crisis, p. 62, (The MIT Press, 1988) Scottish professor Thomas Munck argues that "although the Masons did promote international and cross-social contacts which were essentially non-religious and broadly in agreement with enlightened values, they can hardly be described as a major radical or reformist network in their own right."Thomas Munck, 1994, p. 70. Many Masonic values greatly appealed to Enlightenment thinkers. Diderot discusses the link between Freemason ideals and the Enlightenment in D'Alembert's Dream, exploring masonry as a way of spreading Enlightenment beliefs. Historian Margaret Jacob stresses the importance of the Masons in indirectly inspiring enlightened political thought.Margaret C. Jacob, Living the Enlightenment: Freemasonry and politics in eighteenth-century Europe (Oxford University Press, 1991.) On the negative side, Daniel Roche contests claims that Masonry promoted egalitarianism. He argues that the lodges only attracted men of similar social backgrounds.Roche, 437. The presence of noble women in the French "lodges of adoption" that formed in the 1780s was largely due to the close ties shared between these lodges and aristocratic society.Jacob, 139. See also Janet M. Burke, "Freemasonry, Friendship and Noblewomen: The Role of the Secret Society in Bringing Enlightenment Thought to Pre-Revolutionary Women Elites", History of European Ideas 10 no. 3 (1989): 283–94.
The major opponent of Freemasonry was the Roman Catholic Church, so that in countries with a large Catholic element, such as France, Italy, Spain, and Mexico, much of the ferocity of the political battles involved the confrontation between what Davies calls the reactionary Church and enlightened Freemasonry.Davies, Europe: A History (1996) pp. 634–35Richard Weisberger et al., eds., Freemasonry on both sides of the Atlantic: essays concerning the craft in the British Isles, Europe, the United States, and Mexico (2002) Even in France, Masons did not act as a group.Robert R. Palmer, The Age of the Democratic Revolution: The struggle (1970) p. 53 American historians, while noting that Benjamin Franklin and George Washington were indeed active Masons, have downplayed the importance of Freemasonry in causing the American Revolution because the Masonic order was non-political and included both Patriots and their enemy the Loyalists.Neil L. York, "Freemasons and the American Revolution", The Historian Volume: 55. Issue: 2. 1993, pp. 315+.
Important intellectuals
See also
Atlantic Revolutions (American Revolution, French Revolution, Latin American Revolutions, etc.)
Education in the Age of Enlightenment
European and American voyages of scientific exploration
Regional Enlightenments:
Scottish Enlightenment
American Enlightenment
Polish Enlightenment
Modern Greek Enlightenment
Russian Enlightenment
Spanish Enlightenment
Haskalah, "Jewish Enlightenment"
References
Further reading
Reference and surveys
Becker, Carl L. The Heavenly City of the Eighteenth-Century Philosophers. (1932), a famous short classic
Bronner, Stephen. The Great Divide: The Enlightenment and its Critics (1995)
Burns, William. Science in the Enlightenment: An Encyclopædia (2003) 353pp
Chisick, Harvey. Historical Dictionary of the Enlightenment. 2005. 512 pp.
Delon, Michel. Encyclopædia of the Enlightenment (2001) 1480 pp.
Dupre, Louis. The Enlightenment & the Intellectual Foundations of Modern Culture 2004
Gay, Peter. The Enlightenment: The Rise of Modern Paganism (1966, 2nd ed. 1995), 952 pp. excerpt and text search vol 1; The Enlightenment: The Science of Freedom, (1969 2nd ed. 1995), a highly influential study excerpt and text search vol 2;
Greensides F, Hyland P, Gomez O (ed.). The Enlightenment (2002)
Fitzpatrick, Martin et al., eds. The Enlightenment World. (2004). 714 pp. 39 essays by scholars
Hazard, Paul. European thought in the 18th century: From Montesquieu to Lessing (1965)
Himmelfarb, Gertrude. The Roads to Modernity: The British, French, and American Enlightenments (2004) excerpt and text search
Jacob, Margaret Enlightenment: A Brief History with Documents 2000
Kors, Alan Charles. Encyclopædia of the Enlightenment (4 vol. 1990; 2nd ed. 2003), 1984 pp. excerpt and text search
Munck, Thomas. The Enlightenment: A Comparative Social History, 1721–1794. (1994)
Outram, Dorinda. The Enlightenment (1995) 157 pp. excerpt and text search
Outram, Dorinda. Panorama of the Enlightenment (2006), emphasis on Germany; heavily illustrated
Reill, Peter Hanns, and Wilson, Ellen Judy. Encyclopædia of the Enlightenment. (2nd ed. 2004). 670 pp.
Yolton, John W. et al. The Blackwell Companion to the Enlightenment. (1992). 581 pp.
Specialty studies
Aldridge, A. Owen (ed.). The Ibero-American Enlightenment (1971).
Andrew, Donna T. "Popular Culture and Public Debate: London 1780". The Historical Journal, Vol. 39, No. 2. (June 1996), pp. 405–23. in JSTOR
Brewer, Daniel. The Enlightenment Past: reconstructing 18th-century French thought. (2008).
Broadie, Alexander. The Scottish Enlightenment: The Historical Age of the Historical Nation (2007)
Broadie, Alexander. The Cambridge Companion to the Scottish Enlightenment (2003) excerpt and text search
Bronner, Stephen. Reclaiming the Enlightenment: Toward a Politics of Radical Engagement, 2004
Brown, Stuart, ed. British Philosophy in the Age of Enlightenment (2002)
Buchan, James. Crowded with Genius: The Scottish Enlightenment: Edinburgh's Moment of the Mind (2004) excerpt and text search
Campbell, R.S. and Skinner, A.S., (eds.) The Origins and Nature of the Scottish Enlightenment, Edinburgh, 1982
Cassirer, Ernst. The Philosophy of the Enlightenment. 1955. a highly influential study by a neoKantian philosopher excerpt and text search
Chartier, Roger. The Cultural Origins of the French Revolution. Translated by Lydia G. Cochrane. Duke University Press, 1991.
Cowan, Brian, The Social Life of Coffee: The Emergence of the British Coffeehouse. New Haven: Yale University Press, 2005
Darnton, Robert. The Literary Underground of the Old Regime. (1982).
Edelstein, Dan. The Enlightenment: A Genealogy (University of Chicago Press; 2010) 209 pp.
Goodman, Dena. The Republic of Letters: A Cultural History of the French Enlightenment. (1994).
Hesse, Carla. The Other Enlightenment: How French Women Became Modern. Princeton: Princeton University Press, 2001.
Hankins, Thomas L. Science and the Enlightenment (1985).
May, Henry F. The Enlightenment in America. 1976. 419 pp.
Melton, James Van Horn. The Rise of the Public in Enlightenment Europe. (2001).
Porter, Roy. The Creation of the Modern World: The Untold Story of the British Enlightenment. 2000. 608 pp. excerpt and text search
Redkop, Benjamin. The Enlightenment and Community, 1999
Reid-Maroney, Nina. Philadelphia's Enlightenment, 1740–1800: Kingdom of Christ, Empire of Reason. 2001. 199 pp.
Roche, Daniel. France in the Enlightenment. (1998).
Sorkin, David. The Religious Enlightenment: Protestants, Jews, and Catholics from London to Vienna (2008)
Staloff, Darren. Hamilton, Adams, Jefferson: The Politics of Enlightenment and the American Founding. 2005. 419 pp. excerpt and text search
Till, Nicholas. Mozart and the Enlightenment: Truth, Virtue, and Beauty in Mozart's Operas. 1993. 384 pp.
Tunstall, Kate E. Blindness and Enlightenment. An Essay. With a new translation of Diderot's Letter on the Blind (Continuum, 2011)
Venturi, Franco. Utopia and Reform in the Enlightenment. George Macaulay Trevelyan Lecture, (1971)
Primary sources
Broadie, Alexander, ed. The Scottish Enlightenment: An Anthology (2001) excerpt and text search
Diderot, Denis. Rameau's Nephew and Other Works (2008) excerpt and text search.
Diderot, Denis. "Letter on the Blind" in Tunstall, Kate E. Blindness and Enlightenment. An Essay. With a new translation of Diderot's Letter on the Blind (Continuum, 2011)
Diderot, Denis. The Encyclopédie of Diderot and D'Alembert: Selected Articles (1969) excerpt and text search Collaborative Translation Project of the University of Michigan
Gay, Peter, ed. The Enlightenment: A Comprehensive Anthology (1973)
Gomez, Olga, et al. eds. The Enlightenment: A Sourcebook and Reader (2001) excerpt and text search
Kramnick, Isaac, ed. The Portable Enlightenment Reader (1995) excerpt and text search
Schmidt, James, ed. What is Enlightenment?: Eighteenth-Century Answers and Twentieth-Century Questions (1996) excerpt and text search
External links
Category:18th century
Category:18th-century philosophy
Category:History of philosophy
Category:History of Europe by period
Category:Secularism | 30,758 | 2017-01 |
Tennessee | Tennessee (Cherokee: Tanasi) is a state located in the southeastern region of the United States. Tennessee is the 36th largest and the 16th most populous of the 50 United States. Tennessee is bordered by Kentucky and Virginia to the north, North Carolina to the east, Georgia, Alabama, and Mississippi to the south, and Arkansas and Missouri to the west. The Appalachian Mountains dominate the eastern part of the state, and the Mississippi River forms the state's western border. Tennessee's capital and second largest city is Nashville, which has a population of 654,610. Memphis is the state's largest city, with a population of 655,770.
The state of Tennessee is rooted in the Watauga Association, a 1772 frontier pact generally regarded as the first constitutional government west of the Appalachians.John Finger, Tennessee Frontiers: Three Regions in Transition (Bloomington, Ind.: Indiana University Press, 2001), pp. 46–47. What is now Tennessee was initially part of North Carolina, and later part of the Southwest Territory. Tennessee was admitted to the Union as the 16th state on June 1, 1796. Tennessee was the last state to leave the Union and join the Confederacy at the outbreak of the U.S. Civil War in 1861. Occupied by Union forces from 1862, it was the first state to be readmitted to the Union at the end of the war.
Tennessee furnished more soldiers for the Confederate Army than any other state besides Virginia, and more soldiers for the Union Army than the rest of the Confederacy combined. Beginning during Reconstruction, it had competitive party politics, but a Democratic takeover in the late 1880s resulted in passage of disfranchisement laws that excluded most blacks and many poor whites from voting. This sharply reduced competition in politics in the state until after passage of civil rights legislation in the mid-20th century. In the 20th century, Tennessee transitioned from an agrarian economy to a more diversified economy, aided by massive federal investment in the Tennessee Valley Authority and, in the early 1940s, the city of Oak Ridge. This city was established to house the Manhattan Project's uranium enrichment facilities, helping to build the world's first atomic bomb, which was used during World War II.
Tennessee's major industries include agriculture, manufacturing, and tourism. Poultry, soybeans, and cattle are the state's primary agricultural products, and major manufacturing exports include chemicals, transportation equipment, and electrical equipment. The Great Smoky Mountains National Park, the nation's most visited national park, is headquartered in the eastern part of the state, and a section of the Appalachian Trail roughly follows the Tennessee-North Carolina border. Other major tourist attractions include the Tennessee Aquarium in Chattanooga; Dollywood in Pigeon Forge; Ripley's Aquarium of the Smokies in Gatlinburg; the Parthenon, the Country Music Hall of Fame and Museum, and Ryman Auditorium in Nashville; the Jack Daniel's Distillery in Lynchburg; Elvis Presley's Graceland residence and tomb, the Memphis Zoo, and the National Civil Rights Museum in Memphis; and Bristol Motor Speedway in Bristol.
Etymology
thumb|right|Monument near the old site of Tanasi in Monroe County
The earliest variant of the name that became Tennessee was recorded by Captain Juan Pardo, the Spanish explorer, when he and his men passed through an American Indian village named "Tanasqui" in 1567 while traveling inland from South Carolina. In the early 18th century, British traders encountered a Cherokee town named Tanasi (or "Tanase") in present-day Monroe County, Tennessee. The town was located on a river of the same name (now known as the Little Tennessee River), and appears on maps as early as 1725. It is not known whether this was the same town as the one encountered by Juan Pardo, although recent research suggests that Pardo's "Tanasqui" was located at the confluence of the Pigeon River and the French Broad River, near modern Newport.Charles Hudson, The Juan Pardo Expeditions: Explorations of the Carolinas and Tennessee, 1566–1568 (Tuscaloosa, Ala.: University of Alabama Press, 2005), 36–40.
The meaning and origin of the word are uncertain. Some accounts suggest it is a Cherokee modification of an earlier Yuchi word. It has been said to mean "meeting place", "winding river", or "river of the great bend". According to ethnographer James Mooney, the name "can not be analyzed" and its meaning is lost.Mooney, pg. 534
The modern spelling, Tennessee, is attributed to James Glen, the governor of South Carolina, who used this spelling in his official correspondence during the 1750s. The spelling was popularized by the publication of Henry Timberlake's "Draught of the Cherokee Country" in 1765. In 1788, North Carolina created "Tennessee County", the third county to be established in what is now Middle Tennessee. (Tennessee County was the predecessor to current-day Montgomery County and Robertson County.) When a constitutional convention met in 1796 to organize a new state out of the Southwest Territory, it adopted "Tennessee" as the name of the state.
Nickname
Tennessee is known as the "Volunteer State", a nickname some claimed was earned during the War of 1812 because of the prominent role played by volunteer soldiers from Tennessee, especially during the Battle of New Orleans. Other sources differ on the origin of the state nickname; according to the Columbia Encyclopedia, the name refers to volunteers for the Mexican–American War. This explanation is more likely, because President Polk's call for 2,600 nationwide volunteers at the beginning of the Mexican-American War resulted in 30,000 volunteers from Tennessee alone, largely in response to the death of Davy Crockett and appeals by former Tennessee Governor and then Texas politician, Sam Houston.
Geography
thumb|right|Map of Tennessee
Tennessee borders eight other states: Kentucky and Virginia to the north; North Carolina to the east; Georgia, Alabama, and Mississippi on the south; Arkansas and Missouri on the Mississippi River to the west. Tennessee ties Missouri as the state bordering the most other states. The state is trisected by the Tennessee River.
The highest point in the state is Clingmans Dome at 6,643 feet (2,025 m). Clingmans Dome, which lies on Tennessee's eastern border, is the highest point on the Appalachian Trail, and is the third highest peak in the United States east of the Mississippi River. The state line between Tennessee and North Carolina crosses the summit. The state's lowest point is the Mississippi River at the Mississippi state line: 178 feet (54 m). The geographical center of the state is located in Murfreesboro.
The state of Tennessee is geographically, culturally, economically, and legally divided into three Grand Divisions: East Tennessee, Middle Tennessee, and West Tennessee. The state constitution allows no more than two justices of the five-member Tennessee Supreme Court to be from one Grand Division and a similar rule applies to certain commissions and boards.
Tennessee features six principal physiographic regions: the Blue Ridge, the Appalachian Ridge and Valley Region, the Cumberland Plateau, the Highland Rim, the Nashville Basin, and the Gulf Coastal Plain. Tennessee is home to the most caves in the United States, with over 10,000 documented caves to date.
East Tennessee
thumb|left|Map of Tennessee highlighting East Tennessee
The Blue Ridge area lies on the eastern edge of Tennessee, bordering North Carolina. This region of Tennessee is characterized by the high mountains and rugged terrain of the western Blue Ridge Mountains, which are subdivided into several subranges, namely the Great Smoky Mountains, the Bald Mountains, the Unicoi Mountains, the Unaka Mountains and Roan Highlands, and the Iron Mountains.
The average elevation of the Blue Ridge area is above sea level. Clingmans Dome, the state's highest point, is located in this region. The Blue Ridge area was never more than sparsely populated, and today much of it is protected by the Cherokee National Forest, the Great Smoky Mountains National Park, and several federal wilderness areas and state parks.
thumb|right|Bald Mountains
Stretching west from the Blue Ridge for approximately is the Ridge and Valley region, in which numerous tributaries join to form the Tennessee River in the Tennessee Valley. This area of Tennessee is covered by fertile valleys separated by wooded ridges, such as Bays Mountain and Clinch Mountain. The western section of the Tennessee Valley, where the depressions become broader and the ridges become lower, is called the Great Valley. In this valley are numerous towns and two of the region's three urban areas, Knoxville, the 3rd largest city in the state, and Chattanooga, the 4th largest city in the state. The third urban area, the Tri-Cities, comprising Bristol, Johnson City, and Kingsport and their environs, is located to the northeast of Knoxville.
The Cumberland Plateau rises to the west of the Tennessee Valley; this area is covered with flat-topped mountains separated by sharp valleys. The elevation of the Cumberland Plateau ranges from above sea level.
East Tennessee has several important transportation links with Middle and West Tennessee, as well as the rest of the nation and the world, including several major airports and interstates. Knoxville's McGhee Tyson Airport (TYS) and Chattanooga's Chattanooga Metropolitan Airport (CHA), as well as the Tri-Cities' Tri-Cities Regional Airport (TRI), provide air service to numerous destinations. I-24, I-81, I-40, I-75, and I-26 along with numerous state highways and other important roads, traverse the Grand Division and connect Chattanooga, Knoxville, and the Tri-Cities, along with other cities and towns such as Cleveland, Athens, and Sevierville.
Middle Tennessee
thumb|left|Map of Tennessee highlighting Middle Tennessee
West of the Cumberland Plateau is the Highland Rim, an elevated plain that surrounds the Nashville Basin. The northern section of the Highland Rim, known for its high tobacco production, is sometimes called the Pennyroyal Plateau; it is located primarily in Southwestern Kentucky. The Nashville Basin is characterized by rich, fertile farm country and great diversity of natural wildlife.
Middle Tennessee was a common destination of settlers crossing the Appalachians from Virginia in the late 18th century and early 19th century. An important trading route called the Natchez Trace, created and used for many generations by American Indians, connected Middle Tennessee to the lower Mississippi River town of Natchez. The route of the Natchez Trace was used as the basis for a scenic highway called the Natchez Trace Parkway.
Some of the last remaining large American chestnut trees grow in this region. They are being used to help breed blight-resistant trees.
Middle Tennessee is one of the state's primary population and transportation centers and the seat of state government. Nashville (the capital), Clarksville, and Murfreesboro are its largest cities. Fifty percent of the U.S. population lives within a day's drive of Nashville. Interstates I-24, I-40, and I-65 serve the division, meeting in Nashville.
West Tennessee
thumb|left|Map of Tennessee highlighting West Tennessee
West of the Highland Rim and Nashville Basin is the Gulf Coastal Plain, which includes the Mississippi embayment. The Gulf Coastal Plain is, in terms of area, the predominant land region in Tennessee. It is part of the large geographic land area that begins at the Gulf of Mexico and extends north into southern Illinois. In Tennessee, the Gulf Coastal Plain is divided into three sections that extend from the Tennessee River in the east to the Mississippi River in the west.
The easternmost section consists of hilly land that runs along the western bank of the Tennessee River. To the west of this narrow strip of land is a wide area of rolling hills and streams that stretches all the way to the Mississippi River; this area is called the Tennessee Bottoms or bottom land. In Memphis, the Tennessee Bottoms end in steep bluffs overlooking the river. To the west of the Tennessee Bottoms is the Mississippi Alluvial Plain, the lowest-lying part of the state. This area of lowlands, flood plains, and swamp land is sometimes referred to as the Delta region. Memphis is the economic center of West Tennessee and the largest city in the state.
Most of West Tennessee remained Indian land until the Chickasaw Cession of 1818, when the Chickasaw ceded their land between the Tennessee River and the Mississippi River. The portion of the Chickasaw Cession that lies in Kentucky is known today as the Jackson Purchase.
Public lands
thumb|View from atop Mount Le Conte in the Great Smoky Mountains National Park, April 2007
Areas under the control and management of the National Park Service include the following:
Andrew Johnson National Historic Site in Greeneville
Appalachian National Scenic Trail
Big South Fork National River and Recreation Area
Chickamauga and Chattanooga National Military Park
Cumberland Gap National Historical Park
Foothills Parkway
Fort Donelson National Battlefield and Fort Donelson National Cemetery near Dover
Great Smoky Mountains National Park
Natchez Trace Parkway
Obed Wild and Scenic River near Wartburg
Overmountain Victory National Historic Trail
Shiloh National Cemetery and Shiloh National Military Park near Shiloh
Stones River National Battlefield and Stones River National Cemetery near Murfreesboro
Trail of Tears National Historic Trail
Tennessee has fifty-four state parks, as well as parts of the Great Smoky Mountains National Park, the Cherokee National Forest, and the Cumberland Gap National Historical Park. Sportsmen and visitors are attracted to Reelfoot Lake, originally formed by the New Madrid earthquakes of 1811–12; stumps and other remains of a once-dense forest, together with the lotus bed covering the shallow waters, give the lake an eerie beauty.
Climate
thumb|A map of Köppen climate types in Tennessee
thumb|Autumn in Tennessee. Roadway to Lindsey Lake in David Crockett State Park, located a half mile west of Lawrenceburg.
Most of the state has a humid subtropical climate, with the exception of some of the higher elevations in the Appalachians, which are classified as having a mountain temperate or humid continental climate due to cooler temperatures. The Gulf of Mexico is the dominant factor in the climate of Tennessee, with winds from the south responsible for most of the state's annual precipitation. Generally, the state has hot summers and mild to cool winters with generous precipitation throughout the year; the highest average monthly precipitation generally falls in the winter and spring months, between December and April. The driest months, on average, are August to October. Snowfall ranges from only a few inches a year in West Tennessee to substantially more in the higher mountains of East Tennessee.
Summers in the state are generally hot and humid, with most of the state averaging daytime highs around 90 °F (32 °C) during the summer months. Winters tend to be mild to cool, growing colder at higher elevations. For areas outside the highest mountains, average overnight winter lows are near freezing across most of the state. The highest recorded temperature is 113 °F (45 °C), at Perryville on August 9, 1930, while the lowest recorded temperature is −32 °F (−36 °C), at Mountain City on December 30, 1917.
While the state is far enough from the coast to avoid any direct impact from a hurricane, the location of the state makes it likely to be impacted from the remnants of tropical cyclones which weaken over land and can cause significant rainfall, such as Tropical Storm Chris in 1982 and Hurricane Opal in 1995. The state averages around 50 days of thunderstorms per year, some of which can be severe with large hail and damaging winds. Tornadoes are possible throughout the state, with West and Middle Tennessee the most vulnerable. Occasionally, strong or violent tornadoes occur, such as the devastating April 2011 tornadoes that killed 20 people in North Georgia and Southeast Tennessee. On average, the state has 15 tornadoes per year. Tornadoes in Tennessee can be severe, and Tennessee leads the nation in the percentage of total tornadoes which have fatalities. Winter storms are an occasional problem, such as the infamous Blizzard of 1993, although ice storms are a more likely occurrence. Fog is a persistent problem in parts of the state, especially in East Tennessee.
Monthly normal high and low temperatures (°F) for various Tennessee cities

City          Jan     Feb     Mar     Apr     May     Jun     Jul     Aug     Sep     Oct     Nov     Dec
Bristol       44/25   49/27   57/34   66/41   74/51   81/60   85/64   84/62   79/56   68/43   58/35   48/27
Chattanooga   49/30   54/33   63/40   72/47   79/56   86/65   90/69   89/68   82/62   72/48   61/40   52/33
Knoxville     47/30   52/33   61/40   71/48   78/57   85/65   88/69   87/68   81/62   71/50   60/41   50/34
Memphis       49/31   55/36   63/44   72/52   80/61   89/69   92/73   91/71   85/64   75/52   62/43   52/34
Nashville     46/28   52/31   61/39   70/47   78/57   85/65   90/70   89/69   82/61   71/49   59/40   49/32
Major cities
The capital is Nashville, though Knoxville, Kingston, and Murfreesboro have all served as state capitals in the past. Memphis has the largest population of any city in the state, while Nashville's 13-county metropolitan area has been the state's largest since about 1990. Chattanooga and Knoxville, both in the eastern part of the state near the Great Smoky Mountains, each have approximately one-third of the population of Memphis or Nashville. Clarksville, northwest of Nashville, is a fifth significant population center, and Murfreesboro is the sixth-largest city in Tennessee, with 108,755 residents. Other populated areas include Franklin and Jackson, both with over 60,000 people; Johnson City, in the Tri-Cities region; and Hendersonville, outside Nashville. Cleveland and Morristown also anchor metropolitan areas. Other notable cities include Bristol, also in the Tri-Cities; Oak Ridge, home to federal nuclear facilities outside Knoxville; and Cookeville, a large regional city located about halfway between Nashville and Knoxville.
History
Early history
thumb|right|Mississippian-period shell gorget, Castalian Springs, Sumner County
thumb|right|Reconstruction of Fort Loudon, the first British settlement in Tennessee
The area now known as Tennessee was first inhabited by Paleo-Indians nearly 12,000 years ago."Archaeology and the Native Peoples of Tennessee". University of Tennessee, Frank H. McClung Museum. Retrieved on April 26, 2012. The names of the cultural groups that inhabited the area between first settlement and the time of European contact are unknown, but several distinct cultural phases have been named by archaeologists, including Archaic (8000–1000 BC), Woodland (1000 BC–1000 AD), and Mississippian (1000–1600 AD), whose chiefdoms were the cultural predecessors of the Muscogee people who inhabited the Tennessee River Valley before Cherokee migration into the river's headwaters.
The first recorded European excursions into what is now Tennessee were three expeditions led by Spanish explorers: Hernando de Soto in 1540, Tristán de Luna in 1559, and Juan Pardo in 1567. Pardo recorded the name "Tanasqui" from a local Indian village, which evolved into the state's current name. At that time, Tennessee was inhabited by tribes of Muscogee and Yuchi people. Possibly because European diseases devastated the Indian tribes, leaving a population vacuum, and because of expanding European settlement to the north, the Cherokee moved south from the area now called Virginia. As European colonists spread into the area, the Indian populations were forcibly displaced to the south and west, including all Muscogee and Yuchi peoples, the Chickasaw and Choctaw, and ultimately, in 1838, the Cherokee.
The first British settlement in what is now Tennessee was built in 1756 by settlers from the colony of South Carolina at Fort Loudoun, near present-day Vonore. Fort Loudoun became the westernmost British outpost to that date. The fort was designed by John William Gerard de Brahm and constructed by forces under British Captain Raymond Demeré. After its completion, Captain Raymond Demeré relinquished command on August 14, 1757 to his brother, Captain Paul Demeré. Hostilities erupted between the British and the neighboring Overhill Cherokees, and a siege of Fort Loudoun ended with its surrender on August 7, 1760. The following morning, Captain Paul Demeré and a number of his men were killed in an ambush nearby, and most of the rest of the garrison was taken prisoner.Stanley Folmsbee, Robert Corlew, and Enoch Mitchell, Tennessee: A Short History (Knoxville, Tenn.: University of Tennessee Press, 1969), p. 45.
In the 1760s, long hunters from Virginia explored much of East and Middle Tennessee, and the first permanent European settlers began arriving late in the decade. The vast majority of 18th century settlers were English or of primarily English descent but nearly 20% of them were also Scotch-Irish.Robert E. Corlew, Tennessee: A Short History (Knoxville: University of Tennessee Press, 1981), page 106 These settlers formed the Watauga Association, a community built on lands leased from the Cherokee peoples.
During the American Revolutionary War, Fort Watauga at Sycamore Shoals (in present-day Elizabethton) was attacked (1776) by Dragging Canoe and his warring faction of Cherokee who were aligned with the British Loyalists. These renegade Cherokee were referred to by settlers as the Chickamauga. They opposed North Carolina's annexation of the Washington District and the concurrent settling of the Transylvania Colony further north and west. The lives of many settlers were spared from the initial warrior attacks through the warnings of Dragging Canoe's cousin, Nancy Ward. The frontier fort on the banks of the Watauga River later served as a 1780 staging area for the Overmountain Men in preparation to trek over the Appalachian Mountains, to engage, and to later defeat the British Army at the Battle of Kings Mountain in South Carolina.
Three counties of the Washington District (now part of Tennessee) broke off from North Carolina in 1784 and formed the State of Franklin. Efforts to obtain admission to the Union failed, and the counties (now numbering eight) had re-joined North Carolina by 1789. North Carolina ceded the area to the federal government in 1790, after which it was organized into the Southwest Territory. In an effort to encourage settlers to move west into the new territory, in 1787 the mother state of North Carolina ordered a road to be cut to take settlers into the Cumberland Settlements—from the south end of Clinch Mountain (in East Tennessee) to French Lick (Nashville). The Trace was called the "North Carolina Road" or "Avery's Trace", and sometimes "The Wilderness Road" (although it should not be confused with Daniel Boone's "Wilderness Road" through the Cumberland Gap).
Statehood (1796)
Tennessee was admitted to the Union on June 1, 1796, as the 16th state. It was the first state created from territory under the jurisdiction of the United States federal government. Apart from the former Thirteen Colonies, only Vermont and Kentucky predate Tennessee's statehood, and neither was ever a federal territory. According to Article I, Section 31 of the Constitution of the State of Tennessee, the boundary begins at the extreme height of Stone Mountain, at the place where the line of Virginia intersects it, and runs along the extreme heights of the mountain chains of the Appalachians separating North Carolina from Tennessee, past the Indian towns of Cowee and Old Chota, then along the main ridge of Unicoi Mountain to the southern boundary of the state; all territory, lands, and waters lying west of that line are included in the boundaries and limits of the newly formed state. The provision also stated that the limits and jurisdiction of the state would include any future land acquisitions, referring to possible land trades with other states or the acquisition of territory west of the Mississippi River.
During the administration of U.S. President Martin Van Buren, nearly 17,000 Cherokees—along with approximately 2,000 black slaves owned by Cherokees—were uprooted from their homes between 1838 and 1839 and were forced by the U.S. military to march from "emigration depots" in Eastern Tennessee (such as Fort Cass) toward the more distant Indian Territory west of Arkansas.Carter (III), Samuel (1976). Cherokee sunset: A nation betrayed: a narrative of travail and triumph, persecution and exile. New York: Doubleday, p. 232. During this relocation an estimated 4,000 Cherokees died along the way west. In the Cherokee language, the event is called Nunna daul Isunyi—"the Trail Where We Cried." The Cherokees were not the only American Indians forced to emigrate as a result of the Indian removal efforts of the United States, and so the phrase "Trail of Tears" is sometimes used to refer to similar events endured by other American Indian peoples, especially among the "Five Civilized Tribes". The phrase originated as a description of the earlier emigration of the Choctaw nation.
Civil War and Reconstruction
In February 1861, secessionists in Tennessee's state government—led by Governor Isham Harris—sought voter approval for a convention to sever ties with the United States, but Tennessee voters rejected the referendum by a 54–46% margin. The strongest opposition to secession came from East Tennessee (which later tried to form a separate Union-aligned state). Following the Confederate attack on Fort Sumter in April and Lincoln's subsequent call for troops from Tennessee and other states, Governor Harris began military mobilization, submitted an ordinance of secession to the General Assembly, and made direct overtures to the Confederate government. The Tennessee legislature ratified an agreement to enter a military league with the Confederate States on May 7, 1861. On June 8, 1861, with sentiment in Middle Tennessee having shifted significantly, voters approved a second referendum calling for secession, making Tennessee the last state to secede.
Many major battles of the American Civil War were fought in Tennessee—most of them Union victories. Ulysses S. Grant and the U.S. Navy captured control of the Cumberland and Tennessee rivers in February 1862, and held off the Confederate counterattack at Shiloh in April. Memphis fell to the Union in June, following a naval battle on the Mississippi River in front of the city. The capture of Memphis and Nashville gave the Union control of the western and middle sections; this control was confirmed at the Battle of Murfreesboro in early January 1863 and by the subsequent Tullahoma Campaign.
thumb|right|The Battle of Franklin, November 30, 1864
Confederates held East Tennessee despite the strength of Unionist sentiment there, with the exception of extremely pro-Confederate Sullivan County. The Confederates, led by General James Longstreet, attacked General Burnside's Fort Sanders at Knoxville and lost; the defeat was a significant blow to Confederate momentum in East Tennessee, though Longstreet won the Battle of Bean's Station a few weeks later. The Confederates besieged Chattanooga during the Chattanooga Campaign in early fall 1863 but were driven off by Grant in November. Many of the Confederate defeats can be attributed to the poor strategic vision of General Braxton Bragg, who led the Army of Tennessee from Perryville, Kentucky, to another Confederate defeat at Chattanooga.
The last major battles came when the Confederates invaded Middle Tennessee in November 1864 and were checked at Franklin, then completely dispersed by George Thomas at Nashville in December. Meanwhile, the civilian Andrew Johnson was appointed military governor of the state by President Abraham Lincoln.
When the Emancipation Proclamation was announced, Tennessee was mostly held by Union forces. Thus, Tennessee was not among the states enumerated in the Proclamation, and the Proclamation did not free any slaves there. Nonetheless, enslaved African Americans escaped to Union lines to gain freedom without waiting for official action. Old and young, men, women and children camped near Union troops. Thousands of former slaves ended up fighting on the Union side, nearly 200,000 in total across the South.
Tennessee's legislature approved an amendment to the state constitution prohibiting slavery on February 22, 1865, and voters in the state approved the amendment in March. The legislature also ratified the Thirteenth Amendment to the United States Constitution (abolishing slavery in every state) on April 7, 1865.
In 1864, Andrew Johnson (a War Democrat from Tennessee) was elected Vice President under Abraham Lincoln. He became President after Lincoln's assassination in 1865. Under Johnson's lenient re-admission policy, Tennessee was the first of the seceding states to have its elected members readmitted to the U.S. Congress, on July 24, 1866. Because Tennessee had ratified the Fourteenth Amendment, it was the only one of the formerly secessionist states that did not have a military governor during the Reconstruction period.
After the formal end of Reconstruction, the struggle over power in Southern society continued. Through violence and intimidation against freedmen and their allies, White Democrats regained political power in Tennessee and other states across the South in the late 1870s and 1880s. Over the next decade, the state legislature passed increasingly restrictive laws to control African Americans. In 1889 the General Assembly passed four laws described as electoral reform, with the cumulative effect of essentially disfranchising most African Americans in rural areas and small towns, as well as many poor Whites. Legislation included implementation of a poll tax, timing of registration, and recording requirements. Tens of thousands of taxpaying citizens were without representation for decades into the 20th century.Connie Lester, "Disfranchising Laws", The Tennessee Encyclopedia of History and Culture. Retrieved March 11, 2008. Disfranchising legislation accompanied Jim Crow laws passed in the late 19th century, which imposed segregation in the state. In 1900, African Americans made up nearly 24% of the state's population, and numbered 480,430 citizens who lived mostly in the central and western parts of the state.Historical Census Browser, 1900 US Census, University of Virginia. Retrieved March 15, 2008.
In 1897, Tennessee celebrated its centennial of statehood (a year after the actual 1896 anniversary) with a great exposition in Nashville. A full-scale replica of the Parthenon was constructed for the celebration, located in what is now Nashville's Centennial Park.
20th century
thumb|A group of workers at Norris Dam construction camp site. The TVA was formed as part of Roosevelt's New Deal legislation.
On August 18, 1920, Tennessee became the thirty-sixth and final state necessary to ratify the Nineteenth Amendment to the United States Constitution, which provided women the right to vote. Disfranchising voter registration requirements continued to keep most African Americans and many poor whites, both men and women, off the voter rolls.
The need to create work for the unemployed during the Great Depression, a desire for rural electrification, and the need to control annual spring flooding and improve shipping capacity on the Tennessee River were all factors that drove the federal creation of the Tennessee Valley Authority (TVA) in 1933. Through the TVA's projects, Tennessee quickly became home to the nation's largest public utility supplier.
During World War II, the availability of abundant TVA electrical power led the Manhattan Project to locate one of the principal sites for production and isolation of weapons-grade fissile material in East Tennessee. The planned community of Oak Ridge was built from scratch to provide accommodations for the facilities and workers. These sites are now Oak Ridge National Laboratory, the Y-12 National Security Complex, and the East Tennessee Technology Park.
Despite recognized effects of limiting voting by poor whites, successive legislatures expanded the reach of the disfranchising laws until they covered the state. Political scientist V. O. Key, Jr. argued in 1949 that:
...the size of the poll tax did not inhibit voting as much as the inconvenience of paying it. County officers regulated the vote by providing opportunities to pay the tax (as they did in Knoxville), or conversely by making payment as difficult as possible. Such manipulation of the tax, and therefore the vote, created an opportunity for the rise of urban bosses and political machines. Urban politicians bought large blocks of poll tax receipts and distributed them to blacks and whites, who then voted as instructed.
In 1953 state legislators amended the state constitution, removing the poll tax. In many areas both blacks and poor whites still faced subjectively applied barriers to voter registration that did not end until after passage of national civil rights legislation, including the Voting Rights Act of 1965.
Tennessee celebrated its bicentennial in 1996. With a yearlong statewide celebration entitled "Tennessee 200", it opened a new state park (Bicentennial Mall) at the foot of Capitol Hill in Nashville.
The state has had major disasters, such as the Great Train Wreck of 1918, one of the worst train accidents in U.S. history, and the Sultana explosion on the Mississippi River near Memphis, the deadliest maritime disaster in U.S. history.
21st century
In 2002, businessman Phil Bredesen was elected as the 48th governor, and Tennessee amended its state constitution to allow for the establishment of a lottery. Tennessee's Bob Corker was the only freshman Republican elected to the United States Senate in the 2006 midterm elections; in the same election, voters approved an amendment to the state constitution banning same-sex marriage. In January 2007, Ron Ramsey became the first Republican elected Speaker of the State Senate since Reconstruction, a result of the realignment of the Democratic and Republican parties in the South since the late 20th century, with conservative voters, who had previously supported Democrats, now electing Republicans.
In the 2010 midterm elections, Bill Haslam succeeded Bredesen, who was term-limited, to become the 49th Governor of Tennessee. In April and May 2010, flooding devastated Nashville and other parts of Middle Tennessee. In 2011, parts of East Tennessee, including Hamilton County and Apison in Bradley County, were devastated by the April 2011 tornado outbreak.
Demographics
The United States Census Bureau estimates that the population of Tennessee was 6,651,194 on July 1, 2016, an increase of 304,896 people since the 2010 United States Census, or 4.8%. This includes a natural increase since the last census of 110,000 people (that is 502,451 births minus 392,451 deaths), and an increase from net migration of 191,384 people into the state. Immigration from outside the United States resulted in a net increase of 55,613 people, and migration within the country produced a net increase of 135,771 people.
Twenty percent of Tennesseans were born outside the South in 2008, compared to a figure of 13.5% in 1990. In recent years, Tennessee has received an influx of people relocating from California, Florida, and several northern states for the low cost of living, and the booming healthcare and automobile industries. Metropolitan Nashville is one of the fastest-growing areas in the country due in part to these factors.
The center of population of Tennessee is located in Rutherford County, in the city of Murfreesboro.
As of the 2010 census, the racial composition of Tennessee's population was as follows:
Racial composition                            1990     2000     2010     2013 est.
White                                         83.0%    80.2%    77.6%    79.1%
Black                                         16.0%    16.4%    16.7%    17.0%
Asian                                          0.7%     1.0%     1.4%     1.6%
Native                                         0.2%     0.3%     0.3%     0.4%
Native Hawaiian and other Pacific Islander      –        –       0.1%     0.1%
Other race                                     0.2%     1.0%     2.2%      –
Two or more races                               –       1.1%     1.7%     1.7%
Sources: "Historical Census Statistics on Population Totals By Race, 1790 to 1990, and By Hispanic Origin, 1970 to 1990, For The United States, Regions, Divisions, and States" (1990); "Population of Tennessee: Census 2010 and 2000 Interactive Map, Demographics, Statistics, Quick Facts" (2000); "2010 Census Data" (2010).
In 2010, 4.6% of the total population was of Hispanic or Latino origin (Hispanics may be of any race).
thumb|right|Tennessee population density map, 2010
In 2000, the five most common self-reported ethnic groups in the state were American (17.3%), African American (13.0%), Irish (9.3%), English (9.1%), and German (8.3%). Most Tennesseans who self-identify as having American ancestry are of English and Scotch-Irish ancestry. An estimated 21–24% of Tennesseans are of predominantly English ancestry. In the 1980 census, 1,435,147 Tennesseans claimed "English" or "mostly English" ancestry out of a state population of 3,221,354, making them 45% of the state at the time.
As of 2011, 36.3% of Tennessee's population younger than age 1 were minorities.
According to the 2010 census, 6.4% of Tennessee's population were reported as under 5 years of age, 23.6% under 18, and 13.4% were 65 or older. Females made up approximately 51.3% of the population.
On June 19, 2010, the Tennessee Commission of Indian Affairs granted state recognition to six Indian tribes; the recognition was later voided after the state's Attorney General determined that the commission's action was illegal. The tribes were as follows:
The Cherokee Wolf Clan in western Tennessee, with members in Carroll, Benton, Decatur, Henderson, Henry, Weakley, Gibson, and Madison counties.
The Chikamaka Band, based historically on the South Cumberland Plateau, said to have members in Franklin, Grundy, Marion, Sequatchie, Warren and Coffee counties.
Central Band of Cherokee, also known as the Cherokee of Lawrence County, Tennessee.
United Eastern Lenapee Nation of Winfield, Tennessee.
The Tanasi Council, said to have members in Shelby, Dyer, Gibson, Humphreys and Perry counties; and
Remnant Yuchi Nation, with members in Sullivan, Carter, Greene, Hawkins, Unicoi, Johnson and Washington counties.Tom Humphrey, "State grants six Indian tribes recognition: Cherokee Nation may try to have action by Indian Affairs voided", Knoxville News Sentinel, June 21, 2010, accessed June 30, 2010
Religion
The religious affiliations of the people of Tennessee, according to a 2014 survey of the religious composition of adults in the state, were as follows:
Christian: 81%
Evangelical Protestant: 52%
Baptist: 33%
Restorationist: 6%
Non-denominational: 4%
Pentecostal: 4%
Presbyterian: 2%
Episcopalian: <1%
Lutheran: <1%
Methodist: <1%
Mainline Protestant: 13%
Historically Black Protestant: 8%
Roman Catholic: 6%
Mormon: 1%
Orthodox Christian: <1%
Other Christian (includes unspecified "Christian" and "Protestant"): <1%
Islam: 1%
Jewish: 1%
Other religions: 3%
Non-religious: 14%
Atheist: 1%
Agnostic: 3%
Nothing in particular: 11%
The largest denominations by number of adherents in 2010 were the Southern Baptist Convention with 1,483,356; the United Methodist Church with 375,693; the Roman Catholic Church with 222,343; and the Churches of Christ with 214,118.
As of January 1, 2009, The Church of Jesus Christ of Latter-day Saints (LDS Church) reported 43,179 members, 10 stakes, 92 Congregations (68 wards and 24 branches), two missions, and two temples in Tennessee.United States Information: Tennessee, Church News, The Church of Jesus Christ of Latter-day Saints website, February 2, 2010. Retrieved: February 7, 2013.
Tennessee is home to several Protestant denominations, such as the National Baptist Convention (headquartered in Nashville); the Church of God in Christ and the Cumberland Presbyterian Church (both headquartered in Memphis); the Church of God and The Church of God of Prophecy (both headquartered in Cleveland). The Free Will Baptist denomination is headquartered in Antioch; its main Bible college is in Nashville. The Southern Baptist Convention maintains its general headquarters in Nashville. Publishing houses of several denominations are located in Nashville.
Economy
thumb|A geomap showing the counties of Tennessee colored by the relative range of that county's median income. Data is sourced from the 2014 ACS 5-year Estimate Report put out by the US Census Bureau
According to the U.S. Bureau of Economic Analysis, in 2011 Tennessee's real gross state product was $233.997 billion.
In 2003, the per capita personal income was $28,641, 36th in the nation, and 91% of the national per capita personal income of $31,472. In 2004, the median household income was $38,550, 41st in the nation, and 87% of the national median of $44,472.
For 2012, the state reported an asset surplus of $533 million, making Tennessee one of only eight states in the nation to report a surplus.
Major outputs for the state include textiles, cotton, cattle, and electrical power. Tennessee has over 82,000 farms, roughly 59 percent of which raise beef cattle. Although cotton was an early crop in Tennessee, large-scale cultivation of the fiber did not begin until the 1820s with the opening of the land between the Tennessee and Mississippi rivers. The upper wedge of the Mississippi Delta extends into southwestern Tennessee, and it was in this fertile section that cotton took hold. Soybeans are also heavily planted in West Tennessee, concentrated in the northwest corner of the state.
Major corporations with headquarters in Tennessee include FedEx, AutoZone and International Paper, all based in Memphis; Pilot Corporation and Regal Entertainment Group, based in Knoxville; Eastman Chemical Company, based in Kingsport; the North American headquarters of Nissan Motor Company, based in Franklin; Hospital Corporation of America and Caterpillar Financial, based in Nashville; and Unum, based in Chattanooga. Tennessee is also the location of the Volkswagen factory in Chattanooga, a $2 billion polysilicon production facility by Wacker Chemie in Bradley County, and a $1.2 billion polysilicon production facility by Hemlock Semiconductor in Clarksville.
Tennessee is a right-to-work state, as are most of its Southern neighbors. Union membership has historically been low and continues to decline, as in most of the U.S. As of May 2016, the state had an unemployment rate of 4.3% (Local Area Unemployment Statistics).
Tax
The Tennessee income tax does not apply to salaries and wages, but most income from stocks, bonds, and notes receivable is taxable. All taxable dividends and interest that exceed the $1,250 single exemption or the $2,500 joint exemption are taxed at a rate of 6%. The state's sales and use tax rate for most items is 7%. Food is taxed at a lower rate of 5.25%, but candy, dietary supplements, and prepared food are taxed at the full 7% rate. Local sales taxes are collected in most jurisdictions at rates varying from 1.5% to 2.75%, bringing the total sales tax to between 8.5% and 9.75%, one of the highest combined levels in the nation. Intangible property tax is assessed on the shares of stock of stockholders of any loan company, investment company, insurance company, or for-profit cemetery company; the assessment ratio is 40% of the value, multiplied by the tax rate for the jurisdiction. Tennessee imposes an inheritance tax on decedents' estates that exceed the maximum single exemption limit ($1,000,000 for deaths in 2006 and thereafter).
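For illustration only, the arithmetic behind these rates can be sketched in a few lines of Python. This is a minimal sketch based on the figures quoted above, not an official calculator, and every constant and function name in it is hypothetical:

```python
# Rough sketch (not an official calculator) of how the Tennessee rates
# described above combine. All names here are hypothetical; the figures
# come from the paragraph above.

HALL_TAX_RATE = 0.06        # tax on taxable dividend/interest income
SINGLE_EXEMPTION = 1_250    # single-filer exemption
JOINT_EXEMPTION = 2_500     # joint-filer exemption
STATE_SALES_RATE = 0.07     # general state sales and use tax
FOOD_SALES_RATE = 0.0525    # reduced rate for non-prepared food


def hall_income_tax(investment_income: float, joint: bool = False) -> float:
    """Tax due on dividends and interest above the applicable exemption."""
    exemption = JOINT_EXEMPTION if joint else SINGLE_EXEMPTION
    return max(investment_income - exemption, 0.0) * HALL_TAX_RATE


def sales_tax(price: float, local_rate: float, food: bool = False) -> float:
    """Combined state + local sales tax; local_rate is typically 0.015-0.0275."""
    state_rate = FOOD_SALES_RATE if food else STATE_SALES_RATE
    return price * (state_rate + local_rate)


if __name__ == "__main__":
    print(hall_income_tax(5_000))      # (5000 - 1250) * 0.06 = 225.0
    print(sales_tax(100.0, 0.0275))    # 100 * (0.07 + 0.0275) ≈ 9.75
```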
Tourism
Tourism contributes billions of dollars each year to the state's economy, and Tennessee is ranked among the top 10 destinations in the US. In 2014, a record 100 million people visited the state, resulting in $17.7 billion in tourism-related spending within the state, an increase of 6.3% over 2013; tax revenue from tourism equaled $1.5 billion. Every county in Tennessee saw at least $1 million from tourism, while 19 counties received at least $100 million, with Davidson, Shelby, and Sevier counties the top three. Tourism-generated jobs in the state reached 152,900, a 2.8% increase. International travelers to Tennessee accounted for $533 million in spending.
In 2013, Tennessee residents accounted for 39.9% of the state's tourists; Georgia was the second-largest source of visitors, accounting for 8.4%. Forty-four percent of stays in the state were day trips, 25% lasted one night, 15% two nights, and 11% four or more nights. The average stay was 2.16 nights, compared with 2.03 nights for the US as a whole. The average visitor spent $118 per day: 29% on transportation, 24% on food, 17% on accommodation, and 28% on shopping and entertainment.
Some of the top tourist attractions in the state are: the Great Smoky Mountains National Park, Graceland, Beale Street, Lower Broadway, the Ryman Auditorium, the Gaylord Opryland Resort, Lookout Mountain, and the Tennessee Aquarium.
Music
Tennessee has played a critical role in the development of many forms of American popular music, including rock and roll, blues, country, and rockabilly. Beale Street in Memphis is considered by many to be the birthplace of the blues, with musicians such as W. C. Handy performing in its clubs as early as 1909. Memphis is also home to Sun Records, where musicians such as Elvis Presley, Johnny Cash, Carl Perkins, Jerry Lee Lewis, Roy Orbison, and Charlie Rich began their recording careers, and where rock and roll took shape in the 1950s. The 1927 Victor recording sessions in Bristol generally mark the beginning of the country music genre and the rise of the Grand Ole Opry in the 1930s helped make Nashville the center of the country music recording industry. Three brick-and-mortar museums recognize Tennessee's role in nurturing various forms of popular music: the Memphis Rock N' Soul Museum, the Country Music Hall of Fame and Museum in Nashville, and the International Rock-A-Billy Museum in Jackson. Moreover, the Rockabilly Hall of Fame, an online site recognizing the development of rockabilly in which Tennessee played a crucial role, is based in Nashville.
Transportation
Interstate highways
thumb|right|The Hernando de Soto Bridge spans the Mississippi River in Memphis
Interstate 40 crosses the state in a west-east orientation. Its branch interstate highways include I-240 in Memphis; I-440 in Nashville; I-840 in Nashville; I-140 from Knoxville to Alcoa; and I-640 in Knoxville. I-26, although technically an east-west interstate, runs from the North Carolina border below Johnson City to its terminus at Kingsport. I-24 is an east-west interstate that runs cross-state from Chattanooga to Clarksville. In a north-south orientation are highways I-55, I-65, I-75, and I-81. Interstate 65 crosses the state through Nashville, while Interstate 75 serves Chattanooga and Knoxville and Interstate 55 serves Memphis. Interstate 81 enters the state at Bristol and terminates at its junction with I-40 near Dandridge. I-155 is a branch highway from I-55. The only spur highway of I-75 in Tennessee is I-275, which is in Knoxville. When completed, I-69 will travel through the western part of the state, from South Fulton to Memphis. A branch interstate, I-269 also exists from Millington to Collierville.
Airports
Major airports within the state include Memphis International Airport (MEM), Nashville International Airport (BNA), McGhee Tyson Airport (TYS) in Alcoa, Chattanooga Metropolitan Airport (CHA), Tri-Cities Regional Airport (TRI), and McKellar-Sipes Regional Airport (MKL), in Jackson. Because Memphis International Airport is the major hub for FedEx Corporation, it is the world's largest air cargo operation.
Railroads
For passenger rail service, Memphis and Newbern, Tennessee, are served by the Amtrak City of New Orleans line on its run between Chicago, Illinois, and New Orleans, Louisiana. Nashville is served by the Music City Star commuter rail service.
Cargo services in Tennessee are primarily served by CSX Transportation, which has a hump yard in Nashville called Radnor Yard. Norfolk Southern Railway operates lines in East Tennessee, through cities including Knoxville and Chattanooga, and operates a classification yard near Knoxville, the John Sevier Yard. BNSF operates a major intermodal facility in Memphis.
Governance
thumb|Tennessee State Capitol in Nashville
Tennessee's governor holds office for a four-year term and may serve a maximum of two consecutive terms. The governor is the only official who is elected statewide. Unlike most states, the state does not elect the lieutenant governor directly; the Tennessee Senate elects its Speaker, who serves as lieutenant governor.
The Tennessee General Assembly, the state legislature, consists of the 33-member Senate and the 99-member House of Representatives. Senators serve four-year terms, and House members serve two-year terms. Each chamber chooses its own speaker. The speaker of the state Senate also holds the title of lieutenant-governor. Constitutional officials in the legislative branch are elected by a joint session of the legislature.
The highest court in Tennessee is the state Supreme Court. It has a chief justice and four associate justices. No more than two justices can be from the same Grand Division. The Supreme Court of Tennessee also appoints the Attorney General, a practice that is not found in any of the other 49 states in the Union. Both the Court of Appeals and the Court of Criminal Appeals have 12 judges. A number of local, circuit, and federal courts provide judicial services.
Tennessee's current state constitution was adopted in 1870. The state had two earlier constitutions. The first was adopted in 1796, the year Tennessee joined the union, and the second was adopted in 1834. The 1870 Constitution outlaws martial law within its jurisdiction. This may be a result of the experience of Tennessee residents and other Southerners during the period of military control by Union (Northern) forces of the U.S. government after the American Civil War.
Politics
Presidential election results in Tennessee

Year    Republican              Democratic
2016    61.07%   1,517,402      34.90%     867,110
2012    59.42%   1,462,330      39.04%     960,709
2008    56.85%   1,479,178      41.79%   1,087,437
2004    56.80%   1,384,375      42.53%   1,036,477
2000    51.15%   1,061,949      47.28%     981,720
1996    45.59%     863,530      48.00%     909,146
1992    42.43%     841,300      47.08%     933,521
1988    57.89%     947,233      41.55%     679,794
1984    57.84%     990,212      41.57%     711,714
1980    48.70%     787,761      48.41%     783,051
1976    42.94%     633,969      55.94%     825,879
1972    67.70%     813,147      29.75%     357,293
1968    37.85%     472,592      28.13%     351,233
1964    44.49%     508,965      55.50%     634,947
1960    52.92%     556,577      45.77%     481,453
Tennessee politics, like those of most U.S. states, are dominated by the Republican and Democratic parties. Historian Dewey W. Grantham traces divisions in the state to the period of the American Civil War: for decades afterward, the eastern third of the state was Republican and the western two-thirds voted Democratic.Shaun A. Martin, "Dewey W. Grantham", Tennessee Encyclopedia of History and Culture, 2010 This division was related to the state's pattern of farming, plantations, and slaveholding. The eastern section was made up of yeoman farmers, while Middle and West Tennessee cultivated crops, such as tobacco and cotton, that depended on slave labor; these areas became defined as Democratic after the war.
During Reconstruction, freedmen and former free people of color were granted the right to vote; most joined the Republican Party. Numerous African Americans were elected to local offices, and some to state office. Following Reconstruction, Tennessee continued to have competitive party politics. But in the 1880s, the white-dominated state government passed four laws, the last of which imposed a poll tax requirement for voter registration. These served to disenfranchise most African Americans, and their power in the Republican Party, the state, and cities where they had significant population was markedly reduced. In 1900 African Americans comprised 23.8 percent of the state's population, concentrated in Middle and West Tennessee.Historical Census Browser, 1900 Federal Census, University of Virginia, accessed March 15, 2008 In the early 1900s, the state legislature approved a form of commission government for cities based on at-large voting for a few positions on a Board of Commission; several adopted this as another means to limit African-American political participation. In 1913 the state legislature enacted a bill enabling cities to adopt this structure without legislative approval.BUCHANAN v. CITY OF JACKSON, 683 F. Supp. 1515 (W.D. Tenn. 1988), Case Text website
After disenfranchisement of blacks, the GOP in Tennessee was historically a sectional party supported by whites only in the eastern part of the state. In the 20th century, except for two nationwide Republican landslides of the 1920s (in 1920, when Tennessee narrowly supported Warren G. Harding over Ohio Governor James Cox, and in 1928, when it more decisively voted for Herbert Hoover over New York Governor Al Smith, a Catholic), the state was part of the Democratic Solid South until the 1950s. In that postwar decade, it twice voted for Republican Dwight D. Eisenhower, former Allied Commander of the Armed Forces during World War II. Since then, more of the state's voters have shifted to supporting Republicans, and Democratic presidential candidates have carried Tennessee only four times.
By 1960, African Americans comprised 16.45% of the state's population. It was not until after the mid-1960s and passage of the Voting Rights Act of 1965 that they were fully able to vote again, but new devices, such as at-large commission city governments, had been adopted in several jurisdictions to limit their political participation. The 1970 victories of Gov. Winfield Dunn and U.S. Sen. Bill Brock helped make the Republican Party competitive among whites for statewide office. Tennessee has selected governors from different parties since 1970. Increasingly, the Republican Party has become the party of white conservatives.
In the early 21st century, Republican voters control most of the state, especially in the more rural and suburban areas outside of the cities; Democratic strength is mostly confined to the urban cores of the four major cities, and is particularly strong in the cities of Nashville and Memphis. The latter area includes a large African-American population.Tennessee by County – GCT-PL. Race and Hispanic or Latino 2000 U.S. Census Bureau Historically, Republicans had their greatest strength in East Tennessee before the 1960s. Tennessee's 1st and 2nd congressional districts, based in the Tri-Cities and Knoxville, respectively, are among the few historically Republican districts in the South. Those districts' residents supported the Union over the Confederacy during the Civil War; they identified with the GOP after the war and have stayed with that party ever since. The 1st has been in Republican hands continuously since 1881, and Republicans (or their antecedents) have held it for all but four years since 1859. The 2nd has been held continuously by Republicans or their antecedents since 1859.
In the 2000 presidential election, Vice President Al Gore, a former Democratic U.S. Senator from Tennessee, failed to carry his home state, an unusual occurrence but indicative of strengthening Republican support. Republican George W. Bush received increased support in 2004, with his margin of victory in the state increasing from 4% in 2000 to 14% in 2004.Tennessee: McCain Leads Both Democrats by Double Digits Rasumussen Reports, April 6, 2008 Democratic presidential nominees from Southern states (such as Lyndon B. Johnson, Jimmy Carter, Bill Clinton) usually fare better than their Northern counterparts do in Tennessee, especially among split-ticket voters outside the metropolitan areas.
Tennessee sends nine members to the US House of Representatives, of whom seven are Republicans and two are Democrats. Lieutenant Governor Ron Ramsey is the first Republican speaker of the state Senate in 140 years. In the 2008 elections, the Republican Party gained control of both houses of the Tennessee state legislature for the first time since Reconstruction. In 2008, some 30% of the state's electorate identified as independents.
The Baker v. Carr (1962) decision of the US Supreme Court established the principle of "one man, one vote", requiring state legislatures to redistrict to bring Congressional apportionment in line with decennial censuses. It also required both houses of state legislatures to be based on population for representation and not geographic districts such as counties. This case arose out of a lawsuit challenging the longstanding rural bias of apportionment of seats in the Tennessee legislature. After decades in which urban populations had been underrepresented in many state legislatures, this significant ruling led to an increased (and proportional) prominence in state politics by urban and, eventually, suburban, legislators and statewide officeholders in relation to their population within the state. The ruling also applied to numerous other states long controlled by rural minorities, such as Alabama, Vermont, and Montana.
Law enforcement
State agencies
The state of Tennessee maintains four dedicated law enforcement entities: the Tennessee Highway Patrol, the Tennessee Wildlife Resources Agency (TWRA), the Tennessee Bureau of Investigation (TBI), and the Tennessee Department of Environment and Conservation (TDEC).
The Highway Patrol is the primary law enforcement entity that concentrates on highway safety regulations and general non-wildlife state law enforcement and is under the jurisdiction of the Tennessee Department of Safety. The TWRA is an independent agency tasked with enforcing all wildlife, boating, and fisheries regulations outside of state parks. The TBI maintains state-of-the-art investigative facilities and is the primary state-level criminal investigative department. Tennessee State Park Rangers are responsible for all activities and law enforcement inside the Tennessee State Parks system.
Local government
Local law enforcement is divided between County Sheriff's Offices and Municipal Police Departments. Tennessee's Constitution requires that each County have an elected Sheriff. In 94 of the 95 counties the Sheriff is the chief law enforcement officer in the county and has jurisdiction over the county as a whole. Each Sheriff's Office is responsible for warrant service, court security, jail operations and primary law enforcement in the unincorporated areas of a county as well as providing support to the municipal police departments. Incorporated municipalities are required to maintain a police department to provide police services within their corporate limits.
The three counties in Tennessee that have adopted metropolitan governments have taken different approaches to reconciling a metropolitan government with the requirement for an elected Sheriff.
Nashville/Davidson County converted law enforcement duties entirely to the Metro Nashville Police Chief. In this instance the Sheriff is no longer the chief law enforcement officer for Davidson County. The Davidson County Sheriff's duties focus on warrant service and jail operations. The Metropolitan Police Chief is the chief law enforcement officer and the Metropolitan Police Department provides primary law enforcement for the entire county.
Lynchburg/Moore County took a much simpler approach and abolished the Lynchburg Police Department when it consolidated and placed all law enforcement responsibility under the sheriff's office.
Hartsville/Trousdale County, although the smallest county in Tennessee, adopted a system similar to Nashville's that retains the sheriff's office but also has a metropolitan police department.
Firearms
Gun laws in Tennessee regulate the sale, possession, and use of firearms and ammunition. Concealed carry and open carry of a handgun are permitted with a Tennessee handgun carry permit or an equivalent permit from a reciprocating state. As of July 1, 2014, a permit is no longer required to possess a loaded handgun in a motor vehicle.
Capital punishment
Capital punishment has existed in Tennessee at various times since statehood. Before 1913 the method of execution was hanging. From 1913 to 1915 there was a hiatus on executions, but they resumed in 1916, when electrocution became the new method. After the Supreme Court ruled capital punishment unconstitutional in Furman v. Georgia (1972), there were no executions; the penalty was reinstated in 1978, although most prisoners awaiting execution between 1960 and 1978 had their sentences commuted to life in prison. From 1916 to 1960 the state executed 125 inmates, but for a variety of reasons there were no further executions until 2000. Since 2000, Tennessee has executed six prisoners and has 73 prisoners on death row (as of April 2015).
Lethal injection was approved by the legislature in 1998, though those who were sentenced to death before January 1, 1999, may request electrocution.
In May 2014 the Tennessee General Assembly passed a law allowing the use of the electric chair for death row executions when lethal injection drugs are not available.
Tribal
The Mississippi Band of Choctaw Indians is the only federally recognized Native American Indian tribe in the state. It owns land in Henning, which was placed into federal trust in 2012 and is governed directly by the tribe.State of Tennessee Department of Children's Services, Child and Family Service Plan July 1, 2015 – June 30, 2019"Vision of Trust Management Model, Responsibility and Reform", Chief Phyliss J. Anderson, Mississippi Band of Choctaw Indians, April 29, 2013, to the Secretarial Commission on Indian Trust Administration and Reform
Education
thumb|upright|right| University of Tennessee, Knoxville
thumb|upright|right| Vanderbilt University, Nashville
thumb|upright|right| Rhodes College, Memphis
thumb|upright|right| Tennessee State University, Nashville
thumb|upright|right| Middle Tennessee State University, Murfreesboro
Tennessee has a rich variety of public, private, charter, and specialized education facilities ranging from pre-school through university education.
Colleges and universities
Public higher education is under the oversight of the Tennessee Higher Education Commission, which provides guidance to the two public university systems – the University of Tennessee system and the Tennessee Board of Regents. In addition, a number of private colleges and universities are located throughout the state.
American Baptist College
Aquinas College
The Art Institute of Tennessee – Nashville
Austin Peay State University
Baptist College of Health Sciences
Belmont University
Bethel College
Bryan College
Carson–Newman University
Chattanooga State Community College
Christian Brothers University
Cleveland State Community College
Columbia State Community College
Crown College
Cumberland University
Dyersburg State Community College
East Tennessee State University
Emmanuel Christian Seminary
Fisk University
Freed–Hardeman University
Jackson State Community College
Johnson University
King University
Knoxville College
Lane College
Lee University
LeMoyne–Owen College
Lincoln Memorial University
Lipscomb University
Martin Methodist College
Maryville College
Meharry Medical College
Memphis College of Art
Memphis Theological Seminary
Mid-America Baptist Theological Seminary
Middle Tennessee State University
Milligan College
Motlow State Community College
Nashville School of Law
Nashville State Community College
Northeast State Community College
O'More College of Design
Pellissippi State Community College
Rhodes College
Roane State Community College
Sewanee: The University of the South
Southern Adventist University
Southern College of Optometry
Southwest Tennessee Community College
Tennessee Colleges of Applied Technology
Tennessee State University
Tennessee Technological University
Tennessee Temple University
Tennessee Wesleyan University
Trevecca Nazarene University
Tusculum College
Union University
University of Memphis
University of Tennessee system
University of Tennessee (Knoxville)
University of Tennessee Health Science Center (Memphis)
University of Tennessee Space Institute
University of Tennessee at Chattanooga
University of Tennessee at Martin
Vanderbilt University
Volunteer State Community College
Walters State Community College
Watkins College of Art, Design & Film
Welch College
Williamson College
Local school districts
Public primary and secondary education systems are operated by county, city, or special school districts to provide education at the local level. These school districts operate under the direction of the Tennessee Department of Education. Private schools are found in many counties.
Sports
Club                      Sport        League
Tennessee Titans          Football     National Football League
Memphis Grizzlies         Basketball   National Basketball Association
Nashville Predators       Ice hockey   National Hockey League
Memphis Redbirds          Baseball     Pacific Coast League (Triple-A)
Nashville Sounds          Baseball     Pacific Coast League (Triple-A)
Chattanooga Lookouts      Baseball     Southern League (Double-A)
Tennessee Smokies         Baseball     Southern League (Double-A)
Jackson Generals          Baseball     Southern League (Double-A)
Elizabethton Twins        Baseball     Appalachian League (Rookie)
Greeneville Astros        Baseball     Appalachian League (Rookie)
Johnson City Cardinals    Baseball     Appalachian League (Rookie)
Kingsport Mets            Baseball     Appalachian League (Rookie)
Knoxville Ice Bears       Ice hockey   Southern Professional Hockey League
Chattanooga FC            Soccer       National Premier Soccer League
Knoxville Force           Soccer       National Premier Soccer League
Nashville FC              Soccer       National Premier Soccer League
In Knoxville, the University of Tennessee Volunteers have played in the Southeastern Conference (SEC) of the National Collegiate Athletic Association since the conference was formed in 1932. The football team has won 13 SEC championships and 25 bowl games, including four Sugar Bowls, three Cotton Bowls, an Orange Bowl, and a Fiesta Bowl. The men's basketball team has won four SEC championships and reached the NCAA Elite Eight in 2010, while the women's basketball team has won a host of SEC regular-season and tournament titles along with eight national titles.
In Nashville, the Vanderbilt Commodores are also charter members of the SEC. In June 2014, Vanderbilt won its first men's national championship by winning the 2014 College World Series.
The state is home to 10 other NCAA Division I programs. Two of these participate in the top level of college football, the Football Bowl Subdivision. The Memphis Tigers are members of the American Athletic Conference, and the Middle Tennessee Blue Raiders from Murfreesboro play in Conference USA. In addition to the Commodores, Nashville is also home to the Belmont Bruins and Tennessee State Tigers, both members of the Ohio Valley Conference (OVC), and the Lipscomb Bisons, members of the Atlantic Sun Conference. Tennessee State plays football in Division I's second level, the Football Championship Subdivision (FCS), while Belmont and Lipscomb do not have football teams. Belmont and Lipscomb have an intense rivalry in men's and women's basketball known as the Battle of the Boulevard, with both schools' men's and women's teams playing two games each season against each other (a rare feature among non-conference rivalries). The OVC also includes the Austin Peay Governors from Clarksville, the Tennessee–Martin Skyhawks from Martin, and the Tennessee Tech Golden Eagles from Cookeville. These three schools, along with fellow OVC member Tennessee State, play each season in football for the Sgt. York Trophy. The Chattanooga Mocs and Johnson City's East Tennessee State Buccaneers are full members, including football, of the Southern Conference.
Tennessee is also home to Bristol Motor Speedway, which features NASCAR Sprint Cup racing two weekends a year, routinely selling out more than 160,000 seats on each date. The state was also home to the Nashville Superspeedway, which held Nationwide and IndyCar races until it was shut down in 2012. Tennessee's only graded stakes horse race, the Iroquois Steeplechase, is also held in Nashville each May.
The FedEx St. Jude Classic is a PGA Tour golf tournament held in Memphis since 1958. The U.S. National Indoor Tennis Championships have been held in Memphis since 1976 (men's) and 2002 (women's).
State symbols
State symbols, found in Tennessee Code Annotated Title 4, Chapter 1, Part 3, include:
State amphibian – Tennessee cave salamander
State bird – mockingbird
State game bird – bobwhite quail
State butterfly – zebra swallowtail
State sport fish – smallmouth bass
State commercial fish – channel catfish
State cultivated flower – iris
State wild flowers – passion flower and Tennessee echinacea
State insects – firefly and lady beetle
State agricultural insect – honey bee
State wild animal – raccoon
State horse – Tennessee Walking Horse
State reptile – eastern box turtle
State rifle – Barrett M82
State tree – tulip poplar
State evergreen tree – eastern red cedar
State beverage – milk
State dance – square dance
State fruit – tomato
State fossil – Pterotrigonia (Scabrotrigonia) thoracica
State gem – Tennessee River pearl
State mineral – agate
State rock – limestone
State motto – Agriculture and Commerce
State poem – "Oh Tennessee, My Tennessee" by Admiral William Lawrence
State slogan – Tennessee – America at its Best
State songs – nine songs
See also
Outline of Tennessee – organized list of topics about Tennessee
Index of Tennessee-related articles
References
Further reading
External links
State Government Website
Tennessee Department of Tourist Development
Tennessee State Databases – Annotated list of searchable databases produced by Tennessee state agencies and compiled by the Government Documents Roundtable of the American Library Association.
Tennessee Encyclopedia of History and Culture
Tennessee State Library and Archives
TNGenWeb Project – free genealogy resources for the state
Energy Profile for Tennessee
USGS real-time, geographic, and other scientific resources of Tennessee
U.S. Census Bureau
Tennessee Blue Book – All things Tennessee
Annotated Tennessee Primary Law Book – from LexisNexis
Timeline of Modern Tennessee Politics
USDA Tennessee state facts
Tennessee landforms
The Annals of Tennessee to the End of the Eighteenth Century – a history by J. G. M. Ramsey, 1853
Category:State of Franklin
Category:States of the Confederate States of America
Category:States of the United States
Category:States and territories established in 1796
Category:Southern United States
Category:U.S. states with multiple time zones
Category:1796 establishments in the United States
Electric motor
thumb|Various electric motors, compared to 9 V battery.
An electric motor is an electrical machine that converts electrical energy into mechanical energy. The reverse of this is the conversion of mechanical energy into electrical energy and is done by an electric generator.
In normal motoring mode, most electric motors operate through the interaction between an electric motor's magnetic field and winding currents to generate force within the motor. In certain applications, such as in the transportation industry with traction motors, electric motors can operate in both motoring and generating or braking modes to also produce electrical energy from mechanical energy.
Found in applications as diverse as industrial fans, blowers and pumps, machine tools, household appliances, power tools, and disk drives, electric motors can be powered by direct current (DC) sources, such as from batteries, motor vehicles or rectifiers, or by alternating current (AC) sources, such as from the power grid, inverters or generators. Small motors may be found in electric watches. General-purpose motors with highly standardized dimensions and characteristics provide convenient mechanical power for industrial use. The largest of electric motors are used for ship propulsion, pipeline compression and pumped-storage applications with ratings reaching 100 megawatts. Electric motors may be classified by electric power source type, internal construction, application, type of motion output, and so on.
Electric motors are used to produce linear or rotary force (torque), and should be distinguished from devices such as magnetic solenoids and loudspeakers that convert electricity into motion but do not generate usable mechanical power, which are respectively referred to as actuators and transducers.
thumb|upright=1.35|Cutaway view through stator of induction motor.
History
Early motors
thumb|upright|200px|Faraday's electromagnetic experiment, 1821
Perhaps the first electric motors were simple electrostatic devices created by the Scottish monk Andrew Gordon in the 1740s.Tom McInally, The Sixth Scottish University. The Scots Colleges Abroad: 1575 to 1799 (Brill, Leiden, 2012) p. 115 The theoretical principle behind production of mechanical force by the interactions of an electric current and a magnetic field, Ampère's force law, was discovered later by André-Marie Ampère in 1820.
The conversion of electrical energy into mechanical energy by electromagnetic means was demonstrated by the British scientist Michael Faraday in 1821. A free-hanging wire was dipped into a pool of mercury, on which a permanent magnet (PM) was placed. When a current was passed through the wire, the wire rotated around the magnet, showing that the current gave rise to a closed circular magnetic field around the wire. This motor is often demonstrated in physics experiments, with brine substituting for toxic mercury. Though Barlow's wheel was an early refinement to this Faraday demonstration, these and similar homopolar motors were to remain unsuited to practical application until late in the century.
thumb|200px|Jedlik's "electromagnetic self-rotor", 1827 (Museum of Applied Arts, Budapest). The historic motor still works perfectly today.
In 1827, Hungarian physicist Ányos Jedlik started experimenting with electromagnetic coils. After Jedlik solved the technical problems of the continuous rotation with the invention of the commutator, he called his early devices "electromagnetic self-rotors". Although they were used only for instructional purposes, in 1828 Jedlik demonstrated the first device to contain the three main components of practical DC motors: the stator, rotor and commutator. The device employed no permanent magnets, as the magnetic fields of both the stationary and revolving components were produced solely by the currents flowing through their windings.
Success with DC motors
After many other more or less successful attempts with relatively weak rotating and reciprocating apparatus the Prussian Moritz von Jacobi created the first real rotating electric motor in May 1834 that actually developed a remarkable mechanical output power. His motor set a world record which was improved only four years later in September 1838 by Jacobi himself. His second motor was powerful enough to drive a boat with 14 people across a wide river. It was not until 1839/40 that other developers worldwide managed to build motors of similar and later also of higher performance.
The first commutator DC electric motor capable of turning machinery was invented by the British scientist William Sturgeon in 1832. Following Sturgeon's work, a commutator-type direct-current electric motor made with the intention of commercial use was built by the American inventor Thomas Davenport, which he patented in 1837. The motors ran at up to 600 revolutions per minute, and powered machine tools and a printing press. Due to the high cost of primary battery power, the motors were commercially unsuccessful and Davenport went bankrupt. Several inventors followed Sturgeon in the development of DC motors but all encountered the same battery power cost issues. No electricity distribution had been developed at the time. Like Sturgeon's motor, there was no practical commercial market for these motors.
In 1855, Jedlik built a device using similar principles to those used in his electromagnetic self-rotors that was capable of useful work. He built a model electric vehicle that same year.
A major turning point in the development of DC machines took place in 1864, when Antonio Pacinotti described for the first time the ring armature with its symmetrically grouped coils closed upon themselves and connected to the bars of a commutator, the brushes of which delivered practically non-fluctuating current. The first commercially successful DC motors followed the invention by Zénobe Gramme who, in 1871, reinvented Pacinotti's design. In 1873, Gramme showed that his dynamo could be used as a motor, which he demonstrated to great effect at exhibitions in Vienna and Philadelphia by connecting two such DC motors at a distance of up to 2 km away from each other, one as a generator. (See also 1873: l'expérience décisive [The Decisive Experiment].)
In 1886, Frank Julian Sprague invented the first practical DC motor, a non-sparking motor that maintained relatively constant speed under variable loads. Other Sprague electric inventions about this time greatly improved grid electric distribution (prior work done while employed by Thomas Edison), allowed power from electric motors to be returned to the electric grid, provided for electric distribution to trolleys via overhead wires and the trolley pole, and provided control systems for electric operations. This allowed Sprague to use electric motors to invent the first electric trolley system in 1887–88 in Richmond, Virginia, the electric elevator and control system in 1892, and the electric subway with independently powered centrally controlled cars, which were first installed in 1892 in Chicago by the South Side Elevated Railway, where it became popularly known as the "L". Sprague's motor and related inventions led to an explosion of interest and use in electric motors for industry, while almost simultaneously another great inventor was developing its primary competitor, which would become much more widespread.
The development of electric motors of acceptable efficiency was delayed for several decades by failure to recognize the extreme importance of a relatively small air gap between rotor and stator. Efficient designs have a comparatively small air gap.
The St. Louis motor, long used in classrooms to illustrate motor principles, is extremely inefficient for the same reason, as well as appearing nothing like a modern motor.
Application of electric motors revolutionized industry. Industrial processes were no longer limited by power transmission using line shafts, belts, compressed air or hydraulic pressure. Instead, every machine could be equipped with its own electric motor, providing easy control at the point of use, and improving power transmission efficiency. Electric motors applied in agriculture eliminated human and animal muscle power from such tasks as handling grain or pumping water. Household uses of electric motors reduced heavy labor in the home and made higher standards of convenience, comfort and safety possible. Today, electric motors account for more than half of the electric energy consumption in the US.
Emergence of AC motors
In 1824, the French physicist François Arago formulated the existence of rotating magnetic fields, termed Arago's rotations. In 1879, by manually turning switches on and off, Walter Baily demonstrated the effect in what was, in essence, the first primitive induction motor.
In the 1880s, many inventors were trying to develop workable AC motors because AC's advantages in long-distance high-voltage transmission were counterbalanced by the inability to operate motors on AC. The first alternating-current commutatorless induction motors were independently invented by Galileo Ferraris and Nikola Tesla, a working motor model having been demonstrated by the former in 1885 and by the latter in 1887. In 1888, the Royal Academy of Science of Turin published Ferraris's research detailing the foundations of motor operation while however concluding that "the apparatus based on that principle could not be of any commercial importance as motor."
In 1888, Tesla presented his paper A New System for Alternating Current Motors and Transformers to the AIEE that described three patented two-phase four-stator-pole motor types: one with a four-pole rotor forming a non-self-starting reluctance motor, another with a wound rotor forming a self-starting induction motor, and the third a true synchronous motor with separately excited DC supply to rotor winding.
One of the patents Tesla filed in 1887, however, also described a shorted-winding-rotor induction motor. George Westinghouse promptly bought Tesla's patents, employed Tesla to develop them, and assigned C. F. Scott to help Tesla; however, Tesla left for other pursuits in 1889. The constant speed AC induction motor was found not to be suitable for street cars, but Westinghouse engineers successfully adapted it to power a mining operation in Telluride, Colorado in 1891.
Steadfast in his promotion of three-phase development, Mikhail Dolivo-Dobrovolsky invented the three-phase cage-rotor induction motor in 1889 and the three-limb transformer in 1890. This type of motor is now used for the vast majority of commercial applications. However, he claimed that Tesla's motor was not practical because of two-phase pulsations, which prompted him to persist in his three-phase work. Although Westinghouse achieved its first practical induction motor in 1892 and developed a line of polyphase 60 hertz induction motors in 1893, these early Westinghouse motors were two-phase motors with wound rotors until B. G. Lamme developed a rotating bar winding rotor.
The General Electric Company began developing three-phase induction motors in 1891. By 1896, General Electric and Westinghouse signed a cross-licensing agreement for the bar-winding-rotor design, later called the squirrel-cage rotor. Induction motor improvements flowing from these inventions and innovations were such that a 100 horsepower (HP) induction motor currently has the same mounting dimensions as a 7.5 HP motor in 1897.
Motor construction
thumb|right|Electric motor rotor (left) and stator (right)
Rotor
In an electric motor the moving part is the rotor which turns the shaft to deliver the mechanical power. The rotor usually has conductors laid into it which carry currents that interact with the magnetic field of the stator to generate the forces that turn the shaft. However, some rotors carry permanent magnets, and the stator holds the conductors.
Stator
The stator is the stationary part of the motor’s electromagnetic circuit and usually consists of either windings or permanent magnets. The stator core is made up of many thin metal sheets, called laminations. Laminations are used to reduce energy losses that would result if a solid core were used.
Air gap
The distance between the rotor and stator is called the air gap. The air gap has important effects, and is generally as small as possible, as a large gap has a strong negative effect on the performance of an electric motor. It is the main source of the low power factor at which motors operate. Because the air gap increases the magnetizing current needed, it should be kept to a minimum. Very small gaps, however, may pose mechanical problems in addition to noise and losses.
thumb|right|Salient-pole rotor
Windings
Windings are wires that are laid in coils, usually wrapped around a laminated soft iron magnetic core so as to form magnetic poles when energized with current.
Electric machines come in two basic magnet field pole configurations: salient-pole machine and nonsalient-pole machine. In the salient-pole machine the pole's magnetic field is produced by a winding wound around the pole below the pole face. In the nonsalient-pole, or distributed field, or round-rotor, machine, the winding is distributed in pole face slots. A shaded-pole motor has a winding around part of the pole that delays the phase of the magnetic field for that pole.
Some motors have conductors which consist of thicker metal, such as bars or sheets of metal, usually copper, although sometimes aluminum is used. These are usually powered by electromagnetic induction.
Commutator
thumb|A toy's small DC motor with its commutator
A commutator is a mechanism used to switch the input of most DC machines and certain AC machines, consisting of slip-ring segments insulated from each other and from the electric motor's shaft. The motor's armature current is supplied through the stationary brushes in contact with the revolving commutator, which causes required current reversal and applies power to the machine in an optimal manner as the rotor rotates from pole to pole.Hameyer, §5.1, p. 62; Lynn, §83, p. 812 In the absence of such current reversal, the motor would brake to a stop. In light of significant advances in the past few decades due to improved technologies in the electronic controller, sensorless control, induction motor, and permanent magnet motor fields, electromechanically commutated motors are increasingly being displaced by externally commutated induction and permanent-magnet motors.
Motor supply and control
Motor supply
A DC motor is usually supplied through a slip-ring commutator as described above. AC motors can be either slip-ring-commutator or externally commutated types, can be fixed-speed or variable-speed control types, and can be synchronous or asynchronous. Universal motors can run on either AC or DC.
Motor control
Fixed-speed controlled AC motors are provided with direct-on-line or soft-start starters.
Variable speed controlled AC motors are provided with a range of different power inverter, variable-frequency drive or electronic commutator technologies.
The term electronic commutator is usually associated with self-commutated brushless DC motor and switched reluctance motor applications.
Major categories
Electric motors operate on three different physical principles: magnetic, electrostatic and piezoelectric. By far the most common is magnetic.
In magnetic motors, magnetic fields are formed in both the rotor and the stator. The interaction between these two fields gives rise to a force, and thus a torque on the motor shaft. One, or both, of these fields must be made to change with the rotation of the motor. This is done by switching the poles on and off at the right time, or by varying the strength of the pole.
The main types are DC motors and AC motors, the former increasingly being displaced by the latter.
AC electric motors are either asynchronous or synchronous.
Once started, a synchronous motor requires synchronism with the moving magnetic field's synchronous speed for all normal torque conditions.
In synchronous machines, the magnetic field must be provided by means other than induction such as from separately excited windings or permanent magnets.
A fractional horsepower (FHP) motor either has a rating below about 1 horsepower (0.746 kW), or is manufactured with a standard frame size smaller than a standard 1 HP motor. Many household and industrial motors are in the fractional horsepower class.
Major categories by type of motor commutation:
Self-commutated
 Mechanical-commutator motors
  AC: universal motor (AC commutator series motor or AC/DC motor) [1], repulsion motor
  DC: electrically excited DC motor (separately excited, series, shunt, compound), PM DC motor
 Electronic-commutator (EC) motors – AC [5, 6]
  With PM rotor: BLDC motor
  With ferromagnetic rotor: SRM
Externally commutated
 Asynchronous machines – AC
  Three-phase motors: SCIM [3, 8], WRIM [4, 7, 8]
  AC motors [9]: capacitor, resistance, split, shaded-pole
 Synchronous machines [2] – AC [6]
  Three-phase motors: WRSM, PMSM or BLAC motor (IPMSM, SPMSM), hybrid
  AC motors [9]: permanent-split capacitor, hysteresis, stepper, SyRM, SyRM-PM hybrid
Typical drive electronics: simple electronics (mechanical-commutator AC motors); rectifier, linear transistor(s) or DC chopper (mechanical-commutator DC motors); more elaborate electronics (EC motors); most elaborate electronics (VFD), when provided (externally commutated machines).
Notes:
1. Rotation is independent of the frequency of the AC voltage.
2. Rotation is equal to synchronous speed (motor stator field speed).
3. In SCIM fixed-speed operation, rotation is equal to synchronous speed less slip speed.
4. In non-slip energy recovery systems, WRIM is usually used for motor starting but can be used to vary load speed.
5. Variable-speed operation.
6. Whereas induction and synchronous motor drives typically have either six-step or sinusoidal waveform output, BLDC motor drives usually have a trapezoidal current waveform; the behavior of both sinusoidal and trapezoidal PM machines is, however, identical in terms of their fundamental aspects.
7. In variable-speed operation, WRIM is used in slip energy recovery and doubly-fed induction machine applications.
8. A cage winding is a short-circuited squirrel-cage rotor; a wound winding is connected externally through slip rings.
9. Mostly single-phase with some three-phase.
Abbreviations:
BLAC - Brushless AC
BLDC - Brushless DC
BLDM - Brushless DC motor
EC - Electronic commutator
PM - Permanent magnet
IPMSM - Interior permanent magnet synchronous motor
PMSM - Permanent magnet synchronous motor
SPMSM - Surface permanent magnet synchronous motor
SCIM - Squirrel-cage induction motor
SRM - Switched reluctance motor
SyRM - Synchronous reluctance motor
VFD - Variable-frequency drive
WRIM - Wound-rotor induction motor
WRSM - Wound-rotor synchronous motor
Self-commutated motor
Brushed DC motor
All self-commutated DC motors are by definition run on DC electric power. Most DC motors are small PM types. They use brushed internal mechanical commutation to reverse the motor windings' current in synchronism with rotation.
Electrically excited DC motor
thumb|right|200px|Workings of a brushed electric motor with a two-pole rotor and PM stator. ("N" and "S" designate polarities on the inside faces of the magnets; the outside faces have opposite polarities.)
A commutated DC motor has a set of rotating windings wound on an armature mounted on a rotating shaft. The shaft also carries the commutator, a long-lasting rotary electrical switch that periodically reverses the flow of current in the rotor windings as the shaft rotates. Thus, every brushed DC motor has AC flowing through its rotating windings. Current flows through one or more pairs of brushes that bear on the commutator; the brushes connect an external source of electric power to the rotating armature.
The rotating armature consists of one or more coils of wire wound around a laminated, magnetically "soft" ferromagnetic core. Current from the brushes flows through the commutator and one winding of the armature, making it a temporary magnet (an electromagnet). The magnetic field produced by the armature interacts with a stationary magnetic field produced by either PMs or another winding (a field coil), as part of the motor frame. The force between the two magnetic fields tends to rotate the motor shaft. The commutator switches power to the coils as the rotor turns, keeping the magnetic poles of the rotor from ever fully aligning with the magnetic poles of the stator field, so that the rotor never stops (like a compass needle does), but rather keeps rotating as long as power is applied.
Many of the limitations of the classic commutator DC motor are due to the need for brushes to press against the commutator. This creates friction. Sparks are created by the brushes making and breaking circuits through the rotor coils as the brushes cross the insulating gaps between commutator sections. Depending on the commutator design, this may include the brushes shorting together adjacent sections – and hence coil ends – momentarily while crossing the gaps. Furthermore, the inductance of the rotor coils causes the voltage across each to rise when its circuit is opened, increasing the sparking of the brushes. This sparking limits the maximum speed of the machine, as too-rapid sparking will overheat, erode, or even melt the commutator. The current density per unit area of the brushes, in combination with their resistivity, limits the output of the motor. The making and breaking of electric contact also generates electrical noise; sparking generates RFI. Brushes eventually wear out and require replacement, and the commutator itself is subject to wear and maintenance (on larger motors) or replacement (on small motors). The commutator assembly on a large motor is a costly element, requiring precision assembly of many parts. On small motors, the commutator is usually permanently integrated into the rotor, so replacing it usually requires replacing the whole rotor.
While most commutators are cylindrical, some are flat discs consisting of several segments (typically, at least three) mounted on an insulator.
Large brushes are desired for a larger brush contact area to maximize motor output, but small brushes are desired for low mass to maximize the speed at which the motor can run without the brushes excessively bouncing and sparking. (Small brushes are also desirable for lower cost.) Stiffer brush springs can also be used to make brushes of a given mass work at a higher speed, but at the cost of greater friction losses (lower efficiency) and accelerated brush and commutator wear. Therefore, DC motor brush design entails a trade-off between output power, speed, and efficiency/wear.
DC machines are defined as follows:
Armature circuit – a winding that carries the load current; it can be either the stationary or the rotating part of the motor or generator.
Field circuit – a set of windings that produces a magnetic field so that electromagnetic induction can take place in electric machines.
Commutation – a mechanical technique by which rectification can be achieved, or from which DC can be derived, in DC machines.
thumb|300px|A: shunt B: series C: compound f = field coil
There are five types of brushed DC motor:
DC shunt-wound motor
DC series-wound motor
DC compound motor (two configurations):
Cumulative compound
Differentially compounded
PM DC motor (not shown)
Separately excited (not shown).
Permanent magnet DC motor
A PM motor does not have a field winding on the stator frame, instead relying on PMs to provide the magnetic field against which the rotor field interacts to produce torque. Compensating windings in series with the armature may be used on large motors to improve commutation under load. Because this field is fixed, it cannot be adjusted for speed control. PM fields (stators) are convenient in miniature motors to eliminate the power consumption of the field winding. Most larger DC motors are of the "dynamo" type, which have stator windings. Historically, PMs could not be made to retain high flux if they were disassembled; field windings were more practical to obtain the needed amount of flux. However, large PMs are costly, as well as dangerous and difficult to assemble; this favors wound fields for large machines.
To minimize overall weight and size, miniature PM motors may use high energy magnets made with neodymium or other strategic elements; most such are neodymium-iron-boron alloy. With their higher flux density, electric machines with high-energy PMs are at least competitive with all optimally designed singly-fed synchronous and induction electric machines. Miniature motors resemble the structure in the illustration, except that they have at least three rotor poles (to ensure starting, regardless of rotor position) and their outer housing is a steel tube that magnetically links the exteriors of the curved field magnets.
Electronic commutator (EC) motor
Brushless DC motor
Some of the problems of the brushed DC motor are eliminated in the BLDC design. In this motor, the mechanical "rotating switch" or commutator is replaced by an external electronic switch synchronised to the rotor's position. BLDC motors are typically 85–90% efficient or more. Efficiencies of up to 96.5% have been reported, whereas DC motors with brushgear are typically 75–80% efficient.
The BLDC motor's characteristic trapezoidal back-emf waveform is derived partly from the stator windings being evenly distributed, and partly from the placement of the rotor's PMs. Also known as electronically commutated DC or inside-out DC motors, trapezoidal BLDC motors can have single-phase, two-phase or three-phase stator windings and use Hall effect sensors mounted on their windings for rotor position sensing and low-cost closed-loop control of the electronic commutator.
BLDC motors are commonly used where precise speed control is necessary, as in computer disk drives or in video cassette recorders, the spindles within CD, CD-ROM (etc.) drives, and mechanisms within office products such as fans, laser printers and photocopiers. They have several advantages over conventional motors:
Compared to AC fans using shaded-pole motors, they are very efficient, running much cooler than the equivalent AC motors. This cool operation leads to much-improved life of the fan's bearings.
Without a commutator to wear out, the life of a BLDC motor can be significantly longer compared to a DC motor using brushes and a commutator. Commutation also tends to cause a great deal of electrical and RF noise; without a commutator or brushes, a BLDC motor may be used in electrically sensitive devices like audio equipment or computers.
The same Hall effect sensors that provide the commutation can also provide a convenient tachometer signal for closed-loop control (servo-controlled) applications. In fans, the tachometer signal can be used to derive a "fan OK" signal as well as provide running speed feedback.
The motor can be easily synchronized to an internal or external clock, leading to precise speed control.
BLDC motors have no chance of sparking, unlike brushed motors, making them better suited to environments with volatile chemicals and fuels. Also, sparking generates ozone which can accumulate in poorly ventilated buildings risking harm to occupants' health.
BLDC motors are usually used in small equipment such as computers and are generally used in fans to get rid of unwanted heat.
They are also acoustically very quiet, which is an advantage in equipment that is affected by vibration.
Modern BLDC motors range in power from a fraction of a watt to many kilowatts. Larger BLDC motors up to about 100 kW rating are used in electric vehicles. They also find significant use in high-performance electric model aircraft.
Switched reluctance motor
thumb|right|6/4 pole switched reluctance motor
The SRM has no brushes or PMs, and the rotor has no electric currents.
Instead, torque comes from a slight misalignment of poles on the rotor with poles on the stator.
The rotor aligns itself with the magnetic field of the stator, while the stator field windings are sequentially energized to rotate the stator field.
The magnetic flux created by the field windings follows the path of least magnetic reluctance, meaning the flux will flow through poles of the rotor that are closest to the energized poles of the stator, thereby magnetizing those poles of the rotor and creating torque. As the rotor turns, different windings will be energized, keeping the rotor turning.
SRMs are now being used in some appliances.
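To make the "path of least reluctance" idea concrete, here is a minimal sketch using the standard reluctance-torque relation T = ½·i²·dL/dθ, with a hypothetical phase-inductance profile that is highest when rotor and stator poles are aligned; all numbers are invented for illustration and do not describe any particular SRM.

```python
import math

L_ALIGNED = 0.012    # phase inductance with poles aligned, H (hypothetical)
L_UNALIGNED = 0.002  # phase inductance with poles unaligned, H (hypothetical)
N_ROTOR_POLES = 4

def phase_inductance(theta):
    """Idealized phase inductance as a function of rotor angle (radians)."""
    mean = (L_ALIGNED + L_UNALIGNED) / 2
    amp = (L_ALIGNED - L_UNALIGNED) / 2
    return mean + amp * math.cos(N_ROTOR_POLES * theta)

def reluctance_torque(theta, current):
    """T = 1/2 * i^2 * dL/dtheta, with dL/dtheta taken analytically for the profile above."""
    amp = (L_ALIGNED - L_UNALIGNED) / 2
    dL_dtheta = -amp * N_ROTOR_POLES * math.sin(N_ROTOR_POLES * theta)
    return 0.5 * current ** 2 * dL_dtheta

for deg in (0, 15, 30, 45, 60, 75, 90):
    theta = math.radians(deg)
    print(f"angle {deg:2d} deg: L = {phase_inductance(theta) * 1000:5.2f} mH, "
          f"torque = {reluctance_torque(theta, 10.0):+7.3f} N*m")
```

The torque is zero at the fully aligned and fully unaligned positions and peaks in between, which is why the stator phases must be energized in sequence to keep the rotor turning.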
Universal AC-DC motor
thumb|Modern low-cost universal motor, from a vacuum cleaner. Field windings are dark copper-colored, toward the back, on both sides. The rotor's laminated core is gray metallic, with dark slots for winding the coils. The commutator (partly hidden) has become dark from use; it is toward the front. The large brown molded-plastic piece in the foreground supports the brush guides and brushes (both sides), as well as the front motor bearing.
A commutated electrically excited series or parallel wound motor is referred to as a universal motor because it can be designed to operate on AC or DC power. A universal motor can operate well on AC because the current in both the field and the armature coils (and hence the resultant magnetic fields) will alternate (reverse polarity) in synchronism, and hence the resulting mechanical force will occur in a constant direction of rotation.
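As a rough numerical illustration of the point above, the sketch below (plain Python with made-up constants) treats the instantaneous torque of a series-connected machine as proportional to the product of field and armature current; because both currents reverse together on an AC supply, their product, and hence the torque direction, never changes sign.

```python
import math

# Hypothetical constants, purely illustrative (not from any datasheet)
K = 0.05       # torque constant of a series-connected machine, N*m per A^2
I_PEAK = 4.0   # peak supply current, A
FREQ = 50.0    # supply frequency, Hz

for k in range(8):                                  # sample one AC period
    t = k / (8 * FREQ)
    i = I_PEAK * math.sin(2 * math.pi * FREQ * t)   # same current in field and armature
    torque = K * i * i                              # always >= 0: torque direction is constant
    print(f"t = {t * 1000:5.2f} ms  i = {i:+6.2f} A  torque = {torque:6.3f} N*m")
```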
Operating at normal power line frequencies, universal motors are often found in a range less than . Universal motors also formed the basis of the traditional railway traction motor in electric railways. In this application, the use of AC to power a motor originally designed to run on DC would lead to efficiency losses due to eddy current heating of its magnetic components, particularly the motor field pole-pieces that, for DC, would have used solid (un-laminated) iron; such motors are now rarely used for this purpose.
An advantage of the universal motor is that AC supplies may be used on motors which have some characteristics more common in DC motors, specifically high starting torque and very compact design if high running speeds are used. The negative aspect is the maintenance and short life problems caused by the commutator. Such motors are used in devices such as food mixers and power tools which are used only intermittently, and often have high starting-torque demands. Multiple taps on the field coil provide (imprecise) stepped speed control. Household blenders that advertise many speeds frequently combine a field coil with several taps and a diode that can be inserted in series with the motor (causing the motor to run on half-wave rectified AC). Universal motors also lend themselves to electronic speed control and, as such, are an ideal choice for devices like domestic washing machines. The motor can be used to agitate the drum (both forwards and in reverse) by switching the field winding with respect to the armature.
Whereas SCIMs cannot turn a shaft faster than allowed by the power line frequency, universal motors can run at much higher speeds. This makes them useful for appliances such as blenders, vacuum cleaners, and hair dryers where high speed and light weight are desirable. They are also commonly used in portable power tools, such as drills, sanders, circular and jig saws, where the motor's characteristics work well. Many vacuum cleaner and weed trimmer motors exceed , while many similar miniature grinders exceed .
Externally commutated AC machine
The design of AC induction and synchronous motors is optimized for operation on single-phase or polyphase sinusoidal or quasi-sinusoidal waveform power such as supplied for fixed-speed application from the AC power grid or for variable-speed application from VFD controllers. An AC motor has two parts: a stationary stator having coils supplied with AC to produce a rotating magnetic field, and a rotor attached to the output shaft that is given a torque by the rotating field.
Induction motor
thumb|Large 4,500 HP AC Induction Motor.
Cage and wound rotor induction motor
An induction motor is an asynchronous AC motor where power is transferred to the rotor by electromagnetic induction, much like transformer action. An induction motor resembles a rotating transformer, because the stator (stationary part) is essentially the primary side of the transformer and the rotor (rotating part) is the secondary side. Polyphase induction motors are widely used in industry.
Induction motors may be further divided into Squirrel Cage Induction Motors and Wound Rotor Induction Motors. SCIMs have a heavy winding made up of solid bars, usually aluminum or copper, joined by rings at the ends of the rotor. When one considers only the bars and rings as a whole, they are much like an animal's rotating exercise cage, hence the name.
Currents induced into this winding provide the rotor magnetic field. The shape of the rotor bars determines the speed-torque characteristics. At low speeds, the current induced in the squirrel cage is nearly at line frequency and tends to be in the outer parts of the rotor cage. As the motor accelerates, the slip frequency becomes lower, and more current is in the interior of the winding. By shaping the bars to change the resistance of the winding portions in the interior and outer parts of the cage, effectively a variable resistance is inserted in the rotor circuit. However, the majority of such motors have uniform bars.
In a WRIM, the rotor winding is made of many turns of insulated wire and is connected to slip rings on the motor shaft. An external resistor or other control devices can be connected in the rotor circuit. Resistors allow control of the motor speed, although significant power is dissipated in the external resistance. A converter can be fed from the rotor circuit and return the slip-frequency power that would otherwise be wasted back into the power system through an inverter or separate motor-generator.
The WRIM is used primarily to start a high inertia load or a load that requires a very high starting torque across the full speed range. By correctly selecting the resistors used in the secondary resistance or slip ring starter, the motor is able to produce maximum torque at a relatively low supply current from zero speed to full speed. This type of motor also offers controllable speed.
Motor speed can be changed because the torque curve of the motor is effectively modified by the amount of resistance connected to the rotor circuit. Increasing the value of resistance will move the speed of maximum torque down. If the resistance connected to the rotor is increased beyond the point where the maximum torque occurs at zero speed, the torque will be further reduced.
When used with a load that has a torque curve that increases with speed, the motor will operate at the speed where the torque developed by the motor is equal to the load torque. Reducing the load will cause the motor to speed up, and increasing the load will cause the motor to slow down until the load and motor torque are equal. Operated in this manner, the slip losses are dissipated in the secondary resistors and can be very significant. The speed regulation and net efficiency is also very poor.
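A small sketch can illustrate this effect. It uses the standard per-phase induction-motor equivalent circuit (not any specific machine; all parameter values below are hypothetical) to show that adding external rotor resistance moves the speed of maximum torque down while leaving the peak torque magnitude essentially unchanged.

```python
import math

# Hypothetical per-phase equivalent-circuit parameters of a small wound-rotor machine
V = 230.0          # per-phase stator voltage, V
F = 50.0           # supply frequency, Hz
POLES = 4
R1, X1 = 0.5, 1.5  # stator resistance and leakage reactance, ohms
X2 = 1.5           # rotor leakage reactance referred to the stator, ohms

W_SYNC = 4 * math.pi * F / POLES   # synchronous mechanical speed, rad/s

def torque(slip, r2_total):
    """Steady-state torque from the per-phase equivalent circuit; r2_total includes any external resistance."""
    rr = r2_total / slip
    return 3 * V ** 2 * rr / (W_SYNC * ((R1 + rr) ** 2 + (X1 + X2) ** 2))

for r2_total in (0.4, 1.0, 2.5):   # internal rotor resistance plus external resistance, ohms
    s_peak = min(r2_total / math.sqrt(R1 ** 2 + (X1 + X2) ** 2), 1.0)  # slip of maximum torque
    rpm_at_peak = (1 - s_peak) * W_SYNC * 60 / (2 * math.pi)
    print(f"R2 = {r2_total:3.1f} ohm: peak torque ~{torque(s_peak, r2_total):5.1f} N*m "
          f"at about {rpm_at_peak:4.0f} rpm")
```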
Torque motor
A torque motor is a specialized form of electric motor which can operate indefinitely while stalled, that is, with the rotor blocked from turning, without incurring damage. In this mode of operation, the motor will apply a steady torque to the load (hence the name).
A common application of a torque motor would be the supply- and take-up reel motors in a tape drive. In this application, driven from a low voltage, the characteristics of these motors allow a relatively constant light tension to be applied to the tape whether or not the capstan is feeding tape past the tape heads. Driven from a higher voltage, (and so delivering a higher torque), the torque motors can also achieve fast-forward and rewind operation without requiring any additional mechanics such as gears or clutches. In the computer gaming world, torque motors are used in force feedback steering wheels.
Another common application is the control of the throttle of an internal combustion engine in conjunction with an electronic governor. In this usage, the motor works against a return spring to move the throttle in accordance with the output of the governor. The latter monitors engine speed by counting electrical pulses from the ignition system or from a magnetic pickup and, depending on the speed, makes small adjustments to the amount of current applied to the motor. If the engine starts to slow down relative to the desired speed, the current will be increased, the motor will develop more torque, pulling against the return spring and opening the throttle. Should the engine run too fast, the governor will reduce the current being applied to the motor, causing the return spring to pull back and close the throttle.
Synchronous motor
A synchronous electric motor is an AC motor distinguished by a rotor that spins in step with the rotating magnetic field of the AC supply that drives it. Another way of saying this is that it has zero slip under usual operating conditions. Contrast this with an induction motor, which must slip to produce torque. One type of synchronous motor is like an induction motor except the rotor is excited by a DC field. Slip rings and brushes are used to conduct current to the rotor. The rotor poles connect to each other and move at the same speed, hence the name synchronous motor. Another type, for low load torque, has flats ground onto a conventional squirrel-cage rotor to create discrete poles. Yet another, such as made by Hammond for its pre-World War II clocks, and in the older Hammond organs, has no rotor windings and discrete poles. It is not self-starting. The clock requires manual starting by a small knob on the back, while the older Hammond organs had an auxiliary starting motor connected by a spring-loaded manually operated switch.
Finally, hysteresis synchronous motors typically are (essentially) two-phase motors with a phase-shifting capacitor for one phase. They start like induction motors, but when slip rate decreases sufficiently, the rotor (a smooth cylinder) becomes temporarily magnetized. Its distributed poles make it act like a PMSM. The rotor material, like that of a common nail, will stay magnetized, but can also be demagnetized with little difficulty. Once running, the rotor poles stay in place; they do not drift.
Low-power synchronous timing motors (such as those for traditional electric clocks) may have multi-pole PM external cup rotors, and use shading coils to provide starting torque. Telechron clock motors have shaded poles for starting torque, and a two-spoke ring rotor that performs like a discrete two-pole rotor.
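The synchronous speed referred to throughout this section follows directly from the supply frequency and the number of poles; a minimal helper is sketched below.

```python
def synchronous_speed_rpm(frequency_hz: float, poles: int) -> float:
    """Synchronous speed in rpm: 120 * f / p, where p is the number of magnetic poles."""
    return 120.0 * frequency_hz / poles

# A few common supply frequencies and pole counts
for f in (50, 60):
    for p in (2, 4, 6):
        print(f"{f} Hz, {p}-pole: {synchronous_speed_rpm(f, p):6.0f} rpm")
```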
Doubly-fed electric machine
Doubly fed electric motors have two independent multiphase winding sets, which contribute active (i.e., working) power to the energy conversion process, with at least one of the winding sets electronically controlled for variable speed operation. Two independent multiphase winding sets (i.e., dual armature) are the maximum provided in a single package without topology duplication. Doubly-fed electric motors are machines with an effective constant torque speed range that is twice synchronous speed for a given frequency of excitation. This is twice the constant torque speed range as singly-fed electric machines, which have only one active winding set.
A doubly-fed motor allows for a smaller electronic converter but the cost of the rotor winding and slip rings may offset the saving in the power electronics components. Difficulties with controlling speed near synchronous speed limit applications.
Special magnetic motors
Rotary
Ironless or coreless rotor motor
thumb|A miniature coreless motor
Nothing in the principle of any of the motors described above requires that the iron (steel) portions of the rotor actually rotate. If the soft magnetic material of the rotor is made in the form of a cylinder, then (except for the effect of hysteresis) torque is exerted only on the windings of the electromagnets. Taking advantage of this fact is the coreless or ironless DC motor, a specialized form of a PM DC motor. Optimized for rapid acceleration, these motors have a rotor that is constructed without any iron core. The rotor can take the form of a winding-filled cylinder, or a self-supporting structure comprising only the magnet wire and the bonding material. The rotor can fit inside the stator magnets; a magnetically soft stationary cylinder inside the rotor provides a return path for the stator magnetic flux. A second arrangement has the rotor winding basket surrounding the stator magnets. In that design, the rotor fits inside a magnetically soft cylinder that can serve as the housing for the motor, and likewise provides a return path for the flux.
Because the rotor is much lighter in weight (mass) than a conventional rotor formed from copper windings on steel laminations, the rotor can accelerate much more rapidly, often achieving a mechanical time constant under one ms. This is especially true if the windings use aluminum rather than the heavier copper. But because there is no metal mass in the rotor to act as a heat sink, even small coreless motors must often be cooled by forced air. Overheating might be an issue for coreless DC motor designs. Modern software, such as Motor-CAD, can help to increase the thermal efficiency of motors while still in the design stage.
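The sub-millisecond figure quoted above can be checked with the usual first-order relation for a brushed DC motor, τm = J·R / (kT·kE); the values below are hypothetical but representative of a very small coreless motor.

```python
# Hypothetical parameters for a small coreless (ironless-rotor) DC motor
J = 5.0e-8    # rotor inertia, kg*m^2 (very low because the rotor carries no iron)
R = 1.5       # armature (terminal) resistance, ohms
K_T = 0.01    # torque constant, N*m/A
K_E = 0.01    # back-emf constant, V*s/rad (numerically equal to K_T in SI units)

tau_m = J * R / (K_T * K_E)      # first-order mechanical time constant
print(f"mechanical time constant: {tau_m * 1000:.2f} ms")   # 0.75 ms for these numbers
```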
Among these types are the disc-rotor types, described in more detail in the next section.
Vibrator motors for cellular phones are sometimes tiny cylindrical PM field types, but there are also disc-shaped types which have a thin multipolar disc field magnet, and an intentionally unbalanced molded-plastic rotor structure with two bonded coreless coils. Metal brushes and a flat commutator switch power to the rotor coils.
Related limited-travel actuators have no core and a bonded coil placed between the poles of high-flux thin PMs. These are the fast head positioners for rigid-disk ("hard disk") drives. Although the contemporary design differs considerably from that of loudspeakers, it is still loosely (and incorrectly) referred to as a "voice coil" structure, because some earlier rigid-disk-drive heads moved in straight lines, and had a drive structure much like that of a loudspeaker.
Pancake or axial rotor motor
A rather unusual motor design, the printed armature or pancake motor has the windings shaped as a disc running between arrays of high-flux magnets. The magnets are arranged in a circle facing the rotor with space in between to form an axial air gap. This design is commonly known as the pancake motor because of its extremely flat profile, although the technology has had many brand names since its inception, such as ServoDisc.
The printed armature (originally formed on a printed circuit board) in a printed armature motor is made from punched copper sheets that are laminated together using advanced composites to form a thin rigid disc. The printed armature has a unique construction in the brushed motor world in that it does not have a separate ring commutator. The brushes run directly on the armature surface making the whole design very compact.
An alternative manufacturing method is to use wound copper wire laid flat with a central conventional commutator, in a flower and petal shape. The windings are typically stabilized by being impregnated with electrical epoxy potting systems. These are filled epoxies that have moderate mixed viscosity and a long gel time. They are highlighted by low shrinkage and low exotherm, and are typically UL 1446 recognized as a potting compound insulated with 180 °C, Class H rating.
The unique advantage of ironless DC motors is that there is no cogging (torque variations caused by changing attraction between the iron and the magnets). Parasitic eddy currents cannot form in the rotor, as it is totally ironless (conventional iron rotors are laminated to limit such currents). This can greatly improve efficiency, but variable-speed controllers must use a higher switching rate (>40 kHz) or DC because of the decreased electromagnetic induction.
These motors were originally invented to drive the capstan(s) of magnetic tape drives in the burgeoning computer industry, where minimal time to reach operating speed and minimal stopping distance were critical. Pancake motors are still widely used in high-performance servo-controlled systems, robotic systems, industrial automation and medical devices. Due to the variety of constructions now available, the technology is used in applications from high temperature military to low cost pump and basic servos.
Servo motor
A servomotor is a motor, very often sold as a complete module, which is used within a position-control or speed-control feedback control system, for example in motor-operated control valves. Servomotors are used in applications such as machine tools, pen plotters, and other process systems. Motors intended for use in a servomechanism must have well-documented characteristics for speed, torque, and power. The speed vs. torque curve is quite important and is high ratio for a servo motor. Dynamic response characteristics such as winding inductance and rotor inertia are also important; these factors limit the overall performance of the servomechanism loop. Large, powerful, but slow-responding servo loops may use conventional AC or DC motors and drive systems with position or speed feedback on the motor. As dynamic response requirements increase, more specialized motor designs such as coreless motors are used. AC motors' superior power density and acceleration characteristics compared to those of DC motors tend to favor PM synchronous, BLDC, induction, and SRM drive applications.
A servo system differs from some stepper motor applications in that the position feedback is continuous while the motor is running; a stepper system relies on the motor not to "miss steps" for short term accuracy, although a stepper system may include a "home" switch or other element to provide long-term stability of control. For instance, when a typical dot matrix computer printer starts up, its controller makes the print head stepper motor drive to its left-hand limit, where a position sensor defines home position and stops stepping. As long as power is on, a bidirectional counter in the printer's microprocessor keeps track of print-head position.
Stepper motor
thumb|A stepper motor with a soft iron rotor, with active windings shown. In 'A' the active windings tend to hold the rotor in position. In 'B' a different set of windings are carrying a current, which generates torque and rotation.
Stepper motors are a type of motor frequently used when precise rotations are required. In a stepper motor an internal rotor containing PMs or a magnetically soft rotor with salient poles is controlled by a set of external magnets that are switched electronically. A stepper motor may also be thought of as a cross between a DC electric motor and a rotary solenoid. As each coil is energized in turn, the rotor aligns itself with the magnetic field produced by the energized field winding. Unlike a synchronous motor, in its application, the stepper motor may not rotate continuously; instead, it "steps"—starts and then quickly stops again—from one position to the next as field windings are energized and de-energized in sequence. Depending on the sequence, the rotor may turn forwards or backwards, and it may change direction, stop, speed up or slow down arbitrarily at any time.
Simple stepper motor drivers entirely energize or entirely de-energize the field windings, leading the rotor to "cog" to a limited number of positions; more sophisticated drivers can proportionally control the power to the field windings, allowing the rotors to position between the cog points and thereby rotate extremely smoothly. This mode of operation is often called microstepping. Computer controlled stepper motors are one of the most versatile forms of positioning systems, particularly when part of a digital servo-controlled system.
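A minimal sketch of the full-step drive idea follows; it only prints the coil-energization pattern of a hypothetical two-phase bipolar stepper rather than talking to real hardware, but the sequencing logic is the part a simple driver would implement.

```python
import time

# Full-step sequence for a two-phase bipolar stepper: (phase A polarity, phase B polarity)
FULL_STEP_SEQUENCE = [(+1, 0), (0, +1), (-1, 0), (0, -1)]

def step(position, direction=+1):
    """Advance one full step; a real driver would energize the coils accordingly."""
    position = (position + direction) % len(FULL_STEP_SEQUENCE)
    a, b = FULL_STEP_SEQUENCE[position]
    print(f"step index {position}: phase A = {a:+d}, phase B = {b:+d}")
    return position

pos = 0
for _ in range(8):        # two passes through the four-step electrical cycle
    pos = step(pos, +1)
    time.sleep(0.01)      # the step rate sets the rotation speed
```

Microstepping drivers refine this by feeding the two phases with intermediate, roughly sinusoidal current levels instead of the all-or-nothing values shown here.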
Stepper motors can be rotated to a specific angle in discrete steps with ease, and hence stepper motors are used for read/write head positioning in computer floppy diskette drives. They were used for the same purpose in pre-gigabyte era computer disk drives, where the precision and speed they offered was adequate for the correct positioning of the read/write head of a hard disk drive. As drive density increased, the precision and speed limitations of stepper motors made them obsolete for hard drives—the precision limitation made them unusable, and the speed limitation made them uncompetitive—thus newer hard disk drives use voice coil-based head actuator systems. (The term "voice coil" in this connection is historic; it refers to the structure in a typical (cone type) loudspeaker. This structure was used for a while to position the heads. Modern drives have a pivoted coil mount; the coil swings back and forth, something like a blade of a rotating fan. Nevertheless, like a voice coil, modern actuator coil conductors (the magnet wire) move perpendicular to the magnetic lines of force.)
Stepper motors were and still are often used in computer printers, optical scanners, and digital photocopiers to move the optical scanning element, the print head carriage (of dot matrix and inkjet printers), and the platen or feed rollers. Likewise, many computer plotters (which since the early 1990s have been replaced with large-format inkjet and laser printers) used rotary stepper motors for pen and platen movement; the typical alternatives here were either linear stepper motors or servomotors with closed-loop analog control systems.
So-called quartz analog wristwatches contain the smallest commonplace stepping motors; they have one coil, draw very little power, and have a PM rotor. The same kind of motor drives battery-powered quartz clocks. Some of these watches, such as chronographs, contain more than one stepping motor.
Closely related in design to three-phase AC synchronous motors, stepper motors and SRMs are classified as variable reluctance motor types.Bose, pp. 569–570, 891 Stepper motors are also often used in computer numerical control (CNC) machines such as routers, plasma cutters and CNC lathes.
Linear motor
A linear motor is essentially any electric motor that has been "unrolled" so that, instead of producing a torque (rotation), it produces a straight-line force along its length.
Linear motors are most commonly induction motors or stepper motors. Linear motors are commonly found in many roller-coasters where the rapid motion of the motorless railcar is controlled by the rail. They are also used in maglev trains, where the train "flies" over the ground. On a smaller scale, the 1978 era HP 7225A pen plotter used two linear stepper motors to move the pen along the X and Y axes.
Comparison by major categories
Comparison of motor types

Self-commutated motors

Brushed DC
 Advantages: simple speed control; low initial cost
 Disadvantages: maintenance (brushes); medium lifespan; costly commutator and brushes
 Typical applications: steel mills; paper making machines; treadmill exercisers; automotive accessories
 Typical drive, output: rectifier, linear transistor(s) or DC chopper controller.Stölting, p. 9

Brushless DC motor (BLDC or BLDM)
 Advantages: long lifespan; low maintenance; high efficiency
 Disadvantages: higher initial cost; requires EC controller with closed-loop control
 Typical applications: rigid ("hard") disk drives; CD/DVD players; electric vehicles; RC vehicles; UAVs
 Typical drive, output: synchronous; single-phase or three-phase with PM rotor and trapezoidal stator winding; VFD typically VS PWM inverter type.

Switched reluctance motor (SRM)
 Advantages: long lifespan; low maintenance; high efficiency; no permanent magnets; low cost; simple construction
 Disadvantages: mechanical resonance possible; high iron losses; open or vector control and parallel operation not possible; requires EC controller
 Typical applications: appliances; electric vehicles; textile mills; aircraft applications
 Typical drive, output: PWM and various other drive types, which tend to be used in very specialized / OEM applications.

Universal motor
 Advantages: high starting torque; compact; high speed
 Disadvantages: maintenance (brushes); shorter lifespan; usually acoustically noisy; only small ratings are economical
 Typical applications: handheld power tools, blenders, vacuum cleaners, insulation blowers
 Typical drive, output: variable single-phase AC, half-wave or full-wave phase-angle control with triac(s); closed-loop control optional.

AC asynchronous motors

AC polyphase squirrel-cage or wound-rotor induction motor (SCIM or WRIM)
 Advantages: self-starting; low cost; robust; reliable; ratings to 1+ MW; standardized types
 Disadvantages: high starting current; lower efficiency due to need for magnetization
 Typical applications: fixed-speed, traditionally, SCIM the world's workhorse, especially in low-performance applications of all types; variable-speed, traditionally, low-performance variable-torque pumps, fans, blowers and compressors; variable-speed, increasingly, other high-performance constant-torque and constant-power or dynamic loads
 Typical drive, output: fixed-speed, low-performance applications of all types; variable-speed, traditionally, WRIM drives or fixed-speed V/Hz-controlled VSDs; variable-speed, increasingly, vector-controlled VSDs displacing DC, WRIM and single-phase AC induction motor drives.

AC SCIM split-phase capacitor-start
 Advantages: high power; high starting torque
 Disadvantages: speed slightly below synchronous; starting switch or relay required
 Typical applications: appliances; stationary power tools
 Typical drive, output: fixed or variable single-phase AC, variable speed being derived, typically, by full-wave phase-angle control with triac(s); closed-loop control optional.Stölting, p. 9

AC SCIM split-phase capacitor-run
 Advantages: moderate power; high starting torque; no starting switch; comparatively long life
 Disadvantages: speed slightly below synchronous; slightly more costly
 Typical applications: industrial blowers; industrial machinery

AC SCIM split-phase, auxiliary start winding
 Advantages: moderate power; low starting torque
 Disadvantages: speed slightly below synchronous; starting switch or relay required
 Typical applications: appliances; stationary power tools

AC induction shaded-pole motor
 Advantages: low cost; long life
 Disadvantages: speed slightly below synchronous; low starting torque; small ratings; low efficiency
 Typical applications: fans, appliances, record players

AC synchronous motors

Wound-rotor synchronous motor (WRSM)
 Advantages: synchronous speed; inherently more efficient than an induction motor of low power factor
 Disadvantages: more costly
 Typical applications: industrial motors
 Typical drive, output: fixed or variable speed, three-phase; VFD typically six-step CS load-commutated inverter type or VS PWM inverter type.Bose, pp. 480–481

Hysteresis motor
 Advantages: accurate speed control; low noise; no vibration; high starting torque
 Disadvantages: very low efficiency
 Typical applications: clocks, timers, sound producing or recording equipment, hard drive, capstan drive
 Typical drive, output: single-phase AC, two-phase capacitor-start, capacitor-run motor

Synchronous reluctance motor (SyRM)
 Advantages: equivalent to SCIM except more robust, more efficient, runs cooler, smaller footprint; competes with PM synchronous motor without demagnetization issues
 Disadvantages: requires a controller; not widely available; high cost
 Typical applications: appliances; electric vehicles; textile mills; aircraft applications
 Typical drive, output: VFD can be standard DTC type or VS inverter PWM type.

Specialty motors

Pancake or axial rotor motors
 Advantages: compact design; simple speed control
 Disadvantages: medium cost; medium lifespan
 Typical applications: office equipment; fans/pumps; fast industrial and military servos
 Typical drive, output: drives can typically be brushed or brushless DC type.

Stepper motor
 Advantages: precision positioning; high holding torque
 Disadvantages: some can be costly; require a controller
 Typical applications: positioning in printers and floppy disc drives; industrial machine tools
 Typical drive, output: not a VFD; stepper position is determined by pulse counting.Stölting, p. 10Bose, p. 389
Electromagnetism
Force and torque
The fundamental purpose of the vast majority of the world's electric motors is to electromagnetically induce relative movement in an air gap between a stator and rotor to produce useful torque or linear force.
According to the Lorentz force law, the force on a winding conductor of length ℓ carrying current I in a magnetic flux density B can be given simply by:
F = I ℓ × B
or more generally, to handle conductors with any geometry, by integrating the force density J × B (current density crossed with flux density) over the conductor volume:
F = ∫ (J × B) dV
The most general approaches to calculating the forces in motors use tensors.
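As a concrete illustration of the first relation, the short Python sketch below evaluates F = I ℓ × B for a single straight conductor using a vector cross product; the function-free layout and all numerical values are invented purely for the example.

```python
import numpy as np

# Force on a straight current-carrying conductor in a uniform field: F = I (L x B).
# Illustrative values only; they do not come from the article.
I = 10.0                        # current, amperes
L = np.array([0.0, 0.0, 0.2])   # conductor length vector, metres (0.2 m along z)
B = np.array([1.1, 0.0, 0.0])   # flux density vector, teslas (1.1 T along x)

F = I * np.cross(L, B)          # force vector, newtons
print(F)                        # [0.  2.2 0. ] -> 2.2 N directed along y
```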
Power
Where rpm is shaft speed and T is torque, a motor's mechanical power output P_em is given by,
in British units with T expressed in foot-pounds,
P_em = rpm × T / 5252 (horsepower), and,
in SI units with shaft angular speed ω expressed in radians per second, and T expressed in newton-meters,
P_em = ω × T (watts).
For a linear motor, with force F expressed in newtons and velocity v expressed in meters per second,
P_em = F × v (watts).
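A brief Python sketch of these unit conventions follows; the operating point (1800 rpm, 10 N·m) and the linear-motor values are hypothetical and serve only to show that the horsepower and watt forms describe the same quantity.

```python
import math

def power_hp(rpm, torque_ft_lb):
    """Mechanical power in horsepower, with torque in foot-pounds."""
    return rpm * torque_ft_lb / 5252.0

def power_watts(omega_rad_s, torque_nm):
    """Mechanical power in watts, with speed in rad/s and torque in N·m."""
    return omega_rad_s * torque_nm

def linear_power_watts(force_n, velocity_m_s):
    """Mechanical power of a linear motor in watts."""
    return force_n * velocity_m_s

rpm = 1800.0
torque_nm = 10.0
omega = rpm * 2.0 * math.pi / 60.0            # rpm converted to rad/s
print(power_watts(omega, torque_nm))          # ~1885 W
print(power_hp(rpm, torque_nm * 0.7376))      # same operating point in hp, ~2.5 hp
print(linear_power_watts(500.0, 3.0))         # linear motor: 500 N at 3 m/s -> 1500 W
```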
In an asynchronous or induction motor, the relationship between motor speed and air gap power is, neglecting skin effect, given by the following:
P_airgap = (R_r / s) × I_r², where
R_r = rotor resistance
I_r² = square of the current induced in the rotor
s = motor slip; i.e., the difference between synchronous speed and rotor speed, expressed as a fraction of synchronous speed, which provides the relative movement needed for current induction in the rotor.
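To make the roles of slip and the rotor quantities explicit, the following sketch (hypothetical machine values, function names chosen for the example) computes slip from the synchronous and rotor speeds and then evaluates the air-gap power relation above, taken here on a per-phase basis.

```python
def slip(n_sync_rpm, n_rotor_rpm):
    """Slip: difference between synchronous and rotor speed, as a fraction of synchronous speed."""
    return (n_sync_rpm - n_rotor_rpm) / n_sync_rpm

def airgap_power(r_rotor_ohm, i_rotor_a, s):
    """Air-gap power from the relation above, P = R_r * I_r^2 / s (per phase)."""
    return r_rotor_ohm * i_rotor_a**2 / s

# Example: a 4-pole machine on a 50 Hz supply has a 1500 rpm synchronous speed.
s = slip(1500.0, 1440.0)           # 0.04, i.e. 4 % slip
print(s)
print(airgap_power(0.2, 30.0, s))  # 0.2 * 30^2 / 0.04 = 4500 W per phase
```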
Back emf
Since the armature windings of a direct-current or universal motor are moving through a magnetic field, they have a voltage induced in them. This voltage tends to oppose the motor supply voltage and so is called "back electromotive force (emf)". The voltage is proportional to the running speed of the motor. The back emf of the motor, plus the voltage drop across the winding internal resistance and brushes, must equal the voltage at the brushes. This provides the fundamental mechanism of speed regulation in a DC motor. If the mechanical load increases, the motor slows down; a lower back emf results, and more current is drawn from the supply. This increased current provides the additional torque to balance the new load.
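A minimal steady-state sketch of this mechanism is given below; the supply voltage, winding resistance and motor constant are hypothetical, and the model ignores everything except the back-emf balance, but it shows how a heavier load lowers the speed and back emf and raises the current drawn.

```python
def dc_motor_steady_state(v_supply, resistance, k, load_torque):
    """
    Simplified brushed DC motor at steady state (illustrative only):
        torque   = k * current
        back_emf = k * omega
        v_supply = back_emf + current * resistance
    """
    current = load_torque / k                    # current needed to hold the load
    back_emf = v_supply - current * resistance   # remaining voltage appears as back emf
    omega = back_emf / k                         # resulting shaft speed, rad/s
    return current, back_emf, omega

# Hypothetical motor: 24 V supply, 0.5 ohm winding, k = 0.05 N·m/A (= V·s/rad).
for load in (0.1, 0.3):                          # increasing load torque, N·m
    i, e, w = dc_motor_steady_state(24.0, 0.5, 0.05, load)
    print(f"load={load} N·m -> current={i:.1f} A, back emf={e:.1f} V, speed={w:.0f} rad/s")
```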
In AC machines, it is sometimes useful to consider a back emf source within the machine; this is of particular concern for close speed regulation of induction motors on VFDs, for example.
Losses
Motor losses are mainly due to resistive losses in the windings, core losses, and mechanical losses in bearings; aerodynamic losses, particularly where cooling fans are present, also occur.
Losses also occur in commutation: mechanical commutators spark, and electronic commutators likewise dissipate heat.
Efficiency
To calculate a motor's efficiency, the mechanical output power is divided by the electrical input power:
η = P_m / P_e,
where η is the energy conversion efficiency, P_e is the electrical input power, and P_m is the mechanical output power:
P_e = I V
P_m = T ω
where V is the input voltage, I is the input current, T is the output torque, and ω is the output angular velocity. It is possible to derive analytically the point of maximum efficiency; it is typically at less than 1/2 the stall torque.
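The statement about the maximum-efficiency point can be checked numerically with a simple permanent-magnet DC motor model; the constants below are hypothetical, and a fixed no-load current stands in for friction and iron losses.

```python
import numpy as np

# Simple permanent-magnet DC motor model (illustrative constants only).
V = 12.0     # supply voltage, volts
R = 1.0      # winding resistance, ohms
k = 0.02     # torque constant, N·m/A (equal to the back-emf constant in V·s/rad)
I0 = 0.3     # no-load current standing in for friction and iron losses, amperes

I_stall = V / R
T_stall = k * (I_stall - I0)

torques = np.linspace(1e-4, T_stall * 0.999, 2000)
currents = I0 + torques / k
omegas = (V - currents * R) / k       # speed falls as the current (and torque) rises
eta = (torques * omegas) / (V * currents)

best = np.argmax(eta)
print(f"stall torque       : {T_stall:.3f} N·m")
print(f"peak efficiency    : {eta[best]:.2f}")
print(f"torque at the peak : {torques[best]:.3f} N·m "
      f"({torques[best] / T_stall:.0%} of stall torque)")
```

For these particular numbers the efficiency peak falls at roughly one-seventh of the stall torque, comfortably within the "less than 1/2 the stall torque" rule of thumb stated above.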
Various regulatory authorities in many countries have introduced and implemented legislation to encourage the manufacture and use of higher-efficiency electric motors.
Goodness factor
Professor Eric Laithwaite proposed a metric to determine the 'goodness' of an electric motor:
G = (ω μ σ A_m A_e) / (l_m l_e)
Where:
G is the goodness factor (factors above 1 are likely to be efficient)
A_m, A_e are the cross-sectional areas of the magnetic and electric circuits
l_m, l_e are the lengths of the magnetic and electric circuits
μ is the permeability of the core
σ is the conductivity of the conductor
ω is the angular frequency the motor is driven at
From this, he showed that the most efficient motors are likely to have relatively large magnetic poles. However, the equation only directly relates to non-PM motors.
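Reading the goodness factor as angular frequency multiplied by the permeance of the magnetic circuit and the conductance of the electric circuit, a minimal Python sketch looks like this; the machine dimensions and material values are invented for illustration only.

```python
import math

def goodness_factor(omega, mu, sigma, area_m, area_e, len_m, len_e):
    """
    Goodness factor expressed as
        G = omega * (mu * area_m / len_m) * (sigma * area_e / len_e),
    i.e. angular frequency times magnetic permeance times electrical conductance.
    """
    return omega * (mu * area_m / len_m) * (sigma * area_e / len_e)

MU0 = 4e-7 * math.pi              # permeability of free space, H/m
G = goodness_factor(
    omega=2 * math.pi * 50,       # 50 Hz excitation
    mu=1000 * MU0,                # core with relative permeability of 1000
    sigma=6e7,                    # conductivity of copper, S/m
    area_m=1e-3, len_m=0.20,      # magnetic circuit: 10 cm^2 section, 20 cm long
    area_e=1e-5, len_e=0.60,      # electric circuit: 10 mm^2 conductor, 60 cm long
)
print(f"G = {G:.1f}")             # about 2 here; factors above 1 suggest an efficient design
```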
Performance parameters
Torque capability of motor types
All electromagnetic motors, including the types mentioned here, derive torque from the vector product of the interacting fields. Calculating the torque requires knowledge of the fields in the air gap. Once these have been established by mathematical analysis, using FEA or other tools, the torque may be calculated as the integral of all the force vectors multiplied by the radius at which each acts. The current flowing in the winding produces the fields, and for a motor using a magnetic material the field is not linearly proportional to the current. This makes the calculation difficult, but a computer can perform the many calculations needed.
Once this is done, a figure relating the current to the torque can be used as a useful parameter for motor selection. The maximum torque for a motor depends on the maximum current, although this is usually usable only until thermal considerations take precedence.
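As an illustration of integrating force times radius over the air gap, the sketch below numerically sums the torque contributions of a tangential force-density distribution around a rotor surface; the distribution and dimensions are invented for the example, whereas in practice the field values would come from an FEA solution.

```python
import numpy as np

# Hypothetical rotor: radius 5 cm, active (stack) length 10 cm.
r = 0.05
L = 0.10

# Invented tangential force density around the air gap, in N/m^2, standing in
# for values that would normally be extracted from an FEA field solution.
theta = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)
f_tangential = 2.0e4 * (1.0 + 0.3 * np.cos(2.0 * theta))

# Torque = sum over the rotor surface of (force density) * (surface element) * radius.
d_area = r * (theta[1] - theta[0]) * L
torque = np.sum(f_tangential * d_area * r)
print(f"torque = {torque:.1f} N·m")   # about 31.4 N·m for these numbers
```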
When optimally designed within a given core saturation constraint and for a given active current (i.e., torque current), voltage, pole-pair number, excitation frequency (i.e., synchronous speed), and air-gap flux density, all categories of electric motors or generators will exhibit virtually the same maximum continuous shaft torque (i.e., operating torque) within a given air-gap area with winding slots and back-iron depth, which determines the physical size of electromagnetic core. Some applications require bursts of torque beyond the maximum operating torque, such as short bursts of torque to accelerate an electric vehicle from standstill. Always limited by magnetic core saturation or safe operating temperature rise and voltage, the capacity for torque bursts beyond the maximum operating torque differs significantly between categories of electric motors or generators.
Capacity for bursts of torque should not be confused with field weakening capability. Field weakening allows an electric machine to operate beyond the designed frequency of excitation; it is done when the maximum speed cannot be reached by increasing the applied voltage. Field weakening applies only to motors with current-controlled fields and therefore cannot be achieved with PM motors.
Electric machines without a transformer circuit topology, such as that of WRSMs or PMSMs, cannot realize bursts of torque higher than the maximum designed torque without saturating the magnetic core and rendering any increase in current useless. Furthermore, the PM assembly of PMSMs can be irreparably damaged if bursts of torque exceeding the maximum operating torque rating are attempted.
Electric machines with a transformer circuit topology, such as induction machines, induction doubly-fed electric machines, and induction or synchronous wound-rotor doubly-fed (WRDF) machines, exhibit very high bursts of torque because the emf-induced active currents on either side of the transformer oppose each other and thus contribute nothing to the transformer-coupled magnetic core flux density, which would otherwise lead to core saturation.
Electric machines that rely on induction or asynchronous principles short-circuit one port of the transformer circuit and as a result, the reactive impedance of the transformer circuit becomes dominant as slip increases, which limits the magnitude of active (i.e., real) current. Still, bursts of torque that are two to three times higher than the maximum design torque are realizable.
The brushless wound-rotor synchronous doubly-fed (BWRSDF) machine is the only electric machine with a truly dual ported transformer circuit topology (i.e., both ports independently excited with no short-circuited port). The dual ported transformer circuit topology is known to be unstable and requires a multiphase slip-ring-brush assembly to propagate limited power to the rotor winding set. If a precision means were available to instantaneously control torque angle and slip for synchronous operation during motoring or generating while simultaneously providing brushless power to the rotor winding set, the active current of the BWRSDF machine would be independent of the reactive impedance of the transformer circuit and bursts of torque significantly higher than the maximum operating torque and far beyond the practical capability of any other type of electric machine would be realizable. Torque bursts greater than eight times operating torque have been calculated.
Continuous torque density
The continuous torque density of conventional electric machines is determined by the size of the air-gap area and the back-iron depth, which are determined by the power rating of the armature winding set, the speed of the machine, and the achievable air-gap flux density before core saturation. Despite the high coercivity of neodymium or samarium-cobalt PMs, continuous torque density is virtually the same amongst electric machines with optimally designed armature winding sets. Continuous torque density relates to method of cooling and permissible period of operation before destruction by overheating of windings or PM damage.
Continuous power density
The continuous power density is determined by the product of the continuous torque density and the constant torque speed range of the electric machine.
Standards
The following are major design, manufacturing, and testing standards covering electric motors:
American Petroleum Institute: API 541 Form-Wound Squirrel Cage Induction Motors - 375 kW (500 Horsepower) and Larger
American Petroleum Institute: API 546 Brushless Synchronous Machines - 500 kVA and Larger
American Petroleum Institute: API 547 General-purpose Form-Wound Squirrel Cage Induction Motors - 250 Hp and Larger
Institute of Electrical and Electronics Engineers: IEEE Std 112 Standard Test Procedure for Polyphase Induction Motors and Generators
Institute of Electrical and Electronics Engineers: IEEE Std 115 Guide for Test Procedures for Synchronous Machines
Institute of Electrical and Electronics Engineers: IEEE Std 841 Standard for Petroleum and Chemical Industry - Premium Efficiency Severe Duty Totally Enclosed Fan-Cooled (TEFC) Squirrel Cage Induction Motors - Up to and Including 370 kW (500 Hp)
International Electrotechnical Commission: IEC 60034 Rotating Electrical Machines
International Electrotechnical Commission: IEC 60072 Dimensions and output series for rotating electrical machines
National Electrical Manufacturers Association: MG-1 Motors and Generators
Underwriters Laboratories: UL 1004 - Standard for Electric Motors
Non-magnetic motors
An electrostatic motor is based on the attraction and repulsion of electric charge. Usually, electrostatic motors are the dual of conventional coil-based motors. They typically require a high-voltage power supply, although very small motors employ lower voltages. Conventional electric motors instead employ magnetic attraction and repulsion, and require high current at low voltages. In the 1750s, the first electrostatic motors were developed by Benjamin Franklin and Andrew Gordon. Today the electrostatic motor finds frequent use in micro-electro-mechanical systems (MEMS) where their drive voltages are below 100 volts, and where moving, charged plates are far easier to fabricate than coils and iron cores. Also, the molecular machinery which runs living cells is often based on linear and rotary electrostatic motors.
A piezoelectric motor or piezo motor is a type of electric motor based upon the change in shape of a piezoelectric material when an electric field is applied. Piezoelectric motors make use of the converse piezoelectric effect, whereby the material produces acoustic or ultrasonic vibrations in order to produce a linear or rotary motion. In one mechanism, the elongation in a single plane is used to make a series of stretches and position holds, similar to the way a caterpillar moves.
An electrically powered spacecraft propulsion system uses electric motor technology to propel spacecraft in outer space. Most systems are based on electrically accelerating propellant to high speed, while some systems are based on electrodynamic tether principles of propulsion using the magnetosphere.
See also
Electric generator
Goodness factor
Motor capacitor
Notes
References
Bibliography
Fink, Donald G.; Beaty, H. Wayne, Standard Handbook for Electrical Engineers, 14th ed., McGraw-Hill, 1999, ISBN 0-07-022005-0.
Houston, Edwin J.; Kennelly, Arthur, Recent Types of Dynamo-Electric Machinery, American Technical Book Company 1897, published by P.F. Collier and Sons New York, 1902
Rosenblatt, Jack; Friedman, M. Harold, Direct and Alternating Current Machinery, 2nd ed., McGraw-Hill, 1963
Further reading
External links
SparkMuseum: Early Electric Motors
The Invention of the Electric Motor 1800 to 1893, hosted by Karlsruhe Institute of Technology's Martin Doppelbauer
Electric Motors and Generators, a U. of NSW Physclips multimedia resource
IEA 4E - Efficient Electrical End-Use Equipment.
iPES Rotating Magnetic Field, animation
Category:Electrical engineering
Category:Electromagnetic components
Category:Energy conversion
Category:British inventions
Category:Hungarian inventions
Category:Magnetic propulsion devices
Marvel Comics
Marvel Comics is the common name and primary imprint of Marvel Worldwide Inc., formerly Marvel Publishing, Inc. and Marvel Comics Group, an American publisher of comic books and related media. In 2009, The Walt Disney Company acquired Marvel Entertainment, Marvel Worldwide's parent company.
Marvel started in 1939 as Timely Publications, and by the early 1950s had generally become known as Atlas Comics. Marvel's modern incarnation dates from 1961, the year that the company launched The Fantastic Four and other superhero titles created by Stan Lee, Jack Kirby, Steve Ditko and many others.
Marvel counts among its characters such well-known superheroes as Iron Man, Captain America, Hulk, Thor, Doctor Strange, Spider-Man, Ms. Marvel, Wolverine and Ant-Man, such teams as the Avengers, the Guardians of the Galaxy, the Fantastic Four, the Defenders, the X-Men and the Inhumans, and antagonists such as Doctor Doom, Red Skull, Green Goblin, Ultron, Doctor Octopus, Thanos, Magneto and Loki. Most of Marvel's fictional characters operate in a single reality known as the Marvel Universe, with locations that mirror real-life cities. Characters such as Spider-Man, the Fantastic Four, the Avengers, Daredevil and Doctor Strange are based in New York City,Sanderson, Peter (November 20, 2007). The Marvel Comics Guide to New York City. Gallery Books."8 Real & Fictional Addresses of Superheroes in New York City". Dorkly. April 27, 2011. whereas the X-Men have historically been based in Salem Center, New YorkClaremont, Chris (w), Byrne, John (a), Austin, Terry (i). "Elegy". The Uncanny X-Men #138 (October 1980). Marvel Comics.Johnston, Rich (September 28, 2011). "Teen Titans #1 Burnt Down The X-Men Mansion". Bleeding Cool.Lascala, Marisa (June 2011). "We’re Not in Westchester Anymore, Toto: X-Men Movie Confuses Westchester with England". Hudson Valley. and Hulk's stories often have been set in the American Southwest.Phegley, Kiel (February 26, 2013). "Waid's 'Indestructible Hulk' Goes On Tour With Walt Simonson". Comic Book Resources.
History
Timely Publications
left|thumb|Marvel Comics #1 (Oct. 1939), the first comic from Marvel precursor Timely Comics. Cover art by Frank R. Paul.
Martin Goodman founded the company later known as Marvel Comics under the name Timely Publications in 1939.Postal indicia in issue, per Marvel Comics #1 [1st printing] (October 1939) at the Grand Comics Database: "Vol.1, No.1, MARVEL COMICS, Oct, 1939 Published monthly by Timely Publications, … Art and editorial by Funnies Incorporated..."Per statement of ownership, dated October 2, 1939, published in Marvel Mystery Comics #4 (Feb. 1940), p. 40; reprinted in Marvel Masterworks: Golden Age Marvel Comics Volume 1 (Marvel Comics, 2004, ISBN 0-7851-1609-5), p. 239 Martin Goodman, a pulp magazine publisher who had started with a Western pulp in 1933, was expanding into the emerging—and by then already highly popular—new medium of comic books. Launching his new line from his existing company's offices at 330 West 42nd Street, New York City, he officially held the titles of editor, managing editor, and business manager, with Abraham Goodman officially listed as publisher.
Timely's first publication, Marvel Comics #1 (cover dated Oct. 1939), included the first appearance of Carl Burgos' android superhero the Human Torch, and the first appearances of Bill Everett's anti-hero Namor the Sub-Mariner,Writer-artist Bill Everett's Sub-Mariner had actually been created for an undistributed movie-theater giveaway comic, Motion Picture Funnies Weekly earlier that year, with the previously unseen, eight-page original story expanded by four pages for Marvel Comics #1. among other features. The issue was a great success, with it and a second printing the following month selling, combined, nearly 900,000 copies.Per researcher Keif Fromm, Alter Ego #49, p. 4 (caption), Marvel Comics #1, cover-dated October 1939, quickly sold out 80,000 copies, prompting Goodman to produce a second printing, cover-dated November 1939. The latter appears identical except for a black bar over the October date in the inside front-cover indicia, and the November date added at the end. That sold approximately 800,000 copies—a large figure in the market of that time. Also per Fromm, the first issue of Captain America Comics sold nearly one million copies. While its contents came from an outside packager, Funnies, Inc., Timely had its own staff in place by the following year. The company's first true editor, writer-artist Joe Simon, teamed with artist and emerging industry notable Jack Kirby to create one of the first patriotically themed superheroes,. Preceding Captain America were MLJ Comics' the Shield and Fawcett Comics' Minute-Man. Captain America, in Captain America Comics #1 (March 1941). It, too, proved a hit, with sales of nearly one million. Goodman formed Timely Comics, Inc., beginning with comics cover-dated April 1941 or Spring 1941."Marvel : Timely Publications (Indicia Publisher)" at the Grand Comics Database. "This is the original business name under which Martin Goodman began publishing comics in 1939. It was used on all issues up to and including those cover-dated March 1941 or Winter 1940–1941, spanning the period from Marvel Comics #1 to Captain America Comics #1. It was replaced by Timely Comics, Inc. starting with all issues cover-dated April 1941 or Spring 1941."
While no other Timely character would achieve the success of these three characters, some notable heroes—many of which continue to appear in modern-day retcon appearances and flashbacks—include the Whizzer, Miss America, the Destroyer, the original Vision, and the Angel. Timely also published one of humor cartoonist Basil Wolverton's best-known features, "Powerhouse Pepper", as well as a line of children's funny-animal comics featuring popular characters like Super Rabbit and the duo Ziggy Pig and Silly Seal.
Goodman hired his wife's cousin, Stanley Lieber, as a general office assistant in 1939. When editor Simon left the company in late 1941, Goodman made Lieber—by then writing pseudonymously as "Stan Lee"—interim editor of the comics line, a position Lee kept for decades except for three years during his military service in World War II. Lee wrote extensively for Timely, contributing to a number of different titles.
Goodman's business strategy involved having his various magazines and comic books published by a number of corporations all operating out of the same office and with the same staff. One of these shell companies through which Timely Comics was published was named Marvel Comics by at least Marvel Mystery Comics #55 (May 1944). As well, some comics' covers, such as All Surprise Comics #12 (Winter 1946–47), were labeled "A Marvel Magazine" many years before Goodman would formally adopt the name in 1961.Cover, All Surprise Comics #12 at the Grand Comics Database
Atlas Comics
The post-war American comic market saw superheroes falling out of fashion. Goodman's comic book line dropped them for the most part and expanded into a wider variety of genres than even Timely had published, featuring horror, Westerns, humor, funny animal, men's adventure-drama, giant monster, crime, and war comics, and later adding jungle books, romance titles, espionage, and even medieval adventure, Bible stories and sports.
Goodman began using the globe logo of the Atlas News Company, the newsstand-distribution company he owned, on comics cover-dated November 1951 even though another company, Kable News, continued to distribute his comics through the August 1952 issues.Marvel : Atlas [wireframe globe] (Brand) at the Grand Comics Database This globe branding united a line put out by the same publisher, staff and freelancers through 59 shell companies, from Animirth Comics to Zenith Publications.Marvel Indicia Publishers at the Grand Comics Database
Atlas, rather than innovate, took a proven route of following popular trends in television and movies—Westerns and war dramas prevailing for a time, drive-in movie monsters another time—and even other comic books, particularly the EC horror line.Per Les Daniels in Marvel: Five Fabulous Decades of the World's Greatest Comics, pp. 67–68: "The success of EC had a definite influence on Marvel. As Stan Lee recalls, 'Martin Goodman would say, "Stan, let's do a different kind of book," and it was usually based on how the competition was doing. When we found that EC's horror books were doing well, for instance, we published a lot of horror books'". Atlas also published a plethora of children's and teen humor titles, including Dan DeCarlo's Homer the Happy Ghost (à la Casper the Friendly Ghost) and Homer Hooper (à la Archie Andrews). Atlas unsuccessfully attempted to revive superheroes from late 1953 to mid-1954, with the Human Torch (art by Syd Shores and Dick Ayers, variously), the Sub-Mariner (drawn and most stories written by Bill Everett), and Captain America (writer Stan Lee, artist John Romita Sr.). Atlas did not achieve any breakout hits and, according to Stan Lee, Atlas survived chiefly because it produced work quickly, cheaply, and at a passable quality.
thumb|left|The Fantastic Four #1 (Nov. 1961). Cover art by Jack Kirby (penciler) and unconfirmed inker.
Comics Code Authority
During this time, the Comics Code Authority made its debut in September 1954, prompted by the campaign of German-American psychiatrist Fredric Wertham, whose book Seduction of the Innocent argued that comics were harming American youth. Wertham believed violent comics were causing children to be reckless and were turning them into delinquents. In September 1954, comic book publishers got together to set up their own self-censorship organization—the Comics Magazine Association of America—in order to appease audiences. The next month, the code was published, requiring comic book companies to submit their comics for approval in order to receive its seal. The stamp on the cover showed audiences that the comics were considered wholesome, entertaining, and educational.
Marvel Comics
The first modern comic books under the Marvel Comics brand were the science-fiction anthology Journey into Mystery #69 and the teen-humor title Patsy Walker #95 (both cover dated June 1961), which each displayed an "MC" box on its cover.Marvel : MC (Brand) at the Grand Comics Database. Then, in the wake of DC Comics' success in reviving superheroes in the late 1950s and early 1960s, particularly with the Flash, Green Lantern, and other members of the team the Justice League of America, Marvel followed suit.
In 1961, writer-editor Stan Lee revolutionized superhero comics by introducing superheroes designed to appeal to readers older than the predominantly child audiences of the medium. Modern Marvel's first superhero team, the titular stars of The Fantastic Four #1 (Nov. 1961), broke convention with other comic book archetypes of the time by squabbling, holding grudges both deep and petty, and eschewing anonymity or secret identities in favor of celebrity status. Subsequently, Marvel comics developed a reputation for focusing on characterization and adult issues to a greater extent than most superhero comics before them, a quality which the new generation of older readers appreciated. This applied to The Amazing Spider-Man title in particular, which turned out to be Marvel's most successful book. Its young hero suffered from self-doubt and mundane problems like any other teenager, something readers could identify with.
Lee and freelance artist and eventual co-plotter Jack Kirby's Fantastic Four originated in a Cold War culture that led their creators to revise the superhero conventions of previous eras to better reflect the psychological spirit of their age. Eschewing such comic-book tropes as secret identities and even costumes at first, having a monster as one of the heroes, and having its characters bicker and complain in what was later called a "superheroes in the real world" approach, the series represented a change that proved to be a great success.Comics historian Greg Theakston has suggested that the decision to include monsters and initially to distance the new breed of superheroes from costumes was a conscious one, and born of necessity. Since DC distributed Marvel's output at the time, Theakston theorizes that, "Goodman and Lee decided to keep their superhero line looking as much like their horror line as they possibly could," downplaying "the fact that [Marvel] was now creating heroes" with the effect that they ventured "into deeper waters, where DC had never considered going". See Ro, pp. 87–88
Marvel often presented flawed superheroes, freaks, and misfits—unlike the perfect, handsome, athletic heroes found in previous traditional comic books. Some Marvel heroes looked like villains and monsters such as the Hulk and the Thing. This naturalistic approach even extended into topical politics.
Comics historian Mike Benton also noted:
All of these elements struck a chord with the older readers, such as college-aged adults, and they successfully gained a following in a way not seen before. In 1965, Spider-Man and the Hulk were both featured in Esquire magazine's list of 28 college campus heroes, alongside John F. Kennedy and Bob Dylan. In 2009 writer Geoff Boucher reflected that, "Superman and DC Comics instantly seemed like boring old Pat Boone; Marvel felt like The Beatles and the British Invasion. It was Kirby's artwork with its tension and psychedelia that made it perfect for the times—or was it Lee's bravado and melodrama, which was somehow insecure and brash at the same time?"
In addition to Spider-Man and the Fantastic Four, Marvel began publishing further superhero titles featuring such heroes and antiheroes as the Hulk, Thor, Ant-Man, Iron Man, the X-Men, Daredevil, the Inhumans, Black Panther, Doctor Strange, Captain Marvel and the Silver Surfer, and such memorable antagonists as Doctor Doom, Magneto, Galactus, Loki, the Green Goblin, and Doctor Octopus, all existing in a shared reality known as the Marvel Universe, with locations that mirror real-life cities such as New York, Los Angeles and Chicago.
Marvel even lampooned itself and other comics companies in a parody comic, Not Brand Echh (a play on Marvel's dubbing of other companies as "Brand Echh", à la the then-common phrase "Brand X").
thumb|The Avengers #4 (March 1964), with (from left to right), the Wasp, Giant-Man, Captain America, Iron Man, Thor and (inset) the Sub-Mariner. Cover art by Jack Kirby and George Roussos.
Cadence Industries ownership
In 1968, while selling 50 million comic books a year, company founder Goodman revised the constraining distribution arrangement with Independent News he had reached under duress during the Atlas years, allowing him now to release as many titles as demand warranted. Late that year he sold Marvel Comics and his other publishing businesses to the Perfect Film and Chemical Corporation, which continued to group them as the subsidiary Magazine Management Company, with Goodman remaining as publisher.Daniels, Les (September 1991). Marvel: Five Fabulous Decades of the World's Greatest Comics, Harry N Abrams. p. 139. In 1969, Goodman finally ended his distribution deal with Independent by signing with Curtis Circulation Company.
In 1971, the United States Department of Health, Education, and Welfare approached Marvel Comics editor-in-chief Stan Lee to do a comic book story about drug abuse. Lee agreed and wrote a three-part Spider-Man story portraying drug use as dangerous and unglamorous. However, the industry's self-censorship board, the Comics Code Authority, refused to approve the story because of the presence of narcotics, deeming the context of the story irrelevant. Lee, with Goodman's approval, published the story regardless in The Amazing Spider-Man #96–98 (May–July 1971), without the Comics Code seal. The market reacted well to the storyline, and the CCA subsequently revised the Code the same year.Nyberg, Amy Kiste. Seal of Approval: History of the Comics Code. University Press of Mississippi, Jackson, Miss., 1998
left|thumb|Howard the Duck #8 (Jan. 1977). Cover art by Gene Colan and Steve Leialoha
Goodman retired as publisher in 1972 and installed his son, Chip, as publisher, Shortly thereafter, Lee succeeded him as publisher and also became Marvel's president for a brief time.Lee, Mair, p. 5. During his time as president, he appointed as editor-in-chief Roy Thomas, who added "Stan Lee Presents" to the opening page of each comic book.
A series of new editors-in-chief oversaw the company during another slow time for the industry. Once again, Marvel attempted to diversify, and with the updating of the Comics Code achieved moderate to strong success with titles themed to horror (The Tomb of Dracula), martial arts, (Shang-Chi: Master of Kung Fu), sword-and-sorcery (Conan the Barbarian, Red Sonja), satire (Howard the Duck) and science fiction (2001: A Space Odyssey, "Killraven" in Amazing Adventures, Battlestar Galactica, Star Trek, and, late in the decade, the long-running Star Wars series). Some of these were published in larger-format black and white magazines, under its Curtis Magazines imprint. Marvel was able to capitalize on its successful superhero comics of the previous decade by acquiring a new newsstand distributor and greatly expanding its comics line. Marvel pulled ahead of rival DC Comics in 1972, during a time when the price and format of the standard newsstand comic were in flux. Goodman increased the price and size of Marvel's November 1971 cover-dated comics from 15 cents for 36 pages total to 25 cents for 52 pages. DC followed suit, but Marvel the following month dropped its comics to 20 cents for 36 pages, offering a lower-priced product with a higher distributor discount.Daniels, Marvel, pp.154–155
Goodman, now disconnected from Marvel, set up a new company called Seaboard Periodicals in 1974, reviving Marvel's old Atlas name for a new Atlas Comics line, but this lasted only a year and a half.
In the mid-1970s a decline of the newsstand distribution network affected Marvel. Cult hits such as Howard the Duck fell victim to the distribution problems, with some titles reporting low sales when in fact the first specialty comic book stores resold them at a later date. But by the end of the decade, Marvel's fortunes were reviving, thanks to the rise of direct market distribution—selling through those same comics-specialty stores instead of newsstands.
Marvel held its own comic book convention, Marvelcon '75, in spring 1975, and promised a Marvelcon '76. At the 1975 event, Stan Lee used a Fantastic Four panel discussion to announce that Jack Kirby, the artist co-creator of most of Marvel's signature characters, was returning to Marvel after having left in 1970 to work for rival DC Comics.Bullpen Bulletins: "The King is Back! 'Nuff Said!", in Marvel Comics cover dated October 1975, including Fantastic Four #163 In October 1976, Marvel, which already licensed reprints in different countries, including the UK, created a superhero specifically for the British market. Captain Britain debuted exclusively in the UK, and later appeared in American comics.Specific series- and issue-dates in article are collectively per GCD and other databases given under References
thumb|Marvel Super Heroes Secret Wars #1 (May 1984). Cover art by Mike Zeck.Both pencils and inks per UHBMCC; GCD remains uncertain on inker.
In 1978, Jim Shooter became Marvel's editor-in-chief. Although a controversial personality, Shooter cured many of the procedural ills at Marvel, including repeatedly missed deadlines. During Shooter's nine-year tenure as editor-in-chief, Chris Claremont and John Byrne's run on the Uncanny X-Men and Frank Miller's run on Daredevil became critical and commercial successes. Shooter brought Marvel into the rapidly evolving direct market, institutionalized creator royalties, starting with the Epic Comics imprint for creator-owned material in 1982; introduced company-wide crossover story arcs with Contest of Champions and Secret Wars; and in 1986 launched the ultimately unsuccessful New Universe line to commemorate the 25th anniversary of the Marvel Comics imprint. Star Comics, a children-oriented line differing from the regular Marvel titles, was briefly successful during this period.
Despite Marvel's successes in the early 1980s, it lost ground to rival DC in the latter half of the decade as many former Marvel stars defected to the competitor. DC scored critical and sales victories with titles and limited series such as Watchmen, Batman: The Dark Knight Returns, Crisis on Infinite Earths, Byrne's revamp of Superman, and Alan Moore's Swamp Thing.
Marvel Entertainment Group ownership
In 1986, Marvel's parent, Marvel Entertainment Group (MEG), was sold to New World Entertainment, which within three years sold it to MacAndrews and Forbes, owned by Revlon executive Ronald Perelman in 1989. In 1991 Perelman took MEG public. Following the rapid rise of this stock, Perelman issued a series of junk bonds that he used to acquire other entertainment companies, secured by MEG stock.
left|thumb|Spider-Man #1, later renamed "Peter Parker: Spider-Man" (August 1990; second printing). Cover art by Todd McFarlane.
Marvel earned a great deal of money and recognition during the comic book boom of the early 1990s, launching the successful 2099 line of comics set in the future (Spider-Man 2099, etc.) and the creatively daring though commercially unsuccessful Razorline imprint of superhero comics created by novelist and filmmaker Clive Barker. In 1990, Marvel began selling Marvel Universe Cards with trading card maker SkyBox International. These were collectible trading cards that featured the characters and events of the Marvel Universe. The 1990s saw the rise of variant covers, cover enhancements, swimsuit issues, and company-wide crossovers that affected the overall continuity of the fictional Marvel Universe
Marvel suffered a blow in early 1992, when seven of its most prized artists — Todd McFarlane (known for his work on Spider-Man), Jim Lee (X-Men), Rob Liefeld (X-Force), Marc Silvestri (Wolverine), Erik Larsen (The Amazing Spider-Man), Jim Valentino (Guardians of the Galaxy), and Whilce Portacio — left to form Image Comics in a deal brokered by Malibu Comics' owner Scott Mitchell Rosenberg. Three years later Rosenberg sold Malibu to Marvel on November 3, 1994,Ehrenreich, Ben. "PHENOMENON; Comic Genius?" New York Times magazine (November 11, 2007).Reynolds, Eric. "The Rumors are True: Marvel Buys Malibu," The Comics Journal #173 (December 1994), pp. 29–33."News!" Indy magazine #8 (1994), p. 7."Comics Publishers Suffer Tough Summer: Body Count Rises in Market Shakedown," The Comics Journal #172 (Nov. 1994), pp. 13-18. who acquired the then-leading standard for computer coloring of comic books (developed by Rosenberg) in the process, but also integrating the Genesis Universe (Earth-1136) and the Ultraverse (Earth-93060) into Marvel's multiverse.
thumb|right|Marvel's logo, circa 1990s
In late 1994, Marvel acquired the comic book distributor Heroes World Distribution to use as its own exclusive distributor.Duin, Steve and Richardson, Mike (ed.s) "Capital City" in Comics Between the Panels (Dark Horse Publishing, 1998) ISBN 1-56971-344-8, p. 69 As the industry's other major publishers made exclusive distribution deals with other companies, the ripple effect resulted in the survival of only one other major distributor in North America, Diamond Comic Distributors Inc. Then, by the middle of the decade, the industry had slumped, and in December 1996 MEG filed for Chapter 11 bankruptcy protection. In early 1997, when Marvel's Heroes World endeavor failed, Diamond also forged an exclusive deal with Marvel"Hello Again: Marvel Goes with Diamond", The Comics Journal #193 (February 1997), pp. 9–10.—giving the company its own section of its comics catalog Previews.Duin, Steve and Richardson, Mike (ed.s) "Diamond Comic Distributors" in Comics Between the Panels (Dark Horse Publishing, 1998) ISBN 1-56971-344-8, p. 125-126
In 1996, Marvel had some of its titles participate in "Heroes Reborn", a crossover that allowed Marvel to relaunch some of its flagship characters such as the Avengers and the Fantastic Four, and outsource them to the studios of two of the former Marvel artists turned Image Comics founders, Jim Lee and Rob Liefeld. The relaunched titles, which saw the characters transported to a parallel universe with a history distinct from the mainstream Marvel Universe, were a solid success amidst a generally struggling industry, but Marvel discontinued the experiment after a one-year run and returned the characters to the Marvel Universe proper.
Marvel Enterprises
In 1997, Toy Biz and MEG merged to end the bankruptcy, forming a new corporation, Marvel Enterprises. With his business partner Avi Arad, publisher Bill Jemas, and editor-in-chief Bob Harras, Toy Biz co-owner Isaac Perlmutter helped stabilize the comics line.
In 1998, the company launched the imprint Marvel Knights, taking place just outside Marvel continuity with better production quality. The imprint was helmed by soon-to-become editor-in-chief Joe Quesada; it featured tough, gritty stories showcasing such characters as Daredevil,McMillan, Graeme. Page 10. "Leaving an Imprint: 10 Defunct MARVEL Publishing Lines". Newsarama (10 January 2013). Inhumans and Black Panther.
With the new millennium, Marvel Comics emerged from bankruptcy and again began diversifying its offerings. In 2001, Marvel withdrew from the Comics Code Authority and established its own Marvel Rating System for comics. The first title from this era to not have the code was X-Force #119 (October 2001). Marvel also created new imprints, such as MAX (an explicit-content line) and Marvel Adventures (developed for child audiences). In addition, the company created an alternate universe imprint, Ultimate Marvel, that allowed the company to reboot its major titles by revising and updating its characters to introduce to a new generation.
Some of its characters have been turned into successful film franchises, such as the Men in Black movie series, starting in 1997, Blade movie series, starting in 1998, X-Men movie series, starting in 2000, and the highest grossing series Spider-Man, beginning in 2002.
In a cross-promotion, the November 1, 2006, episode of the CBS soap opera The Guiding Light, titled "She's a Marvel", featured the character Harley Davidson Cooper (played by Beth Ehlers) as a superheroine named the Guiding Light. The character's story continued in an eight-page backup feature, "A New Light", that appeared in several Marvel titles published November 1 and 8. Also that year, Marvel created a wiki on its Web site.
In late 2007 the company launched Marvel Digital Comics Unlimited, a digital archive of over 2,500 back issues available for viewing, for a monthly or annual subscription fee.Colton, David. "Marvel Comics Shows Its Marvelous Colors in Online Archive", USA Today, November 12, 2007
In 2009 Marvel Comics closed its Open Submissions Policy, in which the company had accepted unsolicited samples from aspiring comic book artists, saying the time-consuming review process had produced no suitably professional work. The same year, the company commemorated its 70th anniversary, dating to its inception as Timely Comics, by issuing the one-shot Marvel Mystery Comics 70th Anniversary Special #1 and a variety of other special issues.Frisk, Andy. Marvel Mystery Comics 70th Anniversary Special #1 (review), ComicBookBin.com, June 6, 2009.
Disney conglomerate unit (2009–present)
thumb|Writers of Marvel titles in the 2010s include (seated left to right) Ed Brubaker, Christos Gage, Matt Fraction and Brian Michael Bendis.
On August 31, 2009, The Walt Disney Company announced a deal to acquire Marvel Comics' parent corporation, Marvel Entertainment, for $4 billion or $4.2 billion, with Marvel shareholders to receive $30 and 0.745 Disney shares for each share of Marvel they own. As of 2008, Marvel and its major, longtime competitor DC Comics shared over 80% of the American comic-book market. As of September 2010, Marvel switched its bookstores distribution company from Diamond Book Distributors to Hachette Distribution Services.
Marvel relaunched the CrossGen imprint, owned by Disney Publishing Worldwide, in March 2011. Marvel and Disney Publishing began jointly publishing Disney/Pixar Presents magazine that May.
Marvel discontinued its Marvel Adventures imprint in March 2012, and replaced them with a line of two titles connected to the Marvel Universe TV block. Also in March, Marvel announced its Marvel ReEvolution initiative that included Infinite Comics, a line of digital comics, Marvel AR, an application software that provides an augmented reality experience to readers and Marvel NOW!, a relaunch of most of the company's major titles with different creative teams. Marvel NOW! also saw the debut of new flagship titles including Uncanny Avengers and All-New X-Men.
In April 2013, Marvel and other Disney conglomerate components began announcing joint projects. With ABC, a Once Upon a Time graphic novel was announced for publication in September.Sands, Rich. (April 12, 2013) First Look: The Once Upon a Time Graphic Novel. TV Guide.com. Accessed on November 4, 2013. With Disney, Marvel announced in October 2013 that in January 2014 it would release its first title under their joint "Disney Kingdoms" imprint "Seekers of the Weird", a five-issue miniseries. On January 3, 2014, fellow Disney subsidiary Lucasfilm Limited, LLC announced that as of 2015, Star Wars comics would once again be published by Marvel.
Following the events of the company-wide crossover Secret Wars in 2015, a relaunched Marvel universe began in September 2015, called the All-New, All-Different Marvel.
Officers
Michael Z. Hobson, executive vice president, Publishing Group; vice president, publishing (1986)
Stan Lee, executive vice president & publisher (1986)
Joseph Calamari, executive vice president (1986)
Jim Shooter, vice president and Editor-in-Chief (1986)
Publishers
Abraham Goodman 1939 – ?
Martin Goodman ? – 1972
Charles "Chip" Goodman 1972
Stan Lee 1972 – October 1996
Shirrel Rhoades October 1996 – October 1998
Winston Fowlkes February 1998 – November 1999
Bill Jemas February 2000 – 2003
Dan Buckley 2003–present
Editors-in-chief
Marvel's chief editor originally held the title of "editor". This head editor's title later became "editor-in-chief". Joe Simon was the company's first true chief editor; publisher Martin Goodman had served as titular editor only and outsourced editorial operations.
In 1994 Marvel briefly abolished the position of editor-in-chief, replacing Tom DeFalco with five group editors-in-chief. As Carl Potts described the 1990s editorial arrangement:
Marvel reinstated the overall editor-in-chief position in 1995 with Bob Harras.
Editor
Martin Goodman (1939–1940; titular only)
Joe Simon (1939–1941)
Stan Lee (1941–1942)
Vincent Fago (acting editor during Lee's military service) (1942–1945)
Stan Lee (1945–1972)
Roy Thomas (1972–1974)
Len Wein (1974–1975)
Marv Wolfman (black-and-white magazines 1974–1975, entire line 1975–1976)
Gerry Conway (1976)
Archie Goodwin (1976–1978)
Editor-in-chief
Jim Shooter (1978–1987)
Tom DeFalco (1987–1994)
No overall; separate group editors-in-chief (1994–1995)
Mark Gruenwald, Universe (Avengers & Cosmic)
Bob Harras, Mutant
Bob Budiansky, Spider-Man
Bobbie Chase, Marvel Edge
Carl Potts, Epic Comics & general entertainment
Bob Harras (1995–2000)
Joe Quesada (2000–2011)
Axel Alonso (2011–present)
Executive Editor
Originally called associate editor when Marvel's chief editor just carried the title of editor, the next-highest editorial position became executive editor once the chief editor's title became editor-in-chief. The title of associate editor was later revived under the editor-in-chief as an editorial position in charge of a few titles under the direction of an editor and without an assistant editor.
Associate Editor
Chris Claremont ?–1976
Jim Shooter January 5, 1976 – January 2, 1978
Executive Editor
Tom DeFalco 1987
Mark Gruenwald 1987–1994, senior editor 1995–1996
Carl Potts Epic 1989–1994, 1995–
Bob Budiansky early '90s – 1994
Bobbie Chase 1995–2001
Tom Brevoort 2007–present
Axel Alonso 2010 – January 2011Phegley, Kiel (January 4, 2011). "Alonso Named Marvel Editor-In-Chief". Comic Book Resources.
Ownership
Martin Goodman (1939–1961–1968)
Parent corporation
Magazine Management Co. (1968–1973)
Cadence Industries (1973–1986)
Marvel Entertainment Group (1986–1998)
Marvel Enterprises
Marvel Enterprises, Inc. (1998–2005)
Marvel Entertainment, Inc (2005–2009)
Marvel Entertainment, LLC (2009–present, a wholly owned subsidiary of The Walt Disney Company)
Offices
Located in New York City, Marvel has had successive headquarters:
in the McGraw-Hill Building, where it originated as Timely Comics in 1939
in suite 1401 of the Empire State BuildingSanderson, Peter. The Marvel Comics Guide to New York City, (Pocket Books, 2007) p. 59. ISBN 978-1-4165-3141-8
at 635 Madison Avenue (the actual location, though the comic books' indicia listed the parent publishing-company's address of 625 Madison Ave.)
575 Madison Avenue;
387 Park Avenue South
10 East 40th Street
417 Fifth Avenue
a space at 135 W. 50th Street"Marvel to move to new, 60,000-square-foot offices in October". Comic Book Resources. September 21, 2010.Turner, Zake. "Where We Work", The New York Observer, December 21, 2010
Market share
In August 2016, Marvel held a 30.78% share of the comics market, compared to its competitor DC Comics' 39.27% share. By comparison, the companies respectively held 33.50% and 30.33% shares in 2013, and 40.81% and 29.94% shares in 2008.
Marvel characters in other media
Marvel characters and stories have been adapted to many other media. Some of these adaptations were produced by Marvel Comics and its sister company, Marvel Studios, while others were produced by companies licensing Marvel material.
Games
In June 1993, Marvel issued its collectable caps for milk caps game under the Hero Caps brand. In 2014, the Marvel Disk Wars: The Avengers Japanese TV series was launched together with a collectible game called Bachicombat, a game similar to the milk caps game, by Bandai.
Collectible card
The RPG industry brought about the development of the collectible card game (CCG) in the early 1990s, and Marvel characters were soon featured in CCGs of their own, starting in 1995 with Fleer's OverPower (1995–1999). Later collectible card games were:
Marvel Superstars (2010–?) Upper Deck Company
ReCharge Collectible Card Game (2001–? ) Marvel
Vs. System (2004–2009, 2014–) Upper Deck Company
X-Men Trading Card Game (2000–?) Wizards of the Coast
Role-playing
TSR published the pen-and-paper role-playing game Marvel Super Heroes in 1984. TSR then released in 1998 the Marvel Super Heroes Adventure Game which used a different system, the card-based SAGA system, than their first game. In 2003 Marvel Publishing published its own role-playing game, the Marvel Universe Roleplaying Game, that used a diceless stone pool system. In August 2011 Margaret Weis Productions announced it was developing a tabletop role-playing game based on the Marvel universe, set for release in February 2012 using its house Cortex Plus RPG system.
Video games
Video games based on Marvel characters go back to 1984 and the Atari game, Spider-Man. Since then several dozen video games have been released, all produced by outside licensees. In 2014, Disney Infinity 2.0: Marvel Super Heroes was released, bringing Marvel characters to the existing Disney sandbox video game.
Films
As of the start of September 2015, films based on Marvel's properties represent the highest-grossing U.S. franchise, having grossed over $7.7 billion "Franchise Index". Box Office Mojo. Retrieved May 29, 2013. as part of a worldwide gross of over $18 billion.
Live shows
The Marvel Experience (2014–)
Marvel Universe Live! (2014–) live arena show
Spider-Man Live! (2002–2003)
Spider-Man: Turn Off the Dark (2011–2014) a Broadway musical
Prose novels
Marvel first licensed two prose novels to Bantam Books, who printed The Avengers Battle the Earth Wrecker by Otto Binder (1967) and Captain America: The Great Gold Steal by Ted White (1968). Various publishers took up the licenses from 1978 to 2002. Also, with the various licensed films being released beginning in 1997, various publishers put out movie novelizations. In 2003, following publication of the prose young adult novel Mary Jane, starring Mary Jane Watson from the Spider-Man mythos, Marvel announced the formation of the publishing imprint Marvel Press. However, Marvel moved back to licensing with Pocket Books from 2005 to 2008. With few books issued under the imprint, Marvel and Disney Books Group relaunched Marvel Press in 2011 with the Marvel Origin Storybooks line.
Television programs
Many television series, both live-action and animated, have based their productions on Marvel Comics characters. These include multiple series for popular characters such as Spider-Man, Iron Man and the X-Men. Additionally, a handful of television movies, usually also pilots, based on Marvel Comics characters have been made.
Theme parks
Marvel has licensed its characters for theme-parks and attractions, including at the Universal Orlando Resort's Islands of Adventure, in Orlando, Florida, which includes rides based on their iconic characters and costumed performers.Universal's Islands of Adventures: Marvel Super Hero Island official site
Walt Disney Parks and Resorts plans on creating original Marvel attractions at their theme parks, with Hong Kong Disneyland becoming the first Disney theme park to feature a Marvel attraction. Due to the licensing agreement with Universal Studios, signed prior to Disney's purchase of Marvel, Walt Disney World and Tokyo Disney are barred from having Marvel characters in their parks. However, this only includes characters Universal is currently using, other characters in their "families" (X-Men, Avengers, Fantastic Four, etc.), and the villains associated with said characters. This clause has allowed Walt Disney World to have meet and greets, merchandise, attractions and more with other Marvel characters not associated with the characters at Islands of Adventures, such as Star-Lord and Gamora from Guardians of the Galaxy as well as Baymax and Hiro from Big Hero 6.
Imprints
Disney Kingdoms
Marvel Comics
Marvel Press, joint imprint with Disney Books Group
Icon Comics (creator owned)
Infinite Comics
Defunct
Amalgam Comics
CrossGen
Curtis Magazines/Marvel Magazine Group
Marvel Monsters Group
Epic Comics (creator owned) (1982–2004)
Malibu Comics (1994–1997)
Marvel 2099 (1992–1998)
Marvel Absurd
Marvel Age/Adventures
Marvel Books
Marvel Edge
Marvel Knights
Marvel Illustrated
Marvel Mangaverse
Marvel Music
Marvel Next
Marvel Noir
Marvel UK
Marvel Frontier
MAX
MC2
New Universe
Paramount Comics (co-owned with Viacom's Paramount Pictures)
Razorline
Soleil
Star Comics
Tsunami
Ultimate Comics
See also
List of magazines released by Marvel Comics in the 1970s
Panini Comics
Soleil Productions
Notes
References
Further reading
External links
.
Category:Comic book publishing companies of the United States
Category:Comics publications
Category:Marvel Entertainment
Category:Media companies based in New York City
Category:Publishing companies based in New York City
Category:American companies established in 1939
Category:Publishing companies established in 1939
Category:1939 establishments in New York
Category:Companies that filed for Chapter 11 bankruptcy in 1996
Category:The Walt Disney Company subsidiaries
Category:Articles which contain graphical timelines
Federalism
thumb|upright=1.5|The pathway of regional integration or separation
Federalism is the mixed or compound mode of government, combining a general government (the central or 'federal' government) with regional governments (provincial, state, Land, cantonal, territorial or other sub-unit governments) in a single political system. Its distinctive feature, exemplified in the founding example of modern federalism of the United States of America under the Constitution of 1787, is a relationship of parity between the two levels of government established.Kenneth Wheare identified the two levels of government in the US as 'co-equally supreme'. In this, he echoed the perspective of the founding fathers of the Constitution, James Madison in Federalist 39 having seen the several states as forming 'distinct and independent portions of the supremacy' in relation to the general government. Wheare, Kenneth (1946) Federal Government, Oxford University Press, London, pp. 10-15. Madison, James, Hamilton, Alexander and Jay, John (1987) The Federalist Papers, Penguin, Harmondsworth, p. 258. It can thus be defined as a form of government in which there is a division of powers between two levels of government of equal status.Law, John (2013) 'How Can We Define Federalism?', in Perspectives on Federalism, Vol. 5, No. 3, pp. E105-6. http://www.on-federalism.eu/attachments/169_download.pdf
Federalism is distinguished from confederalism, in which the general level of government is subordinate to the regional level, and from devolution within a unitary state, in which the regional level of government is subordinate to the general level.Wheare, Kenneth (1946), pp. 31-2. It represents the central form in the pathway of regional integration or separation,See diagram above. bounded on the less integrated side by confederalism and on the more integrated side by devolution within a unitary state.Diamond, Martin (1961) "The Federalist's View of Federalism", in Benson, George (ed.) Essays in Federalism, Institute for Studies in Federalism, Claremont, p. 22. Downs, William (2011) 'Comparative Federalism, Confederalism, Unitary Systems', in Ishiyama, John and Breuning, Marijke (eds) Twenty-first Century Political Science: A Reference Handbook, Sage, Los Angeles, Vol. I, pp. 168-9. Hueglin, Thomas and Fenna, Alan (2006) Comparative Federalism: A Systematic Inquiry, Broadview, Peterborough, p. 31.
Leading examples of the federation or federal state include Canada, the United States, Mexico, Brazil, Germany, Switzerland, Australia and India. Some also today characterize the European Union as the pioneering example of federalism in a multi-state setting, in a concept termed the federal union of states.See Law, John (2013), p. 104. http://www.on-federalism.eu/attachments/169_download.pdf
This author identifies two distinct federal forms, where before only one was known, based upon whether sovereignty (conceived in its core meaning of ultimate authority) resides in the whole (in one people) or in the parts (in many peoples). This is determined by the absence or presence of a right of secession for the parts. The structures are termed, respectively, the federal state (or federation) and the federal union of states (or federal union).
Overview
The terms 'federalism' and 'confederalism' both have a root in the Latin word foedus, meaning "treaty, pact or covenant." Their common meaning until the late eighteenth century was a simple league or inter-governmental relationship among sovereign states based upon a treaty. They were therefore initially synonyms. It was in this sense that James Madison in Federalist 39 had described the Constitution of the new United States as 'neither a national nor a federal Constitution, but a composition of both' (i.e. neither a single large unitary state nor a league/confederation among several small states, but a hybrid of the two).Madison, James, Hamilton, Alexander and Jay, John (1987) The Federalist Papers, Penguin, Harmondsworth, p. 259. In the course of the nineteenth century the meaning of federalism would come to shift, strengthening to refer uniquely to the novel compound political form, while the meaning of confederalism would remain that of a league of states.Law, John (2012) 'Sense on Federalism', in Political Quarterly, Vol. 83, No. 3, p. 544. Thus, this article relates to the modern usage of the word 'federalism'.
Modern federalism is a system based upon democratic rules and institutions in which the power to govern is shared between national and provincial/state governments. The term federalist describes several political beliefs around the world depending on context.
It is often perceived as an optimal solution for states comprising different cultural or ethnic communities. However, tensions between territories can be found even in federal countries such as Canada, and federation as a way to appease and quell military conflict has recently failed in places such as Libya and Iraq, while the formula is simultaneously proposed and dismissed in countries such as Ukraine and Syria.Why Talk of Federalism Won't Help Peace in Syria|Foreign Policy Federations such as Yugoslavia and Czechoslovakia collapsed as soon as it was possible to put the model to the test.'The Federal Experience in Yugoslavia', Mihailo Markovic, page 75; included in 'Rethinking Federalism: Citizens, Markets, and Governments in a changing world', edited by Karen Knop, Sylvia Ostry, Richard Simeon, Katherine Swinton|Google books
European vs. American federalism
In Europe, "Federalist" is sometimes used to describe those who favor a common federal government, with distributed power at regional, national and supranational levels. Most European federalists want this development to continue within the European Union. European federalism originated in post-war Europe; one of the more important initiatives was Winston Churchill's speech in Zürich in 1946.Winston Churchill's speech in Zürich in 1946
In the United States, federalism originally referred to belief in a stronger central government. When the U.S. Constitution was being drafted, the Federalist Party supported a stronger central government, while "Anti-Federalists" wanted a weaker central government. This is very different from the modern usage of "federalism" in Europe and the United States. The distinction stems from the fact that "federalism" is situated in the middle of the political spectrum between a confederacy and a unitary state. The U.S. Constitution was written as a reaction to the Articles of Confederation, under which the United States was a loose confederation with a weak central government.
In contrast, Europe has a greater history of unitary states than North America, thus European "federalism" argues for a weaker central government, relative to a unitary state. The modern American usage of the word is much closer to the European sense. As the power of the Federal government has increased, some people have perceived a much more unitary state than they believe the Founding Fathers intended. Most people politically advocating "federalism" in the United States argue in favor of limiting the powers of the federal government, especially the judiciary (see Federalist Society, New Federalism).
In Canada, federalism typically implies opposition to sovereigntist movements (most commonly Quebec separatism).
The governments of Argentina, Australia, Brazil, India, and Mexico, among others, are also organized along federalist principles.
Federalism may encompass as few as two or three internal divisions, as is the case in Belgium or Bosnia and Herzegovina. In general, two extremes of federalism can be distinguished: at one extreme, the strong federal state is almost completely unitary, with few powers reserved for local governments; while at the other extreme, the national government may be a federal state in name only, being a confederation in actuality.
In 1999, the Government of Canada established the Forum of Federations as an international network for exchange of best practices among federal and federalizing countries. Headquartered in Ottawa, the Forum of Federations partner governments include Australia, Brazil, Canada, Ethiopia, Germany, India, Mexico, Nigeria, and Switzerland.
Examples of federalism
Australia
thumb|250px|Commonwealth of Australia, consisting of its federal district, Australian Capital Territory (red), the states of New South Wales (pink), Queensland (blue), South Australia (purple), Tasmania (yellow, bottom), Victoria (green), Western Australia (orange) and the territories of Northern Territory (yellow, top) and Jervis Bay Territory (not shown).
On the 1st of January 1901 the nation-state of Australia officially came into existence as a federation. The Australian continent was colonised by the United Kingdom in 1788, which subsequently established six, eventually self-governing, colonies there. In the 1890s the governments of these colonies all held referendums on becoming a unified, self-governing "Commonwealth" within the British Empire. When all the colonies voted in favour of federation, the Federation of Australia commenced, resulting in the establishment of the Commonwealth of Australia in 1901. The model of Australian federalism adheres closely to the original model of the United States of America, although it does so through a parliamentary Westminster system rather than a presidential system.
Brazil
thumb|left|Brazil is a union of 26 states and its federal district, which is the site of the federal capital, Brasília.
In Brazil, the fall of the monarchy in 1889 by a military coup d'état led to the rise of the presidential system, headed by Deodoro da Fonseca. Aided by the well-known jurist Ruy Barbosa, Fonseca established federalism in Brazil by decree, and this system of government has been confirmed by every Brazilian constitution since 1891, although some of those constitutions distorted federalist principles. Under the 1937 constitution, the federal government had the authority to appoint State Governors (called interventors) at will, thus centralizing power in the hands of President Getúlio Vargas. Brazil also uses the Fonseca system to regulate interstate trade. Brazil is one of the largest federations in the world.
The Brazilian Constitution of 1988 introduced a new component to the ideas of federalism, including municipalities as federal entities. Brazilian municipalities are now invested with some of the traditional powers usually granted to states in federalism, and although they are not allowed to have a Constitution, they are structured by an organic law.
Canada
thumb|right|240px|In Canada, the provincial governments derive all their powers directly from the constitution. In contrast, the territories are subordinate to the federal government and are delegated powers by it.
In Canada the system of federalism is described by the division of powers between the federal parliament and the country's provincial governments. Under the Constitution Act (previously known as the British North America Act) of 1867, specific powers of legislation are allotted. Section 91 of the constitution gives rise to federal authority for legislation, whereas section 92 gives rise to provincial powers.
For matters not directly dealt with in the constitution, the federal government retains residual powers; however, conflict between the two levels of government, relating to which level has legislative jurisdiction over various matters, has been a longstanding and evolving issue. Areas of contest include legislation with respect to regulation of the economy, taxation, and natural resources.
India
thumb|left|Indian state governments led by various political parties
The Government of India (referred to as the Union Government) was established by the Constitution of India, and is the governing authority of a federal union of 29 states and 7 union territories.
The government of India is based on a three-tiered system, in which the Constitution of India delineates the subjects on which each tier of government has executive powers. The Constitution originally provided for a two-tier system of government, the Union Government (also known as the Central Government), representing the Union of India, and the State governments. Later, a third tier was added in the form of Panchayats and Municipalities. In the current arrangement, the Seventh Schedule of the Indian Constitution delimits the subjects of each level of governmental jurisdiction, dividing them into three lists:
Union List includes subjects of national importance such as defence of the country, foreign affairs, banking, communications and currency. The Union Government alone can make laws relating to the subjects mentioned in the Union List.
State List contains subjects of State and local importance such as police, trade, commerce, agriculture and irrigation. The State Governments alone can make laws relating to the subjects mentioned in the State List.
Concurrent List includes subjects of common interest to both the Union Government as well as the State Governments, such as education, forest, trade unions, marriage, adoption and succession. Both the Union as well as the State Governments can make laws on the subjects mentioned in this list. If their laws conflict with each other, the law made by the Union Government will prevail.
Asymmetric federalism
A distinguishing aspect of Indian federalism is that, unlike many other forms of federalism, it is asymmetric. Article 370 makes special provisions for the state of Jammu and Kashmir as per its Instrument of Accession. Article 371 makes special provisions for the states of Andhra Pradesh, Arunachal Pradesh, Assam, Goa, Mizoram, Manipur, Nagaland and Sikkim as per their accession or statehood deals. Another aspect of Indian federalism is the system of President's Rule, in which the central government (through its appointed Governor) takes control of a state's administration for a certain period when no party can form a government in the state or there is violent disturbance in the state.
Coalition politics
Although the Constitution does not say so, India is now a multilingual federation. India has a multi-party system, with political allegiances frequently based on linguistic, regional and caste identities,Johnson, A "Federalism: The Indian Experience", HSRC Press, 1996, p. 3, ISBN necessitating coalition politics, especially at the Union level.
Nigeria
South Africa
Although South Africa bears some elements of a federal system, such as the allocation of certain powers to provinces, it is nevertheless constitutionally and functionally a unitary state.
Federalism in Europe
Several federal systems exist in Europe, such as in Switzerland, Austria, Germany, Belgium, Bosnia and Herzegovina and the European Union.
Germany and the EU present the only examples of federalism in the world where members of the federal "upper houses" (the German Bundesrat (Federal Council) and the European Council) are neither elected nor appointed but comprise members or delegates of the governments of their constituents. The United States had a similar system until 1913: prior to the 17th Amendment, Senators were delegates of their states, elected by the state legislatures rather than by the citizens. In addition, in Germany and the EU the different constituents of the "upper house" do not have the same number of votes, contrary to the federal principle that one of the two houses of parliament has to grant equal voting power to the unequally sized and populated federated entities (e.g. the U.S. Senate or the Swiss Council of States (Ständerat)).
Modern Germany abandoned federalism only during Nazism (1933–1945) and in the DDR (German Democratic Republic a.k.a. East Germany) from 1952 to 1990. Adolf Hitler viewed federalism as an obstacle to his goals. As he wrote in Mein Kampf, "National Socialism must claim the right to impose its principles on the whole German nation, without regard to what were hitherto the confines of federal states."
Accordingly, the idea of a strong, centralized government has very negative associations in German politics, although the Progressive political movements in Germany (Liberals, Social Democrats) advocated at the time of the Second German Empire (1871-1918) abolishing (or reshaping) the majority of German federated states of that era, as they were considered to be mostly monarchist remnants of the feudal structures of the Middle Ages.Bernt Engelmann, Einig gegen Recht und Freiheit, p7ff, publ. Goldmann, Munich 1975
In Britain, an Imperial Federation was once seen as (inter alia) a method of solving the Home Rule problem in Ireland; federalism has long been proposed as a solution to the "Irish Problem", and more lately, to the "West Lothian question".
French Revolution
During the French Revolution, especially in 1793, "federalism" had an entirely different meaning. It was a political movement to weaken the central government in Paris by devolving power to the provinces.Bill Edmonds, "'Federalism' and Urban Revolt in France in 1793," Journal of Modern History (1983) 55#1 pp 22-53 in JSTORFrançois Furet and Mona Ozouf, eds. A Critical Dictionary of the French Revolution (1989), pp. 54-64
European Union
Following the end of World War II, several movements began advocating a European federation, such as the Union of European Federalists and the European Movement, founded in 1948. Those organizations exercised influence in the European unification process, but never in a decisive way.
Although the drafts of both the Maastricht treaty and the Treaty establishing a Constitution for Europe mentioned federalism, the reference never made it into the text of the treaties adopted by consensus. The strongest advocates of European federalism have been Germany, Italy, Belgium and Luxembourg, while those historically most strongly opposed have been the United Kingdom, Denmark and France (under conservative heads of state and government). Since the presidency of François Mitterrand (1981-1995), the French authorities have adopted a much more pro-European unification position, as they consider that a strong EU presents the best "insurance" against a unified Germany that might become too strong and thus a threat to its neighbours.
Russian Federation
thumb|left|320px|Federal subjects of Russia
The post-Imperial nature of Russian subdivision of government changed towards a generally autonomous model which began with the establishment of the USSR (of which Russia was governed as a part). It was liberalized in the aftermath of the dissolution of the Soviet Union, with the reforms under Boris Yeltsin preserving much of the Soviet structure while applying increasingly liberal reforms to the governance of the constituent republics and subjects (while also coming into conflict with Chechen secessionist rebels during the Chechen War). Some of the reforms under Yeltsin were scaled back by Vladimir Putin.
All of Russia's subdivisional entities are known as subjects, with some smaller entities, such as the republics, enjoying more autonomy than other subjects on account of having an extant presence of a culturally non-Russian ethnic minority or, in some cases, majority.
Currently, there are 85 federal subjects of Russia.
United States
Federalism in the United States is the evolving relationship between state governments and the federal government of the United States. American government has evolved from a system of dual federalism to one of associative federalism. In "Federalist No. 46," James Madison asserted that the states and national government "are in fact but different agents and trustees of the people, constituted with different powers." Alexander Hamilton, writing in "Federalist No. 28," suggested that both levels of government would exercise authority to the citizens' benefit: "If their [the peoples'] rights are invaded by either, they can make use of the other as the instrument of redress." (1)
thumb|300px|left|The United States is composed of fifty self-governing states and several territories.
Because the states were preexisting political entities, the U.S. Constitution did not need to define or explain federalism in any one section but it often mentions the rights and responsibilities of state governments and state officials in relation to the federal government. The federal government has certain express powers (also called enumerated powers) which are powers spelled out in the Constitution, including the right to levy taxes, declare war, and regulate interstate and foreign commerce. In addition, the Necessary and Proper Clause gives the federal government the implied power to pass any law "necessary and proper" for the execution of its express powers. Other powers—the reserved powers—are reserved to the people or the states. The power delegated to the federal government was significantly expanded by the Supreme Court decision in McCulloch v. Maryland (1819), amendments to the Constitution following the Civil War, and by some later amendments—as well as the overall claim of the Civil War, that the states were legally subject to the final dictates of the federal government.
The Federalist Party of the United States was opposed by the Democratic-Republicans, including powerful figures such as Thomas Jefferson. The Democratic-Republicans mainly believed that: the Legislature had too much power (mainly because of the Necessary and Proper Clause) and that they were unchecked; the Executive had too much power, and that there was no check on the executive; a dictator would arise; and that a bill of rights should be coupled with the constitution to prevent a dictator (then believed to eventually be the president) from exploiting or tyrannizing citizens. The federalists, on the other hand, argued that it was impossible to list all the rights, and those that were not listed could be easily overlooked because they were not in the official bill of rights. Rather, rights in specific cases were to be decided by the judicial system of courts.
After the American Civil War, the federal government increased greatly in influence on everyday life and in size relative to the state governments. Reasons included the need to regulate businesses and industries that span state borders, attempts to secure civil rights, and the provision of social services. The federal government acquired no substantial new powers until the acceptance by the Supreme Court of the Sherman Anti-Trust Act.
From 1938 until 1995, the U.S. Supreme Court did not invalidate any federal statute as exceeding Congress' power under the Commerce Clause. Most actions by the federal government can find some legal support among the express powers, such as the Commerce Clause, whose applicability has been narrowed by the Supreme Court in recent years. In 1995 the Supreme Court rejected the Gun-Free School Zones Act in the Lopez decision, and also rejected the civil remedy portion of the Violence Against Women Act of 1994 in the United States v. Morrison decision. Recently, the Commerce Clause was interpreted to include marijuana laws in the Gonzales v. Raich decision.
Dual federalism holds that the federal government and the state governments are co-equals, each sovereign.
However, since the Civil War Era, the national courts often interpret the federal government as the final judge of its own powers under dual federalism. The establishment of Native American governments (which are separate and distinct from state and federal government) exercising limited powers of sovereignty, has given rise to the concept of "bi-federalism."
Venezuela
The Federal War ended in 1863 with the signing of the Treaty of Coche by both the centralist government of the time and the Federal Forces. The United States of Venezuela were subsequently incorporated under a "Federation of Sovereign States" upon principles borrowed from the Articles of Confederation of the United States of America. In this Federation, each State had a "President" of its own that controlled almost every issue, even the creation of "State Armies," while the Federal Army was required to obtain presidential permission to enter any given state.
However, more than 140 years later, the original system has gradually evolved into a quasi-centralist form of government. While the 1999 Constitution still defines Venezuela as a Federal Republic, it abolished the Senate, transferred competences of the States to the Federal Government and granted the President of the Republic vast powers to intervene in the States and Municipalities.
Federalism with two components
Belgium
Federalism in the Kingdom of Belgium is an evolving system.
Belgian federalism is a twin system which reflects both the linguistic communities of the country, French (ca. 40% of the total population), Dutch (ca. 59%), and to a much lesser extent German (ca. 1%), and the geographically defined Regions (federated States: Brussels-Capital (de facto Greater Brussels), Flanders and Wallonia). The last two correspond to the language areas in Belgium, Wallonia hosting both the bulk of the French-speaking population and the German-speaking minority. In Brussels, ca. 80% of the population speaks French and ca. 20% Dutch, with the city being an enclave of the Flemish region and officially a bilingual area.”Taalgebruik in Brussel en de plaats van het Nederlands. Enkele recente bevindingen”, Rudi Janssens, Brussels Studies, Nummer 13, 7 January 2008 (see page 4).
Flanders is the region associated with Belgium's Dutch-speaking majority, i.e. the Flemish Community.
Due to its relatively small size (approximately one percent) the German-speaking Community of Belgium does not have much influence on national politics.
Wallonia is a French-speaking area, except for the German-speaking so-called East Cantons (Cantons de l'est). French is the second most spoken mother tongue of Belgium, after Dutch. Within the French-speaking Community of Belgium, there is a geographical and political distinction between Wallonia and Brussels for historical and sociological reasons.Historically, the Walloons were for a federalism with three components and the Flemings for two.See: Witte, Els & Craeybeckx, Jan. Politieke geschiedenis van België. Antwerpen, SWU, pp. 455, 459-460. This difference is one of the elements which makes the Belgian issue so complicated. The Flemings wanted to defend their culture while the Walloons wanted to defend their political and economical supremacy they had in the 19th century: It is true that the Walloon movement, which has never stopped affirming that Wallonia is part of the French cultural area, has never made this cultural struggle a priority, being more concerned to struggle against its status as a political minority and the economic decline which was only a corollary to it.ligne.net/Wallonie_Politique/1995_Destatte_Philippe_Wallonia-Identity.htm Wallonia today - The search for an identity without nationalist mania - (1995)
On one hand, this means that the Belgian political landscape, generally speaking, consists of only two components: the Dutch-speaking population represented by Dutch-language political parties, and the majority populations of Wallonia and Brussels, represented by their French-speaking parties. The Brussels region emerges as a third component.Charles Picqué, Minister-President of the Brussels-Capital Region, said in a September 2008 declaration in Namur at the National Walloon Feast: "It is, besides, impossible to have a debate about the institutions of Belgium in which Brussels would be excluded." (In French: Il n'est d'ailleurs, pas question d'imaginer un débat institutionnel dont Bruxelles serait exclu.) The Brussels-Capital Region has claimed and obtained a special place in the current negotiations about the reformation of the Belgian state: "For 18 years, Brussels remained without a status (...) The absence of a status for Brussels was explained by the difference of vision that the Flemish and francophone parties had of it: [the Flemish parties were] allergic to the notion of a Region (...) the francophones (...) considered that Brussels had to become a fully-fledged Region (...) The Flemish parties accepted [in 1988] the creation of a third Region and its exercise of the same competences as the two others..." C.E. Lagasse, Les nouvelles institutions politiques de la Belgique et de l'Europe, Erasme, Namur, 2003, pp. 177-178, ISBN. This specific dual form of federalism, with the special position of Brussels, consequently has a number of political issues, even minor ones, that are being fought out over the Dutch/French-language political division. With such issues, a final decision is possible only in the form of a compromise. This tendency gives this dual federalism model a number of traits that generally are ascribed to confederalism, and makes the future of Belgian federalism contentious.
On the other hand, Belgian federalism is in practice federated into three components. An affirmative resolution concerning Brussels' place in the federal system passed in the parliaments of Wallonia and Brussels.La Libre Belgique 17 juillet 2008La Libre Belgique, 19 juillet 2008 These resolutions passed against the wishes of the Dutch-speaking parties, who are generally in favour of a federal system with two components (i.e. the Dutch and French Communities of Belgium). However, the Flemish representatives in the Parliament of the Brussels Capital-Region voted in favour of the Brussels resolution, with the exception of one party. The chairman of the Walloon Parliament stated on July 17, 2008 that "Brussels would take an attitude".Le Vif Brussels' parliament passed the resolution on July 18, 2008:
The Parliament of the Brussels-Capital Region approved by a large majority a resolution claiming the presence of Brussels itself at the negotiations on the reform of the Belgian State. July 18, 2008
This aspect of Belgian federalism helps to explain the difficulties of partition; Brussels, with its importance, is linked to both Wallonia and Flanders and vice versa. This situation, however, does not erase the traits of a confederation in the Belgian system.
Other examples
thumb|200px|Official flag of Iraqi Kurdistan
Current examples of two-sided federalism:
Bosnia and Herzegovina is a federation of two entities: Republika Srpska and Federation of Bosnia and Herzegovina (the latter itself a federation).
Historical examples of two-sided federalism include:
Czechoslovakia, until the Czech Republic and Slovakia separated in 1993.
The Federal Republic of Yugoslavia, from 1992 to 2003, when it became a confederation titled the State Union of Serbia and Montenegro. This confederation expired in 2006 when Montenegro declared its independence.
The 1960 Constitution of Cyprus was based on the same ideas, but the union of Greeks and Turks failed.
United Republic of Tanzania (formerly United Republic of Tanganyika and Zanzibar), which was the union of Tanganyika and Zanzibar.
Iraq adopted a federal system on 15 October 2005, and formally recognized the Kurdistan Region as the country's first and currently only federal region. See Constitution of Iraq for more information regarding Iraq's method of creating federal entities.
The Federal Republic of Cameroon operated between 1961 and 1972.
Proposed federalism
It has been proposed in several unitary states to establish a federal system, for various reasons.
China
China is the largest unitary state in the world by both population and land area. Although China has had long periods of central rule for centuries, it is often argued that the unitary structure of the Chinese government is far too unwieldy to effectively and equitably manage the country's affairs. On the other hand, Chinese nationalists are suspicious of decentralization as a form of secessionism and a backdoor for national disunity; still others argue that the degree of autonomy given to provincial-level officials in the People's Republic of China amounts to a de facto federalism.
Libya
Shortly after the 2011 civil war, some people in Cyrenaica (in the eastern region of the country) began to call for the new regime to be federal, with the traditional three regions of Libya (Cyrenaica, Tripolitania, and Fezzan) being the constituent units. A group calling itself the "Cyrenaican Transitional Council" issued a declaration of autonomy on 6 March 2012; this move was rejected by the National Transitional Council in Tripoli.Thomson Reuters Foundation | News, Information and Connections for Action . Trust.org (2012-03-06). Retrieved on 2013-07-12.
Philippines
150px|thumb|right|11 Proposed "States" for the proposed Federal Republic of the Philippines
The Philippines is a unitary state with some powers devolved to Local Government Units (LGUs) under the terms of the Local Government Code. There is also one autonomous region, the Autonomous Region in Muslim Mindanao. Over the years various modifications have been proposed to the Constitution of the Philippines, including possible transition to a federal system as part of a shift to a parliamentary system. In 2004, Philippine President Gloria Macapagal Arroyo established the Consultative Commission which suggested such a Charter Change but no action was taken by the Philippine Congress to amend the 1987 Constitution.
Spain
Spain is a unitary state with a high level of decentralisation, often regarded as a federal system in all but name or a "federation without federalism".The Federal Option and Constitutional Management of Diversity in Spain Xavier Arbós Marín, page 375; included in 'The Ways of Federalism in Western Countries and the Horizons of Territorial Autonomy in Spain' (volume 2), edited by Alberto López-Eguren and Leire Escajedo San Epifanio; edited by Springer ISBN 978-3-642-27716-0, ISBN 978-3-642-27717-7(eBook) The country has been quoted as being "an extraordinarily decentralized country", with the central government accounting for just 18% of public spending, 38% for the regional governments, 13% for the local councils, and the remaining 31% for the social security system. The current Spanish constitution has been implemented in such a way that, in many respects, Spain can be compared to countries which are undeniably federal.The Federal Option and Constitutional Management of Diversity in Spain Xavier Arbós Marín, page 381; included in 'The Ways of Federalism in Western Countries and the Horizons of Territorial Autonomy in Spain' (volume 2), edited by Alberto López-Eguren and Leire Escajedo San Epifanio; edited by Springer ISBN 978-3-642-27716-0, ISBN 978-3-642-27717-7(eBook)
However, in order to manage the tensions present in the Spanish transition to democracy, the drafters of the current Spanish constitution avoided giving labels such as 'federal' to the territorial arrangements. In addition, unlike in a federal system, the main taxes are collected centrally in Madrid (except for the Basque Country and Navarre, which were recognized in the Spanish democratic constitution as charter territories for historical reasons) and then distributed to the Autonomous Communities.
An explicit and legal recognition of federalism as such is promoted by parties such as Podemos, United Left and, more recently, the Spanish Socialist Workers' Party. The Spanish Socialist party has recently considered the idea of enshrining a federal Spain, in part, due to the increase of the Spanish peripheral nationalisms and the Catalan proposal of self-determination referenda for creating a Catalan State in Catalonia, either independent or within Spain.Federalism. El País.El PSOE plantea una reforma de la Constitución para una España federal. El País.Mas encarga el diseño de un Estado catalán. El País.
Sri Lanka
Syria
United Kingdom
thumb|120px|Map of the Constituent countries of the United Kingdom and Regions of England
The United Kingdom has traditionally been governed as a unitary state by the Westminster Parliament in London. Instead of adopting a federal model, the UK has relied on gradual devolution to decentralise political power. Devolution in the UK began with the Government of Ireland Act 1914 which granted home rule to Ireland as a constituent country of the former United Kingdom of Great Britain and Ireland. Following the partition of Ireland in 1921 which saw the creation of the sovereign Irish Free State (which eventually evolved into the modern day Republic of Ireland), Northern Ireland retained its devolved government through the Parliament of Northern Ireland, the only part of the UK to have such a body at this time. This body was suspended in 1972 and Northern Ireland was governed by direct rule during the period of conflict known as The Troubles.
In modern times, a process of devolution in the United Kingdom has decentralised power once again. Since the 1997 referendums in Scotland and Wales and the Good Friday Agreement in Northern Ireland, three of the four constituent countries of the UK now have some level of autonomy. Government has been devolved to the Scottish Parliament, the National Assembly for Wales and the Northern Ireland Assembly. England does not have its own parliament and English affairs continue to be decided by the Westminster Parliament. In 1998 a set of eight unelected Regional assemblies, or chambers, was created to support the English Regional Development Agencies, but these were abolished between 2008 and 2010. The Regions of England continue to be used in certain governmental administrative functions.
Critics of devolution often cite the West Lothian Question, which refers to the voting power of non-English MPs on matters affecting only England in the UK Parliament. Scottish and Welsh nationalism have been increasing in popularity, and since the 2014 Scottish independence referendum there has been a wider debate about the UK adopting a federal system, with each of the four home nations having its own, equal devolved legislature and law-making powers.
UK federal government was proposed as early as 1912 by the Member of Parliament for Dundee, Winston Churchill, in the context of the legislation for Irish Home Rule. In a speech in Dundee on 12 September, he proposed that England should also be governed by regional parliaments, with power devolved to areas such as Lancashire, Yorkshire, the Midlands and London as part of a federal system of government.
Federalism as the anarchist and libertarian socialist mode of political organization
Anarchists are against the State but are not against political organization or "governance"—so long as it is self-governance utilizing direct democracy. The mode of political organization preferred by anarchists, in general, is federalism or confederalism. However, the anarchist definition of federalism tends to differ from the definition of federalism assumed by pro-state political scientists. The following is a brief description of federalism from section I.5 of An Anarchist FAQ:
"The social and political structure of anarchy is similar to that of the economic structure, i.e., it is based on a voluntary federation of decentralized, directly democratic policy-making bodies. These are the neighborhood and community assemblies and their confederations. In these grassroots political units, the concept of "self-management" becomes that of "self-government", a form of municipal organisation in which people take back control of their living places from the bureaucratic state and the capitalist class whose interests it serves.
[...]
The key to that change, from the anarchist standpoint, is the creation of a network of participatory communities based on self-government through direct, face-to-face democracy in grassroots neighborhood and community assemblies [meetings for discussion, debate, and decision making].
[...]
Since not all issues are local, the neighborhood and community assemblies will also elect mandated and re-callable delegates to the larger-scale units of self-government in order to address issues affecting larger areas, such as urban districts, the city or town as a whole, the county, the bio-region, and ultimately the entire planet. Thus the assemblies will confederate at several levels in order to develop and co-ordinate common policies to deal with common problems.
[...]
This need for co-operation does not imply a centralized body. To exercise your autonomy by joining self-managing organisations and, therefore, agreeing to abide by the decisions you help make is not a denial of that autonomy (unlike joining a hierarchical structure, where you forsake autonomy within the organisation). In a centralized system, we must stress, power rests at the top and the role of those below is simply to obey (it matters not if those with the power are elected or not, the principle is the same). In a federal system, power is not delegated into the hands of a few (obviously a "federal" government or state is a centralized system). Decisions in a federal system are made at the base of the organisation and flow upwards so ensuring that power remains decentralized in the hands of all. Working together to solve common problems and organize common efforts to reach common goals is not centralization and those who confuse the two make a serious error -- they fail to understand the different relations of authority each generates and confuse obedience with co-operation."Anarchist Writers. "I.5 What could the social structure of anarchy look like?" An Anarchist FAQ. http://www.infoshop.org/page/AnarchistFAQSectionI5
Christian Church
Federalism also finds expression in ecclesiology (the doctrine of the church). For example, presbyterian church governance resembles parliamentary republicanism (a form of political federalism) to a large extent. In Presbyterian denominations, the local church is ruled by elected elders, some of whom are ministers. Each church then sends representatives or commissioners to presbyteries and further to a general assembly. Each greater level of assembly has ruling authority over its constituent members. In this governmental structure, each component has some level of sovereignty over itself. As in political federalism, in presbyterian ecclesiology there is shared sovereignty.
Other ecclesiologies also have significant representational and federalistic components, including the more anarchic congregational ecclesiology and even the more hierarchical episcopal ecclesiology.
Some Christians argue that the earliest source of political federalism (or federalism in human institutions; in contrast to theological federalism) is the ecclesiastical federalism found in the Bible. They point to the structure of the early Christian Church as described (and prescribed, as believed by many) in the New Testament. In their arguments, this is particularly demonstrated in the Council of Jerusalem, described in Acts chapter 15, where the Apostles and elders gathered together to govern the Church; the Apostles being representatives of the universal Church, and elders being such for the local church. To this day, elements of federalism can be found in almost every Christian denomination, some more than others.
Constitutional structure
Division of powers
In a federation, the division of power between federal and regional governments is usually outlined in the constitution. Almost every country allows some degree of regional self-government, but in federations the right to self-government of the component states is constitutionally entrenched. Component states often also possess their own constitutions, which they may amend as they see fit, although in the event of conflict the federal constitution usually takes precedence.
In almost all federations the central government enjoys the powers of foreign policy and national defense as exclusive federal powers. Were this not the case a federation would not be a single sovereign state, per the UN definition. Notably, the states of Germany retain the right to act on their own behalf at an international level, a condition originally granted in exchange for the Kingdom of Bavaria's agreement to join the German Empire in 1871. Beyond this the precise division of power varies from one nation to another.
The constitutions of Germany and the United States provide that all powers not specifically granted to the federal government are retained by the states. The constitutions of some countries, such as Canada and India, on the other hand, state that powers not explicitly granted to the provincial governments are retained by the federal government. Much like the US system, the Australian Constitution allocates to the Federal government (the Commonwealth of Australia) the power to make laws about certain specified matters which were considered too difficult for the States to manage, so that the States retain all other areas of responsibility. Under the division of powers of the European Union in the Lisbon Treaty, powers which are not either exclusively of European competence or shared between EU and state as concurrent powers are retained by the constituent states.
thumb|Satiric depiction of late 19th century political tensions in Spain
Where every component state of a federation possesses the same powers, we are said to find 'symmetric federalism'. Asymmetric federalism exists where states are granted different powers, or some possess greater autonomy than others do. This is often done in recognition of the existence of a distinct culture in a particular region or regions. In Spain, the Basques and Catalans, as well as the Galicians, spearheaded a historic movement to have their national specificity recognized, crystallizing in the "historical communities" such as Navarre, Galicia, Catalonia, and the Basque Country. They have more powers than under the later expanded arrangement for other Spanish regions, or the Spain of the autonomous communities (also called the "coffee for everyone" arrangement), partly to deal with their separate identity and to appease peripheral nationalist leanings, partly out of respect for specific rights they had held earlier in history. However, strictly speaking, Spain is not a federation, but a decentralized administrative organization of the state.
It is common that during the historical evolution of a federation there is a gradual movement of power from the component states to the centre, as the federal government acquires additional powers, sometimes to deal with unforeseen circumstances. The acquisition of new powers by a federal government may occur through formal constitutional amendment or simply through a broadening of the interpretation of a government's existing constitutional powers given by the courts.
Usually, a federation is formed at two levels: the central government and the regions (states, provinces, territories), and little to nothing is said about second or third level administrative political entities. Brazil is an exception, because the 1988 Constitution included the municipalities as autonomous political entities making the federation tripartite, encompassing the Union, the States, and the municipalities. Each state is divided into municipalities (municípios) with their own legislative council (câmara de vereadores) and a mayor (prefeito), which are partly autonomous from both Federal and State Government. Each municipality has a "little constitution", called "organic law" (lei orgânica). Mexico is an intermediate case, in that municipalities are granted full-autonomy by the federal constitution and their existence as autonomous entities (municipio libre, "free municipality") is established by the federal government and cannot be revoked by the states' constitutions. Moreover, the federal constitution determines which powers and competencies belong exclusively to the municipalities and not to the constituent states. However, municipalities do not have an elected legislative assembly.
Federations often employ the paradox of being a union of states, while still being states (or having aspects of statehood) in themselves. For example, James Madison (author of the US Constitution) wrote in Federalist Paper No. 39 that the US Constitution "is in strictness neither a national nor a federal constitution; but a composition of both. In its foundation, it is federal, not national; in the sources from which the ordinary powers of the Government are drawn, it is partly federal, and partly national..." This stems from the fact that states in the US maintain all sovereignty that they do not yield to the federation by their own consent. This was reaffirmed by the Tenth Amendment to the United States Constitution, which reserves all powers and rights that are not delegated to the Federal Government as left to the States and to the people.
Bicameralism
The structures of most federal governments incorporate mechanisms to protect the rights of component states. One method, known as 'intrastate federalism', is to directly represent the governments of component states in federal political institutions. Where a federation has a bicameral legislature the upper house is often used to represent the component states while the lower house represents the people of the nation as a whole. A federal upper house may be based on a special scheme of apportionment, as is the case in the senates of the United States and Australia, where each state is represented by an equal number of senators irrespective of the size of its population.
Alternatively, or in addition to this practice, the members of an upper house may be indirectly elected by the government or legislature of the component states, as occurred in the United States prior to 1913, or be actual members or delegates of the state governments, as, for example, is the case in the German Bundesrat and in the Council of the European Union. The lower house of a federal legislature is usually directly elected, with apportionment in proportion to population, although states may sometimes still be guaranteed a certain minimum number of seats.
Intergovernmental relations
In Canada, the provincial governments represent regional interests and negotiate directly with the central government. A First Ministers conference of the prime minister and the provincial premiers is the de facto highest political forum in the land, although it is not mentioned in the constitution.
Constitutional change
Federations often have special procedures for amendment of the federal constitution. As well as reflecting the federal structure of the state this may guarantee that the self-governing status of the component states cannot be abolished without their consent. An amendment to the constitution of the United States must be ratified by three-quarters of either the state legislatures, or of constitutional conventions specially elected in each of the states, before it can come into effect. In referendums to amend the constitutions of Australia and Switzerland it is required that a proposal be endorsed not just by an overall majority of the electorate in the nation as a whole, but also by separate majorities in each of a majority of the states or cantons. In Australia, this latter requirement is known as a double majority.
Some federal constitutions also provide that certain constitutional amendments cannot occur without the unanimous consent of all states or of a particular state. The US constitution provides that no state may be deprived of equal representation in the senate without its consent. In Australia, if a proposed amendment will specifically impact one or more states, then it must be endorsed in the referendum held in each of those states. Any amendment to the Canadian constitution that would modify the role of the monarchy would require unanimous consent of the provinces. The German Basic Law provides that no amendment is admissible at all that would abolish the federal system.
Other technical terms
Fiscal federalism – the relative financial positions and the financial relations between the levels of government in a federal system.
Formal federalism (or 'constitutional federalism') – the delineation of powers is specified in a written constitution, which may or may not correspond to the actual operation of the system in practice.
Executive federalism refers in the English-speaking tradition to the intergovernmental relationships between the executive branches of the levels of government in a federal system and in the continental European tradition to the way constituent units 'execute' or administer laws made centrally.
Federalism as a political philosophy
The meaning of federalism, as a political movement, and of what constitutes a 'federalist', varies with country and historical context. Movements associated with the establishment or development of federations can exhibit either centralising or decentralising trends. For example, at the time those nations were being established, factions known as "federalists" in the United States and Australia advocated the formation of strong central government. Similarly, in European Union politics, federalists mostly seek greater EU integration. In contrast, in Spain and in post-war Germany, federal movements have sought decentralisation: the transfer of power from central authorities to local units. In Canada, where Quebec separatism has been a political force for several decades, the "federalist" impulse aims to keep Quebec inside Canada.
Federalism as a conflict reducing device
Federalism, and other forms of territorial autonomy, is generally seen as a useful way to structure political systems in order to prevent violence among different groups within countries because it allows certain groups to legislate at the subnational level.Arend Lijphart. 1977. Democracy in Plural Societies: A Comparative Exploration. New Haven CT: Yale University Press. Some scholars have suggested, however, that federalism can divide countries and result in state collapse because it creates proto-states.Henry E. Hale. Divided We Stand: Institutional Sources of Ethnofederal State Survival and Collapse. World Politics 56(2): 165-193. Still others have shown that federalism is only divisive when it lacks mechanisms that encourage political parties to compete across regional boundaries.Dawn Brancati. 2009. Peace by Design: Managing Intrastate Conflict through Decentralization. Oxford: Oxford UP.
See also
Consociationalism
Cooperative federalism
Democratic World Federalists
Federal Union
Layer cake federalism
Pillarisation
States' rights
Union of Utrecht
World Federalist Movement
Notes and references
External links
P.-J. Proudhon, The Principle of Federation, 1863.
A Comparative Bibliography: Regulatory Competition on Corporate Law
A Rhetoric for Ratification: The Argument of the Federalist and its Impact on Constitutional Interpretation
National
Teaching about Federalism in the United States - From the Education Resources Information Center Clearinghouse for Social Studies/Social Science Education Bloomington, Indiana.
An Ottawa, Ontario, Canada-based international organization for federal countries that share best practices among countries with that system of government
Tenth Amendment Center Federalism and States Rights in the U.S.
BackStory Radio episode on the origins and current status of Federalism
Constitutional law scholar Hester Lessard discusses Vancouver's Downtown Eastside and jurisdictional justice McGill University, 2011
General Federalism
Category:Political systems
Category:Political theories
ca:Federació
Mali
Mali, officially the Republic of Mali, is a landlocked country in West Africa. Mali is the eighth-largest country in Africa, with an area of just over 1,240,000 square kilometres. The population of Mali is 14.5 million. Its capital is Bamako. Mali consists of eight regions and its borders on the north reach deep into the middle of the Sahara Desert, while the country's southern part, where the majority of inhabitants live, features the Niger and Senegal rivers. The country's economy centers on agriculture and fishing. Some of Mali's prominent natural resources include gold (it is the third largest producer of gold on the African continent)Mali gold reserves rise in 2011 alongside price. Retrieved 17 January 2013 and salt. About half the population lives below the international poverty line of $1.25 (U.S.) a day.Human Development Indices, Table 3: Human and income poverty, p. 6. Retrieved 1 June 2009 A majority of the population (90%) are Muslims."Chapter 1: Religious Affiliation". The World's Muslims: Unity and Diversity. Pew Research Center's Religion & Public Life Project. 9 August 2012. Retrieved 4 September 2013.
Present-day Mali was once part of three West African empires that controlled trans-Saharan trade: the Ghana Empire, the Mali Empire (for which Mali is named), and the Songhai Empire. During its golden age, there was a flourishing of mathematics, astronomy, literature, and art.Topics. MuslimHeritage.com (5 June 2003). Retrieved 8 October 2012.Sankore University. Muslimmuseum.org. Retrieved 8 October 2012. At its peak in 1300, the Mali Empire covered an area about twice the size of modern-day France and stretched to the west coast of Africa.Mali Empire (ca. 1200- ) | The Black Past: Remembered and Reclaimed. The Black Past. Retrieved 8 October 2012. In the late 19th century, during the Scramble for Africa, France seized control of Mali, making it a part of French Sudan. French Sudan (then known as the Sudanese Republic) joined with Senegal in 1959, achieving independence in 1960 as the Mali Federation. Shortly thereafter, following Senegal's withdrawal from the federation, the Sudanese Republic declared itself the independent Republic of Mali. After a long period of one-party rule, a coup in 1991 led to the writing of a new constitution and the establishment of Mali as a democratic, multi-party state.
In January 2012, an armed conflict broke out in northern Mali, in which Tuareg rebels took control of a territory in the north by April and declared the secession of a new state, Azawad.Lydia Polgreen and Alan Cowell, "Mali Rebels Proclaim Independent State in North", The New York Times (6 April 2012) The conflict was complicated by a military coup that took place in MarchUN Security council condemns Mali coup. Telegraph (23 March 2012). Retrieved 24 March 2013. and later fighting between Tuareg and Islamist rebels. In response to Islamist territorial gains, the French military launched Opération Serval in January 2013. A month later, Malian and French forces recaptured most of the north. Presidential elections were held on 28 July 2013, with a second round run-off held on 11 August, and legislative elections were held on 24 November and 15 December 2013.
History
thumb|left|The extent of the Mali Empire at its peak
thumb|left|Pages from the Timbuktu Manuscripts, written in Sudani script (a form of Arabic) during the Mali Empire, showing established knowledge of astronomy and mathematics. Today there are close to a million of these manuscripts found in Timbuktu alone.
thumb|left|Griots of Sambala, king of Médina (Fula people, Mali), 1890.
Mali was once part of three famed West African empires which controlled trans-Saharan trade in gold, salt, slaves, and other precious commodities.Mali country profile, p. 1. These Sahelian kingdoms had neither rigid geopolitical boundaries nor rigid ethnic identities. The earliest of these empires was the Ghana Empire, which was dominated by the Soninke, a Mande-speaking people. The empire expanded throughout West Africa from the 8th century until 1078, when it was conquered by the Almoravids.Mali country profile, p. 2.
The Mali Empire later formed on the upper Niger River, and reached the height of power in the 14th century. Under the Mali Empire, the ancient cities of Djenné and Timbuktu were centers of both trade and Islamic learning. The empire later declined as a result of internal intrigue, ultimately being supplanted by the Songhai Empire. The Songhai people originated in current northwestern Nigeria. The Songhai had long been a major power in West Africa subject to the Mali Empire's rule.
In the late 14th century, the Songhai gradually gained independence from the Mali Empire and expanded, ultimately subsuming the entire eastern portion of the Mali Empire. The Songhai Empire's eventual collapse was largely the result of a Moroccan invasion in 1591, under the command of Judar Pasha. The fall of the Songhai Empire marked the end of the region's role as a trading crossroads. Following the establishment of sea routes by the European powers, the trans-Saharan trade routes lost significance.
One of the worst famines in the region's recorded history occurred in the 18th century. According to John Iliffe, "The worst crises were in the 1680s, when famine extended from the Senegambian coast to the Upper Nile and 'many sold themselves for slaves, only to get a sustenance', and especially in 1738–56, when West Africa's greatest recorded subsistence crisis, due to drought and locusts, reportedly killed half the population of Timbuktu."John Iliffe (2007) Africans: the history of a continent. Cambridge University Press. p. 69. ISBN 0-521-68297-5
French colonial rule
thumb|Cotton being processed in Niono into bales for export to other parts of Africa and to France, .
Mali fell under the control of France during the late 19th century. By 1905, most of the area was under firm French control as a part of French Sudan. In early 1959, French Sudan (which changed its name to the Sudanese Republic) and Senegal united to become the Mali Federation. The Mali Federation gained independence from France on 20 June 1960.
Senegal withdrew from the federation in August 1960, which allowed the Sudanese Republic to become the independent Republic of Mali on 22 September 1960. Modibo Keïta was elected the first president. Keïta quickly established a one-party state, adopted an independent African and socialist orientation with close ties to the East, and implemented extensive nationalization of economic resources. In 1960, the population of Mali was reported to be about 4.1 million.Core document forming part of the reports of states parties: Mali. United Nations Human Rights Website.
Moussa Traoré
On 19 November 1968, following progressive economic decline, the Keïta regime was overthrown in a bloodless military coup led by Moussa Traoré,Mali country profile, p. 3. a day which is now commemorated as Liberation Day. The subsequent military-led regime, with Traoré as president, attempted to reform the economy. His efforts were frustrated by political turmoil and a devastating drought between 1968 and 1974, in which famine killed thousands of people."Mali's nomads face famine". BBC News. 9 August 2005. The Traoré regime faced student unrest beginning in the late 1970s and three coup attempts. The Traoré regime repressed all dissenters until the late 1980s.
The government continued to attempt economic reforms, and the populace became increasingly dissatisfied. In response to growing demands for multi-party democracy, the Traoré regime allowed some limited political liberalization, but refused to usher in a full-fledged democratic system. In 1990, cohesive opposition movements began to emerge; their rise was complicated by the turbulent outbreak of ethnic violence in the north following the return of many Tuaregs to Mali.
thumb|WWI Commemorative Monument to the "Armée Noire"
Anti-government protests in 1991 led to a coup, a transitional government, and a new constitution. Opposition to the corrupt and dictatorial regime of General Moussa Traoré grew during the 1980s. During this time strict programs, imposed to satisfy demands of the International Monetary Fund, brought increased hardship upon the country's population, while elites close to the government supposedly lived in growing wealth. Peaceful student protests in January 1991 were brutally suppressed, with mass arrests and torture of leaders and participants. Mali March 1991 Revolution Scattered acts of rioting and vandalism of public buildings followed, but most actions by the dissidents remained nonviolent.
March Revolution
From 22 March through 26 March 1991, mass pro-democracy rallies and a nationwide strike were held in both urban and rural communities, which became known as les evenements ("the events") or the March Revolution. In Bamako, in response to mass demonstrations organized by university students and later joined by trade unionists and others, soldiers opened fire indiscriminately on the nonviolent demonstrators. Riots broke out briefly following the shootings. Barricades and roadblocks were erected, and Traoré declared a state of emergency and imposed a nightly curfew. Despite an estimated loss of 300 lives over the course of four days, nonviolent protesters continued to return to Bamako each day demanding the resignation of the dictatorial president and the implementation of democratic policies.
26 March 1991 marks the day of the clash between soldiers and peacefully demonstrating students, which culminated in the massacre of dozens under the orders of then-President Moussa Traoré. He and three associates were later tried, convicted, and sentenced to death for their part in the decision-making of that day. The day is now a national holiday commemorating the tragic events and the people who were killed. The coup is remembered as Mali's March Revolution of 1991.
By 26 March, the growing refusal of soldiers to fire into the largely nonviolent protesting crowds turned into a full-scale tumult, and resulted in thousands of soldiers putting down their arms and joining the pro-democracy movement. That afternoon, Lieutenant Colonel Amadou Toumani Touré announced on the radio that he had arrested the dictatorial president, Moussa Traoré. As a consequence, opposition parties were legalized and a national congress of civil and political groups met to draft a new democratic constitution to be approved by a national referendum.
Amadou Toumani Touré presidency
In 1992, Alpha Oumar Konaré won Mali's first democratic, multi-party presidential election, before being re-elected for a second term in 1997, which was the last allowed under the constitution. In 2002 Amadou Toumani Touré, a retired general who had been the leader of the military aspect of the 1991 democratic uprising, was elected.Mali country profile, p. 4. During this democratic period Mali was regarded as one of the most politically and socially stable countries in Africa.USAID Africa: Mali. USAID. Retrieved 15 May 2008. Retrieved 3 June 2008.
Slavery persists in Mali today with as many as 200,000 people held in direct servitude to a master. In the Tuareg Rebellion of 2012, ex-slaves were a vulnerable population with reports of some slaves being recaptured by their former masters.
Northern Mali conflict
In January 2012 a Tuareg rebellion began in Northern Mali, led by the National Movement for the Liberation of Azawad.Mali clashes force 120 000 from homes. News24 (22 February 2012). Retrieved 23 February 2012. In March, military officer Amadou Sanogo seized power in a coup d'état, citing Touré's failures in quelling the rebellion, and leading to sanctions and an embargo by the Economic Community of West African States.Callimachi, Rukmini (3 April 2012) "Post-coup Mali hit with sanctions by African neighbours". Globe and Mail. Retrieved 4 May 2012. The MNLA quickly took control of the north, declaring independence as Azawad. However, Islamist groups including Ansar Dine and Al-Qaeda in the Islamic Maghreb (AQIM), who had helped the MNLA defeat the government, turned on the Tuareg and took control of the North with the goal of implementing sharia in Mali.
On 11 January 2013, the French Armed Forces intervened at the request of the interim government.
On 30 January, advancing French and Malian troops claimed to have retaken the last remaining Islamist stronghold of Kidal, which was also the last of three northern provincial capitals.French troops retake the last remaining Islamist urban stronghold in Mali. On 2 February, the French President, François Hollande, joined Mali's interim President, Dioncounda Traoré, in a public appearance in recently recaptured Timbuktu.
Geography
Satellite image of Mali|thumb
thumb|Mali map of Köppen climate classification.
left|Landscape in Hombori|thumb
Mali is a landlocked country in West Africa, located southwest of Algeria. It lies between latitudes 10° and 25°N, and longitudes 13°W and 5°E. Mali is bordered by Algeria to the northeast, Niger to the east, Burkina Faso and Côte d'Ivoire to the south, Guinea to the south-west, and Senegal and Mauritania to the west.
At , including the disputed region of Azawad, Mali is the world's 24th-largest country and is comparable in size to South Africa or Angola. Most of the country lies in the southern Sahara Desert, which produces an extremely hot, dust-laden Sudanian savanna zone.Mali country profile, p. 5. Mali is mostly flat, rising to rolling northern plains covered by sand. The Adrar des Ifoghas massif lies in the northeast.
Mali lies in the torrid zone and is among the hottest countries in the world. The thermal equator, which matches the hottest spots year-round on the planet based on the mean daily annual temperature, crosses the country. Most of Mali receives negligible rainfall and droughts are very frequent. Late June to early December is the rainy season in the southernmost area. During this time, flooding of the Niger River is common, creating the Inner Niger Delta. The vast northern desert part of Mali has a hot desert climate (Köppen climate classification BWh) with long, extremely hot summers and scarce rainfall which decreases northwards. The central area has a hot semi-arid climate (Köppen climate classification BSh) with very high temperatures year-round, a long, intense dry season and a brief, irregular rainy season. The narrow southern band has a tropical wet and dry climate (Köppen climate classification Aw), with very high temperatures year-round, a dry season and a rainy season.
Mali has considerable natural resources, with gold, uranium, phosphates, kaolinite, salt and limestone being most widely exploited. Mali is estimated to have in excess of 17,400 tonnes of uranium (measured + indicated + inferred).Uranium Mine Ownership – Africa. Wise-uranium.org. Retrieved 24 March 2013.Muller, CJ and Umpire, A (22 November 2012) An Independent Technical Report on the Mineral Resources of Falea Uranium, Copper and Silver Deposit, Mali, West Africa. Minxcon. In 2012, a further uranium mineralized north zone was identified.Uranium in Africa. World-nuclear.org. Retrieved 24 March 2013. Mali faces numerous environmental challenges, including desertification, deforestation, soil erosion, and inadequate supplies of potable water.
Regions and cercles
Mali is divided into eight regions (régions) and one district. Each region has a governor.DiPiazza, p. 37. Since Mali's regions are very large, the country is subdivided into 49 cercles and 703 communes.
The régions and Capital District are:
Kayes: area 119,743 km2; population 1,374,316 (1998 census), 1,996,812 (2009 census)
Koulikoro: area 95,848 km2; population 1,570,507 (1998 census), 2,418,305 (2009 census)
Bamako (Capital District): area 252 km2; population 1,016,296 (1998 census), 1,809,106 (2009 census)
Sikasso: area 70,280 km2; population 1,782,157 (1998 census), 2,625,919 (2009 census)
Ségou: area 64,821 km2; population 1,675,357 (1998 census), 2,336,255 (2009 census)
Mopti: area 79,017 km2; population 1,484,601 (1998 census), 2,037,330 (2009 census)
Tombouctou (Timbuktu): area 496,611 km2; population 442,619 (1998 census), 681,691 (2009 census)
Gao: area 170,572 km2; population 341,542 (1998 census), 544,120 (2009 census)
Kidal: area 151,430 km2; population 38,774 (1998 census), 67,638 (2009 census)
Extent of central government control
In March 2012, the Malian government lost control over Tombouctou, Gao and Kidal Regions and the north-eastern portion of Mopti Region. On 6 April 2012, the National Movement for the Liberation of Azawad unilaterally declared their secession from Mali as Azawad, an act that neither Mali nor the international community recognised. The government later regained control over these areas.
Politics and government
thumb|Former interim President of Mali Dioncounda Traoré
Until the military coup of 22 March 2012Video: US condemns Mali coup amid reports of looting. Telegraph (22 March 2012). Retrieved 24 March 2013. and a second military coup in December 2012,Hossiter, Adam (12 December 2012) Mali’s Prime Minister Resigns After Arrest, Muddling Plans to Retake North. The New York Times Mali was a constitutional democracy governed by the Constitution of 12 January 1992, which was amended in 1999. The constitution provides for a separation of powers among the executive, legislative, and judicial branches of government.Mali country profile, p. 14. The system of government can be described as "semi-presidential". Executive power is vested in a president, who is elected to a five-year term by universal suffrage and is limited to two terms.Constitution of Mali, Art. 30.
The president serves as a chief of state and commander in chief of the armed forces.Constitution of Mali, Art. 29 & 46. A prime minister appointed by the president serves as head of government and in turn appoints the Council of Ministers.Constitution of Mali, Art. 38. The unicameral National Assembly is Mali's sole legislative body, consisting of deputies elected to five-year terms.Mali country profile, p. 15.Constitution of Mali, Art. 59 & 61. Following the 2007 elections, the Alliance for Democracy and Progress held 113 of 160 seats in the assembly. Koné, Denis. Mali: "Résultats définitifs des Législatives". Les Echos (Bamako) (13 August 2007). Retrieved 24 June 2008. The assembly holds two regular sessions each year, during which it debates and votes on legislation that has been submitted by a member or by the government.Constitution of Mali, Art. 65.
Mali's constitution provides for an independent judiciary,Constitution of Mali, Art. 81. but the executive continues to exercise influence over the judiciary by virtue of power to appoint judges and oversee both judicial functions and law enforcement. Mali's highest courts are the Supreme Court, which has both judicial and administrative powers, and a separate Constitutional Court that provides judicial review of legislative acts and serves as an election arbiter.Constitution of Mali, Art. 83–94. Various lower courts exist, though village chiefs and elders resolve most local disputes in rural areas.
Foreign relations
Former President of Mali Amadou Toumani Touré and Minister-president of the Netherlands Mark Rutte|thumb
Mali's foreign policy orientation has become increasingly pragmatic and pro-Western over time.Mali country profile, p. 17. Since the institution of a democratic form of government in 2002, Mali's relations with the West in general and with the United States in particular have improved significantly. Mali has a longstanding yet ambivalent relationship with France, a former colonial ruler. Mali was active in regional organizations such as the African Union until its suspension over the 2012 Malian coup d'état.
Working to control and resolve regional conflicts, such as in Ivory Coast, Liberia, and Sierra Leone, is one of Mali's major foreign policy goals. Mali feels threatened by the potential for the spillover of conflicts in neighboring states, and relations with those neighbors are often uneasy. General insecurity along borders in the north, including cross-border banditry and terrorism, remain troubling issues in regional relations.
Military
Mali's military forces consist of an army, which includes land forces and air force, as well as the paramilitary Gendarmerie and Republican Guard, all of which are under the control of Mali's Ministry of Defense and Veterans, headed by a civilian.Mali country profile, p. 18. The military is underpaid, poorly equipped, and in need of rationalization.
Economy
thumb|A market scene in Djenné.
thumb|Kalabougou potters.
thumb|Cotton processing at CMDT.
The Central Bank of West African States handles the financial affairs of Mali and additional members of the Economic Community of West African States. Mali is one of the poorest countries in the world. The average worker's annual salary is approximately US$1,500.
Mali underwent economic reform, beginning in 1988 by signing agreements with the World Bank and the International Monetary Fund. Between 1988 and 1996, Mali's government largely reformed public enterprises. Since the agreement, 16 enterprises have been privatized, 12 partially privatized, and 20 liquidated. In 2005, the Malian government conceded a railroad company to the Savage Corporation. Two major companies, Societé de Telecommunications du Mali (SOTELMA) and the Cotton Ginning Company (CMDT), were expected to be privatized in 2008.
Between 1992 and 1995, Mali implemented an economic adjustment programme that resulted in economic growth and a reduction in financial imbalances. The programme improved social and economic conditions, and led to Mali joining the World Trade Organization on 31 May 1995.Mali and the WTO. World Trade Organization. Retrieved 24 March 2013.
Mali is also a member of the Organization for the Harmonization of Business Law in Africa (OHADA). The gross domestic product (GDP) has risen since then. In 2002, the GDP amounted to US$3.4 billion,Mali country profile, p. 9. and increased to US$5.8 billion in 2005, which amounts to an approximately 17.6 percent annual growth rate.
Mali is part of the "French Zone" (Zone Franc), which means that it uses the CFA franc. Mali has been linked with the French government by agreement since 1962 (the creation of the BCEAO). Today, all member countries of the BCEAO (including Mali) are connected to the French Central Bank.Zone franc sur le site de la Banque de France. Banque-france.fr. Retrieved 24 March 2013.
Agriculture
Mali's key industry is agriculture. Cotton is the country's largest crop export and is shipped west through Senegal and Ivory Coast. During 2002, 620,000 tons of cotton were produced in Mali, but cotton prices declined significantly in 2003. In addition to cotton, Mali produces rice, millet, corn, vegetables, tobacco, and tree crops. Gold, livestock and agriculture amount to 80% of Mali's exports.
Eighty percent of Malian workers are employed in agriculture, and 15 percent work in the service sector. Seasonal variations lead to regular temporary unemployment of agricultural workers.
Mining
In 1991, with the assistance of the International Development Association, Mali relaxed the enforcement of mining codes which led to renewed foreign interest and investment in the mining industry. Gold is mined in the southern region and Mali has the third highest gold production in Africa (after South Africa and Ghana).
The emergence of gold as Mali's leading export product since 1999 has helped mitigate some of the negative impact of the cotton and Ivory Coast crises.African Development Bank, p. 186. Other natural resources include kaolin, salt, phosphate, and limestone.
Energy
Electricity and water are maintained by Energie du Mali (EDM), and textiles are produced by Industrie Textile du Mali (ITEMA). Mali has made efficient use of hydroelectricity, which accounts for over half of Mali's electrical power. In 2002, 700 GWh of hydroelectric power were produced in Mali.
Energie du Mali is the electric company that provides electricity to Malian citizens; only 55% of the population in cities has access to EDM.Farvacque-Vitkovic, Catherine et al. (September 2007) DEVELOPMENT OF THE CITIES OF MALI — Challenges and Priorities. Africa Region Working Paper Series No. 104/a. World Bank
Transport infrastructure
Mali has a railway that connects it to neighbouring countries, as well as approximately 29 airports, of which 8 have paved runways. Urban areas are known for their large numbers of green and white taxicabs. A significant portion of the population depends on public transportation.
Society
Demographics
thumb|upright|A Bozo girl in Bamako
In July 2009, Mali's population was an estimated 14.5 million. The population is predominantly rural (68 percent in 2002), and 5–10 percent of Malians are nomadic.Mali country profile, p. 6. More than 90 percent of the population lives in the southern part of the country, especially in Bamako, which has over 1 million residents.
In 2007, about 48 percent of Malians were younger than 15 years old, 49 percent were 15–64 years old, and 3 percent were 65 and older. The median age was 15.9 years. The birth rate in 2014 was 45.53 births per 1,000, and the total fertility rate (in 2012) was 6.4 children per woman. The death rate in 2007 was 16.5 deaths per 1,000. Life expectancy at birth was 53.06 years total (51.43 for males and 54.73 for females). Mali has one of the world's highest rates of infant mortality, with 106 deaths per 1,000 live births in 2007.
Ethnicity
thumb|left|The Tuareg are historic, nomadic inhabitants of northern Mali.
Mali's population encompasses a number of sub-Saharan ethnic groups.
The Bambara are by far the largest single ethnic group, making up 36.5 percent of the population.
Collectively, the Bambara, Soninké, Khassonké, and Malinké (also called Mandinka), all part of the broader Mandé group, constitute 50 percent of Mali's population. Other significant groups are the Fula (17 percent), Voltaic (12 percent), Songhai (6 percent), and Tuareg and Moor (10 percent).
In the far north, there is a division between Berber-descended Tuareg nomad populations and the darker-skinned Bella or Tamasheq people, due to the historical spread of slavery in the region.
An estimated 800,000 people in Mali are descended from slaves. Slavery in Mali has persisted for centuries.
The Arab population kept slaves well into the 20th century, until slavery was suppressed by French authorities around the mid-20th century. Certain hereditary servitude relationships still persist,"Kayaking to Timbuktu, Writer Sees Slave Trade". National Geographic News. 5 December 2002."Kayaking to Timbuktu, Original National Geographic Adventure Article discussing Slavery in Mali". National Geographic Adventure. December 2002/January 2003. and according to some estimates, approximately 200,000 Malians are still enslaved today.
Although Mali has enjoyed reasonably good inter-ethnic relations based on a long history of coexistence, some hereditary servitude and bondage relationships persist, as well as ethnic tension between the settled Songhai and the nomadic Tuaregs of the north. Due to a backlash against the northern population after independence, Mali is now in a situation where both groups complain about discrimination on the part of the other.Bruce S. Hall, A History of Race in Muslim West Africa, 1600–1960. Cambridge University Press, 2011, ISBN 9781107002876: "The mobilization of local ideas about racial difference has been important in generating, and intensifying, civil wars that have occurred since the end of colonial rule in all of the countries that straddle the southern edge of the Sahara Desert. [...] contemporary conflicts often hearken back to an older history in which blackness could be equated with slavery and non-blackness with predatory and uncivilized banditry." (cover text) This conflict also plays a role in the continuing Northern Mali conflict, where there is tension between the Tuaregs and the Malian government, and between the Tuaregs and radical Islamists who are trying to establish sharia law.see e.g. Mali's conflict and a 'war over skin colour', Afua Hirsch, The Guardian, Friday 6 July 2012.
Languages
Mali's official language is French, and over 40 African languages are also spoken by the various ethnic groups. About 80 percent of Mali's population can communicate in Bambara, which serves as an important lingua franca.
Mali has 12 national languages beside French and Bambara, namely Bomu, Tieyaxo Bozo, Toro So Dogon, Maasina Fulfulde, Hassaniya Arabic, Mamara Senoufo, Kita Maninkakan, Soninke, Koyraboro Senni, Syenara Senoufo, Tamasheq and Xaasongaxango. Each is spoken as a first language primarily by the ethnic group with which it is associated.
Religion
thumb|A mosque entrance
Islam was introduced to West Africa in the 11th century and remains the predominant religion in much of the region. An estimated 90 percent of Malians are Muslim (mostly Sunni and Ahmadiyya), approximately 5 percent are Christian (about two-thirds Roman Catholic and one-third Protestant) and the remaining 5 percent adhere to indigenous or traditional animist beliefs.International Religious Freedom Report 2008: Mali. State.gov (19 September 2008). Retrieved 4 May 2012. Atheism and agnosticism are believed to be rare among Malians, most of whom practice their religion on a daily basis.
The constitution establishes a secular state and provides for freedom of religion, and the government largely respects this right.
Islam as historically practiced in Mali has been moderate, tolerant, and adapted to local conditions; relations between Muslims and practitioners of minority religious faiths have generally been amicable.
After the 2012 imposition of sharia rule in northern parts of the country, however, Mali came to be listed high (number 7) in the Christian persecution index published by Open Doors, which described the persecution in the north as severe.Report points to 100 million persecuted Christians.. Retrieved 10 January 2013.OPEN DOORS World Watch list 2012. Worldwatchlist.us. Retrieved 24 March 2013.
Education
thumb|left|High school students in Kati
Public education in Mali is in principle provided free of charge and is compulsory for nine years between the ages of seven and sixteen. The system encompasses six years of primary education beginning at age 7, followed by six years of secondary education. Mali's actual primary school enrollment rate is low, in large part because families are unable to cover the cost of uniforms, books, supplies, and other fees required to attend.
In the 2000–01 school year, the primary school enrollment rate was 61 percent (71 percent of males and 51 percent of females). In the late 1990s, the secondary school enrollment rate was 15 percent (20 percent of males and 10 percent of females). The education system is plagued by a lack of schools in rural areas, as well as shortages of teachers and materials.
Estimates of literacy rates in Mali range from 27–30 percent to 46.4 percent, with literacy rates significantly lower among women than men. The University of Bamako, which includes four constituent universities, is the largest university in the country and enrolls approximately 60,000 undergraduate and graduate students.
Health
Mali faces numerous health challenges related to poverty, malnutrition, and inadequate hygiene and sanitation.Mali country profile, p. 7. Mali's health and development indicators rank among the worst in the world. Life expectancy at birth is estimated to be 53.06 years in 2012.CIA World Factbook: Life Expectancy ranks In 2000, 62–65 percent of the population was estimated to have access to safe drinking water and only 69 percent to sanitation services of some kind. In 2001, the general government expenditures on health totalled about US$4 per capita at an average exchange rate.Mali country profile, p. 8.
Efforts have been made to improve nutrition, and reduce associated health problems, by encouraging women to make nutritious versions of local recipes. For example, the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) and the Aga Khan Foundation, trained women's groups to make equinut, a healthy and nutritional version of the traditional recipe di-dèguè (comprising peanut paste, honey and millet or rice flour). The aim was to boost nutrition and livelihoods by producing a product that women could make and sell, and which would be accepted by the local community because of its local heritage.Nourishing communities through holistic farming, Impatient optimists, Bill & Melinda Gates Foundation. 30 April 2013.
Medical facilities in Mali are very limited, and medicines are in short supply. Malaria and other arthropod-borne diseases are prevalent in Mali, as are a number of infectious diseases such as cholera and tuberculosis. Mali's population also suffers from a high rate of child malnutrition and a low rate of immunization. An estimated 1.9 percent of adults and children were living with HIV/AIDS, among the lowest rates in Sub-Saharan Africa. An estimated 85–91 percent of Mali's girls and women have had female genital mutilation (2006 and 2001 data).WHO | Female genital mutilation and other harmful practices. Who.int (6 May 2011). Retrieved 4 May 2012.Female genital cutting in the Demographic Health Surveys: a critical and comparative analysis. Calverton, MD: ORC Marco; 2004 (DHS Comparative Reports No. 7). (PDF). Retrieved 18 January 2013.
Culture
thumb|Konoguel Mosque tower
The varied everyday culture of Malians reflects the country's ethnic and geographic diversity.Pye-Smith, Charlie & Rhéal Drisdelle. Mali: A Prospect of Peace? Oxfam (1997). ISBN 0-85598-334-5, p. 13. Most Malians wear flowing, colorful robes called boubous that are typical of West Africa. Malians frequently participate in traditional festivals, dances, and ceremonies.
Music
Malian musical traditions are derived from the griots, who are known as "Keepers of Memories".Crabill, Michelle and Tiso, Bruce (January 2003). Mali Resource Website. Fairfax County Public Schools. Retrieved 4 June 2008. Malian music is diverse and has several different genres. Some famous Malian influences in music are kora virtuoso musician Toumani Diabaté, the late roots and blues guitarist Ali Farka Touré, the Tuareg band Tinariwen, and several Afro-pop artists such as Salif Keita, the duo Amadou et Mariam, Oumou Sangare, Rokia Traore, and Habib Koité. Dance also plays a large role in Malian culture. Dance parties are common events among friends, and traditional mask dances are performed at ceremonial events.
Literature
Though Mali's literature is less famous than its music,Velton, p. 29. Mali has always been one of Africa's liveliest intellectual centers. Mali's literary tradition is passed mainly by word of mouth, with jalis reciting or singing histories and stories known by heart.Milet, p. 128.Velton, p. 28. Amadou Hampâté Bâ, Mali's best-known historian, spent much of his life writing these oral traditions down for the world to remember.
The best-known novel by a Malian writer is Yambo Ouologuem's Le devoir de violence, which won the 1968 Prix Renaudot but whose legacy was marred by accusations of plagiarism. Other well-known Malian writers include Baba Traoré, Modibo Sounkalo Keita, Massa Makan Diabaté, Moussa Konaté, and Fily Dabo Sissoko.
Sport
thumb|Malian children playing football in a Dogon village
The most popular sport in Mali is football (soccer),Milet, p. 151.DiPiazza, p. 55. which became more prominent after Mali hosted the 2002 African Cup of Nations.Hudgens, Jim, Richard Trillo, and Nathalie Calonnec. The Rough Guide to West Africa. Rough Guides (2003). ISBN 1-84353-118-6, p. 320. Most towns and cities have regular games; the most popular teams nationally are Djoliba AC, Stade Malien, and Real Bamako, all based in the capital. Informal games are often played by youths using a bundle of rags as a ball.
Basketball is another major sport;"Malian Men Basketball". Africabasket.com. Retrieved 3 June 2008. the Mali women's national basketball team, led by Hamchetou Maiga, competed at the 2008 Beijing Olympics.Chitunda, Julio. "Ruiz looks to strengthen Mali roster ahead of Beijing". FIBA.com (13 March 2008). Retrieved 24 June 2008. Traditional wrestling (la lutte) is also somewhat common, though popularity has declined in recent years. The game wari, a mancala variant, is a common pastime.
Cuisine
thumb|Malian tea
Rice and millet are the staples of Malian cuisine, which is heavily based on cereal grains.Velton, p. 30. Grains are generally prepared with sauces made from edible leaves, such as spinach or baobab, with tomato peanut sauce, and may be accompanied by pieces of grilled meat (typically chicken, mutton, beef, or goat).Milet, p. 146. Malian cuisine varies regionally. Other popular dishes include fufu, jollof rice, and maafe.
Media
Mali has several newspapers, including Les Echos, L'Essor, Info Matin, Nouvel Horizon, and Le Républicain. Telecommunications in Mali include 869,600 mobile phones, 45,000 televisions and 414,985 internet users.
See also
Ebola virus disease in Mali
Index of Mali-related articles
Mali conflict
Outline of Mali
References
Bibliography
External links
The European Union mission in Mali – Hungary's involvement in the mission
War at the background of Europe: The crisis of Mali
Mali from UCB Libraries GovPubs
Hungarian soldiers in EUTM MALI
Mali profile from the BBC News
Possibilities and Challenges for Transitional Justice in Mali from the ICTJ
Trade
Mali 2012 Trade Summary Statistics
Category:Former French colonies
Category:French-speaking countries and territories
Category:Landlocked countries
Category:Least developed countries
Category:Member states of the Organisation internationale de la Francophonie
Category:Member states of the African Union
Category:Member states of the Organisation of Islamic Cooperation
Category:Member states of the United Nations
Category:Republics
Category:States and territories established in 1960
Category:West African countries
Category:Muslim-majority countries
Category:1960 establishments in Africa
Geography of the United States
The term "United States", when used in the geographical sense, refers to the contiguous United States, the state of Alaska, the island state of Hawaii, the five insular territories of Puerto Rico, Northern Mariana Islands, U.S. Virgin Islands, Guam, and American Samoa, and minor outlying possessions.U.S. State Department, Common Core Document to U.N. Committee on Human Rights, December 30, 2011, Item 22, 27, 80; Homeland Security Public Law 107-296 Sec.2.(16)(A); Presidential Proclamation of national jurisdiction The United States shares land borders with Canada and Mexico and maritime borders with Russia, Cuba, and the Bahamas, in addition to Canada and Mexico.
Area
From 1989 through 1996, the total area of the US was listed as (land + inland water only). The listed total area changed to in 1997 (Great Lakes area and coastal waters added), to in 2004, to in 2006, and to in 2007 (territorial waters added). Currently, the CIA World Factbook gives , the United Nations Statistics Division gives , and the Encyclopædia Britannica gives (Great Lakes area included but not coastal waters). These sources consider only the 50 states and the Federal District, and exclude overseas territories.
By total area (water as well as land), the United States is either slightly larger or smaller than the People's Republic of China, making it the world's third or fourth largest country. China and the United States are smaller than Russia and Canada in total area, but are larger than Brazil. By land area only (exclusive of waters), the United States is the world's third largest country, after Russia and China, with Canada in fourth.
Whether the US or China is the third largest country by total area depends on two factors: (1) The validity of China's claim on Aksai Chin and Trans-Karakoram Tract. Both these territories are also claimed by India, so are not counted; and (2) How US calculates its own surface area. Since the initial publishing of the World Factbook, the CIA has updated the total area of United States a number of times.
General characteristics
thumb|450px|A satellite composite image of the contiguous United States. Deciduous vegetation and grasslands prevail in the east, transitioning to prairies, boreal forests, and the Rockies in the west, and deserts in the southwest. In the northeast, the coasts of the Great Lakes and Atlantic seaboard host much of the country's population.
The United States shares land borders with Canada (to the north) and Mexico (to the south), and a territorial water border with Russia in the northwest, and two territorial water borders in the southeast between Florida and Cuba, and Florida and the Bahamas. The contiguous forty-eight states are otherwise bounded by the Pacific Ocean on the west, the Atlantic Ocean on the east, and the Gulf of Mexico to the southeast. Alaska borders the Pacific Ocean to the south and southwest, the Bering Strait to the west, and the Arctic Ocean to the north, while Hawaii lies far to the southwest of the mainland in the Pacific Ocean.
Forty-eight of the states are in the single region between Canada and Mexico; this group is referred to, with varying precision and formality, as the continental or contiguous United States, and as the Lower 48. Alaska, which is not included in the term contiguous United States, is at the northwestern end of North America, separated from the Lower 48 by Canada.
The capital city, Washington, District of Columbia, is a federal district located on land donated by the state of Maryland. (Virginia had also donated land, but it was returned in 1849.) The United States also has overseas territories with varying levels of independence and organization: in the Caribbean the territories of Puerto Rico and the U.S. Virgin Islands, and in the Pacific the inhabited territories of Guam, American Samoa, and the Northern Mariana Islands, along with a number of uninhabited island territories.
Physiographic divisions
thumb|300px|Denali, Alaska, the highest point in North America at .
The eastern United States has a varied topography. A broad, flat coastal plain lines the Atlantic and Gulf shores from the Texas-Mexico border to New York City, and includes the Florida peninsula. Areas further inland feature rolling hills and temperate forests. The Appalachian Mountains form a line of low mountains separating the eastern seaboard from the Great Lakes and the Mississippi Basin.
The five Great Lakes are located in the north-central portion of the country, four of them forming part of the border with Canada, with only Lake Michigan situated entirely within the United States. The southeastern United States contains subtropical forests and mangrove wetlands in Florida. West of the Appalachians lie the Mississippi River basin and two large eastern tributaries, the Ohio River and the Tennessee River. The Ohio and Tennessee Valleys and the Midwest consist largely of rolling hills and productive farmland, stretching south to the Gulf Coast.
The Great Plains lie west of the Mississippi River and east of the Rocky Mountains. A large portion of the country's agricultural products are grown in the Great Plains. Before their general conversion to farmland, the Great Plains were noted for their extensive grasslands, from tallgrass prairie in the eastern plains to shortgrass steppe in the western High Plains. Elevation rises gradually from less than a few hundred feet near the Mississippi River to more than a mile high in the High Plains. The generally low relief of the plains is broken in several places, most notably in the Ozark and Ouachita Mountains, which form the U.S. Interior Highlands, the only major mountainous region between the Rocky Mountains and the Appalachian Mountains.
The Great Plains come to an abrupt end at the Rocky Mountains. The Rocky Mountains form a large portion of the Western U.S., entering from Canada and stretching nearly to Mexico. The Rocky Mountain region is the highest region of the United States by average elevation. The Rocky Mountains generally contain fairly mild slopes and wider peaks compared to some of the other great mountain ranges, with a few exceptions (such as the Teton Mountains in Wyoming and the Sawatch Range in Colorado). The highest peaks of the Rockies are found in Colorado, the tallest peak being Mount Elbert at . The Rocky Mountains contain some of the most spectacular and well-known scenery in the world. In addition, instead of being one generally continuous and solid mountain range, they are broken up into a number of smaller, intermittent mountain ranges, forming a large series of basins and valleys.
West of the Rocky Mountains lies the Intermontane Plateaus (also known as the Intermountain West), a large, arid desert lying between the Rockies and the Cascades and Sierra Nevada ranges. The large southern portion, known as the Great Basin, consists of salt flats, drainage basins, and many small north-south mountain ranges. The Southwest is predominantly a low-lying desert region. A portion known as the Colorado Plateau, centered around the Four Corners region, is considered to have some of the most spectacular scenery in the world. It is accentuated in such national parks as Grand Canyon, Arches, Mesa Verde National Park and Bryce Canyon, among others. Other smaller Intermontane areas include the Columbia Plateau covering eastern Washington, western Idaho and northeast Oregon and the Snake River Plain in Southern Idaho.
thumb|300px|The Grand Canyon from Moran Point. The Grand Canyon is among the most famous locations in the country.
The Intermontane Plateaus come to an end at the Cascade Range and the Sierra Nevada. The Cascades consist of largely intermittent, volcanic mountains, many rising prominently from the surrounding landscape. The Sierra Nevada, further south, is a high, rugged, and dense mountain range. It contains the highest point in the contiguous 48 states, Mount Whitney, which is located at the boundary between California's Inyo and Tulare counties, just west-northwest of the lowest point in North America, the Badwater Basin in Death Valley National Park, at below sea level.
These areas contain some spectacular scenery as well, as evidenced by such national parks as Yosemite and Mount Rainier. West of the Cascades and Sierra Nevada is a series of valleys, such as the Central Valley in California and the Willamette Valley in Oregon. Along the coast is a series of low mountain ranges known as the Pacific Coast Ranges. Much of the Pacific Northwest coast is inhabited by some of the densest vegetation outside of the Tropics, and also the tallest trees in the world (the Redwoods).
Alaska contains some of the most dramatic and untapped scenery in the country. Tall, prominent mountain ranges rise up sharply from broad, flat tundra plains. On the islands off the south and southwest coast are many volcanoes. Hawaii, far to the south of Alaska in the Pacific Ocean, is a chain of tropical, volcanic islands, popular as a tourist destination for many from East Asia and the mainland United States.
The territories of Puerto Rico and the U.S. Virgin Islands encompass a number of tropical isles in the northeastern Caribbean Sea. In the Pacific Ocean the territories of Guam and the Northern Mariana Islands occupy the limestone and volcanic isles of the Mariana archipelago, and American Samoa (the only populated US territory in the southern hemisphere) encompasses volcanic peaks and coral atolls in the eastern part of the Samoan Islands chain.
Physiographic regions
thumb|450px|A physiographical map of the contiguous 48 states of the U.S. The map indicates the age of the exposed surface as well as the type of terrain.
The geography of the United States varies across its immense area. Within the continental U.S., eight distinct physiographic divisions exist, though each is composed of several smaller physiographic subdivisions. These major divisions are:
Laurentian Upland - part of the Canadian Shield that extends into the northern United States Great Lakes area.
Atlantic Plain - the coastal regions of the eastern and southern parts includes the continental shelf, the Atlantic Coast and the Gulf Coast.
Appalachian Highlands - lying on the eastern side of the United States, it includes the Appalachian Mountains, the Watchung Mountains, the Adirondacks and New England province originally containing the Great Eastern Forest.
Interior Plains - part of the interior continental United States, it includes much of what is called the Great Plains.
Interior Highlands - also part of the interior continental United States, this division includes the Ozark Plateau.
Rocky Mountain System - one branch of the Cordilleran system lying far inland in the western states.
Intermontane Plateaus - also divided into the Columbia Plateau, the Colorado Plateau and the Basin and Range Province, it is a system of plateaus, basins, ranges and gorges between the Rocky and Pacific Mountain Systems. It is the setting for the Grand Canyon, the Great Basin and Death Valley.
Pacific Mountain System - the coastal mountain ranges and features in the west coast of the United States.
thumb|300px|Much of the central United States is covered by relatively flat, arable land. This aerial photo was taken over northern Ohio.
The Atlantic coast of the United States is low, with minor exceptions. The Appalachian Highland owes its oblique northeast-southwest trend to crustal deformations which in very early geological time gave a beginning to what later came to be the Appalachian mountain system. This system had its climax of deformation so long ago (probably in Permian time) that it has since then been very generally reduced to moderate or low relief. It owes its present-day altitude either to renewed elevations along the earlier lines or to the survival of the most resistant rocks as residual mountains. The oblique trend of this coast would be even more pronounced but for a comparatively modern crustal movement, causing a depression in the northeast resulting in an encroachment of the sea upon the land. Additionally, the southeastern section has undergone an elevation resulting in the advance of the land upon the sea.
While the Atlantic coast is relatively low, the Pacific coast is, with few exceptions, hilly or mountainous. This coast has been defined chiefly by geologically recent crustal deformations, and hence still preserves a greater relief than that of the Atlantic. The low Atlantic coast and the hilly or mountainous Pacific coast foreshadow the leading features in the distribution of mountains within the United States.
The east coast Appalachian system, originally forest covered, is relatively low and narrow and is bordered on the southeast and south by an important coastal plain. The Cordilleran system on the western side of the continent is lofty, broad and complicated having two branches, the Rocky Mountain System and the Pacific Mountain System. In between these mountain systems lie the Intermontane Plateaus. Both the Columbia River and Colorado River rise far inland near the easternmost members of the Cordilleran system, and flow through plateaus and intermontane basins to the ocean. Heavy forests cover the northwest coast, but elsewhere trees are found only on the higher ranges below the Alpine region. The intermontane valleys, plateaus and basins range from treeless to desert with the most arid region being in the southwest.
The Laurentian Highlands, the Interior Plains and the Interior Highlands lie between the two coasts, stretching from the Gulf of Mexico northward, far beyond the national boundary, to the Arctic Ocean. The central plains are divided by a hardly perceptible height of land into a Canadian and a United States portion. It is from the United States side, that the great Mississippi system discharges southward to the Gulf of Mexico. The upper Mississippi and some of the Ohio basin is the semi-arid prairie region, with trees originally only along the watercourses. The uplands towards the Appalachians were included in the great eastern forested area, while the western part of the plains has so dry a climate that its native plant life is scanty, and in the south it is practically barren.
Elevation extremes:
Lowest point: Death Valley, Inyo County, California
Highest point: Denali, Denali Borough, Alaska
Climate
thumb|350px|Köppen climate types of the US
thumb|350px|A map of average precipitation across the contiguous US.
Due to its large size and wide range of geographic features, the United States contains examples of nearly every global climate. The climate is temperate in most areas, subtropical in the Southern United States, tropical in Hawaii and southern Florida, polar in Alaska, semiarid in the Great Plains west of the 100th meridian, Mediterranean in coastal California and arid in the Great Basin. Its comparatively favorable agricultural climate contributed (in part) to the country's rise as a world power, with infrequent severe drought in the major agricultural regions, a general lack of widespread flooding, and a mainly temperate climate that receives adequate precipitation.
The main influence on U.S. weather is the polar jet stream, which migrates northward into Canada in the summer months and then southward into the USA in the winter months. The jet stream brings in large low pressure systems from the northern Pacific Ocean that enter the US mainland over the Pacific Northwest. The Cascade Range, Sierra Nevada, and Rocky Mountains pick up most of the moisture from these systems as they move eastward. By the time they reach the High Plains, the systems are greatly diminished, much of their moisture having been sapped by the orographic effect as they are forced over several mountain ranges.
Once it moves over the Great Plains, uninterrupted flat land allows it to reorganize and can lead to major clashes of air masses. In addition, moisture from the Gulf of Mexico is often drawn northward. When combined with a powerful jet stream, this can lead to violent thunderstorms, especially during spring and summer. Sometimes during winter these storms can combine with another low pressure system as they move up the East Coast and into the Atlantic Ocean, where they intensify rapidly.
These storms are known as Nor'easters and often bring widespread, heavy snowfall to New England. The uninterrupted flat grasslands of the Great Plains also lead to some of the most extreme climate swings in the world. Temperatures can rise or drop rapidly and winds can be extreme, and the flow of heat waves or Arctic air masses often advance uninterrupted through the plains.
The Great Basin and Columbia Plateau (the Intermontane Plateaus) are arid or semiarid regions that lie in the rain shadow of the Cascades and Sierra Nevada. Precipitation averages less than . The Southwest is a hot desert, with temperatures exceeding for several weeks at a time in summer. The Southwest and the Great Basin are also affected by the monsoon from the Gulf of California from July to September, which brings localized but often severe thunderstorms to the region.
Much of California consists of a Mediterranean climate, with sometimes excessive rainfall from October–April and nearly no rain the rest of the year. In the Pacific Northwest rain falls year-round, but is much heavier during winter and spring. The mountains of the west receive abundant precipitation and very heavy snowfall. The Cascades are one of the snowiest places in the world, with some places averaging over of snow annually, but the lower elevations closer to the coast receive very little snow.
Florida has a subtropical climate in the northern and central part of the state and a tropical climate in the far southern part of the state. Summers are wet and winters are dry in Florida. Much of central and southern Florida is frost-free year-round. The mild winters of Florida allow a massive citrus industry to thrive in the central part of the state, and Florida is second only to Brazil in citrus production in the world.
Another significant (but localized) weather effect is lake-effect snow that falls south and east of the Great Lakes, especially in the hilly portions of the Upper Peninsula of Michigan and on the Tug Hill Plateau in New York. The lake effect dumped well over of snow in the area of Buffalo, New York throughout the 2006-2007 winter. The Wasatch Front and Wasatch Range in Utah can also receive significant lake effect accumulations from the Great Salt Lake.
Extremes
In northern Alaska, tundra and arctic conditions predominate, and the temperature has fallen as low as .Williams, Jack Each state's low temperature record, USA today, URL accessed 13 June 2006. On the other end of the spectrum, Death Valley, California once reached , the highest temperature ever recorded on Earth.
On average, the mountains of the western states receive the highest levels of snowfall on Earth. The greatest annual snowfall level is at Mount Rainier in Washington, at ; the record there was in the winter of 1971–72. This record was broken by the Mt. Baker Ski Area in northwestern Washington which reported of snowfall for the 1998-99 snowfall season. Other places with significant snowfall outside the Cascade Range are the Wasatch Mountains, near the Great Salt Lake, the San Juan Mountains in Colorado, and the Sierra Nevada, near Lake Tahoe.
In the east, while snowfall does not approach western levels, the region near the Great Lakes and the mountains of the Northeast receive the most. Along the northwestern Pacific coast, rainfall is greater than anywhere else in the continental U.S., with Quinault Rainforest in Washington having an average of .National Atlas, Average Annual Precipitation, 1961-1990, URL accessed 15 June 2006. Hawaii receives even more, with measured annually on Mount Waialeale, in Kauai. The Mojave Desert, in the southwest, is home to the driest locale in the U.S. Yuma, Arizona, has an average of of precipitation each year.Hereford, Richard, et al., Precipitation History of the Mojave Desert Region, 1893–2001, U.S. Geological Survey, Fact Sheet 117-03, URL accessed 13 June 2006.
In central portions of the U.S., tornadoes are more common than anywhere else on EarthNOVA, Tornado Heaven, Hunt for the Supertwister, URL accessed 15 June 2006. and touch down most commonly in the spring and summer. Deadly and destructive hurricanes occur almost every year along the Atlantic seaboard and the Gulf of Mexico. The Appalachian region and the Midwest experience the worst floods, though virtually no area in the U.S. is immune to flooding. The Southwest has the worst droughts; one is thought to have lasted over 500 years and to have hurt Ancestral Pueblo peoples.O'Connor, Jim E. and John E. Costa, Large Floods in the United States: Where They Happen and Why, U.S. Geological Survey Circular 1245, URL accessed 13 June 2006. The West is affected by large wildfires each year.
Natural disasters
The United States is affected by a variety of natural disasters yearly. Although drought is rare, it has occasionally caused major disruption, such as during the Dust Bowl (1931–1942). Farmland failed throughout the Plains, entire regions were virtually depopulated, and dust storms ravaged the land.
thumb|300px|A powerful tornado near Dimmitt, Texas on June 2, 1995.
Tornadoes and hurricanes
The Great Plains and Midwest, due to the contrasting air masses, sees frequent severe thunderstorms and tornado outbreaks during spring and summer with around 1,000 tornadoes occurring each year.NSSL: Severe Weather 101. Nssl.noaa.gov. Retrieved on 2013-07-29. The strip of land from north Texas north to Kansas and Nebraska and east into Tennessee is known as Tornado Alley, where many houses have tornado shelters and many towns have tornado sirens.
Hurricanes are another natural disaster found in the US, which can hit anywhere along the Gulf Coast or the Atlantic Coast as well as Hawaii in the Pacific Ocean. Particularly at risk are the central and southern Texas coasts, the area from southeastern Louisiana east to the Florida Panhandle, peninsular Florida, and the Outer Banks of North Carolina, although any portion of the coast could be struck.
Hurricane season runs from June 1 to November 30, with a peak from mid-August through early October. Some of the more devastating hurricanes have included the Galveston Hurricane of 1900, Hurricane Andrew in 1992, and Hurricane Katrina in 2005. The remnants of tropical cyclones from the Eastern Pacific also occasionally impact the western United States, bringing moderate to heavy rainfall.
thumb|300px|Total devastation in Gulfport, Mississippi caused by storm surge from Hurricane Katrina in 2005.
Flooding
Occasional severe flooding is experienced. Notable examples include the Great Mississippi Flood of 1927, the Great Flood of 1993, and the widespread flooding and mudslides caused by the 1982-1983 El Niño event in the western United States. Localized flooding can, however, occur anywhere, and mudslides from heavy rain can cause problems in any mountainous area, particularly the Southwest. Large stretches of desert shrub in the west can fuel the spread of wildfires. The narrow canyons of many mountain areas in the west and severe thunderstorm activity during the summer lead to sometimes devastating flash floods as well, while Nor'easter snowstorms can bring activity to a halt throughout the Northeast (although heavy snowstorms can occur almost anywhere).
Geologic
The West Coast of the continental United States and areas of Alaska (including the Aleutian Islands, the Alaska Peninsula, and the southern Alaskan coast) make up part of the Pacific Ring of Fire, an area of heavy tectonic and volcanic activity that is the source of 90% of the world's earthquakes. The American Northwest sees the highest concentration of active volcanoes in the United States, in Washington, Oregon, and northern California along the Cascade Mountains. There are several active volcanoes in the islands of Hawaii, including Kilauea, which has been erupting continuously since 1983, but they do not typically adversely affect the inhabitants of the islands. There has not been a major life-threatening eruption on the Hawaiian islands since the 17th century. Volcanic eruptions can occasionally be devastating, however, as in the 1980 eruption of Mount St. Helens in Washington.
The Ring of Fire makes California and southern Alaska particularly vulnerable to earthquakes, which can cause extensive damage, as in the 1906 San Francisco earthquake or the 1964 Good Friday earthquake near Anchorage, Alaska. California is well known for seismic activity and requires large structures to be earthquake-resistant to minimize loss of life and property. Aside from occasional devastating earthquakes, California experiences minor earthquakes on a regular basis.
There were about 100 significant earthquakes annually from 2010 to 2012, mainly in the central United States, compared with a past average of 21 a year. The increase is believed to be due to the deep disposal of wastewater from fracking. None has exceeded a magnitude of 5.6, and no one has been killed.
Other natural disasters
Other natural disasters include tsunamis around the Pacific Basin, mudslides in California, and forest fires in the western half of the contiguous U.S. Although drought is relatively rare, it has occasionally caused major economic and social disruption, such as during the Dust Bowl (1931–1942), which resulted in widespread crop failures and dust storms, beginning in the southern Great Plains and reaching to the Atlantic Ocean.
Public lands
The United States holds many areas for the use and enjoyment of the public. These include National Parks, National Monuments, National Forests, Wilderness areas, and other areas. For lists of areas, see the following articles:
List of National Parks of the United States
List of National Natural Landmarks
List of U.S. National Forests
List of U.S. Wilderness Areas
See also
County (United States)
Geographic centers of the United States
Geography of Puerto Rico
Geography of the Interior United States
Geography's impact on colonial America
List of extreme points of the United States
List of fjords of the United States
List of islands of the United States
List of North American deserts
List of U.S. government designations for places
Lists of landforms of the United States
Public Land Survey System
Territorial evolution of the United States
Regions:
East Coast of the United States
Historic regions of the United States
List of regions of the United States
West Coast of the United States
Mountains:
List of mountain peaks of the United States
List of mountains of the United States
Notes
Further reading
Brown, Ralph Hall, Historical Geography of the United States, New York: Harcourt, Brace, 1948
Stein, Mark, How the States Got Their Shapes, New York: Smithsonian Books/Collins, 2008. ISBN 978-0-06-143138-8
External links
USGS: Tapestry of Time and Terrain
United States Geological Survey - Maintains free aerial maps
National Atlas of the United States of America
| 32,019 | 2017-01 |
The Legend of Zelda: Twilight Princess | is an action-adventure game developed and published by Nintendo for the Wii and GameCube home video game consoles. It is the thirteenth installment in The Legend of Zelda series. Originally planned for release on the GameCube in November 2005, Twilight Princess was delayed by Nintendo to allow its developers to refine the game, add more content, and port it to the Wii. The Wii version was released alongside the console in North America in November 2006, and in Japan, Europe, and Australia the following month. The GameCube version was also released worldwide in December 2006 and was the final first-party game released for the console.
The story focuses on series protagonist Link, who tries to prevent Hyrule from being engulfed by a corrupted parallel dimension known as the Twilight Realm. To do so, he takes the form of both a Hylian and a wolf, and is assisted by a mysterious creature named Midna. The game takes place hundreds of years after Ocarina of Time and Majora's Mask, in an alternate timeline from The Wind Waker.
At the time of its release, Twilight Princess was critically acclaimed, receiving several Game of the Year awards. The game has sold 8.85 million copies worldwide, making it the best-selling title in the series. In 2011, the Wii version was rereleased under the Nintendo Selects label. A high-definition remaster for the Wii U, The Legend of Zelda: Twilight Princess HD, was released in March 2016.
Gameplay
thumb|left|An arrow points at an enemy whom Link is targeting as he prepares to swing his sword (GameCube version).|alt=A boy in a green tunic holds a shield while swinging his sword towards an enemy.
The Legend of Zelda: Twilight Princess is an action-adventure game focused on combat, exploration, and puzzle-solving. It uses the basic control scheme introduced in Ocarina of Time, including context-sensitive action buttons and L-targeting (Z-targeting on the Wii), a system that allows the player to keep Link's view focused on an enemy or important object while moving and attacking. Link can walk, run, and attack, and will automatically jump when running off of or reaching for a ledge. Link uses a sword and shield in combat, complemented with secondary weapons and items, including a bow and arrows, a boomerang, and bombs. While L-targeting, projectile-based weapons can be fired at a target without the need for manual aiming.
The context-sensitive button mechanic allows one button to serve a variety of functions, such as talking, opening doors, and pushing, pulling, and throwing objects. The on-screen display shows what action, if any, the button will trigger, determined by the situation. For example, if Link is holding a rock, the context-sensitive button will cause Link to throw the rock if he is moving or targeting an object or enemy, or place the rock on the ground if he is standing still.
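A minimal sketch of how such context-sensitive dispatch could be structured is shown below; the state fields and action labels are illustrative assumptions rather than anything taken from the game's actual code.

```c
/* Illustrative sketch only: one action button mapped to different behaviours
 * depending on a hypothetical player state. */
#include <stdio.h>
#include <stdbool.h>

typedef enum { HOLDING_NOTHING, HOLDING_ROCK } HeldItem;

typedef struct {
    HeldItem held;     /* what the character is currently carrying */
    bool moving;       /* is the player moving? */
    bool targeting;    /* is an object or enemy targeted? */
} PlayerState;

/* Returns the label the on-screen display would show for the action button. */
const char *context_action(const PlayerState *p) {
    if (p->held == HOLDING_ROCK)
        return (p->moving || p->targeting) ? "Throw" : "Set Down";
    if (p->targeting)
        return "Talk";   /* hypothetical case: facing an NPC */
    return "Open";       /* hypothetical case: standing in front of a door */
}

int main(void) {
    PlayerState link = { HOLDING_ROCK, true, false };
    printf("A button: %s\n", context_action(&link));   /* prints "Throw" */
    link.moving = false;
    printf("A button: %s\n", context_action(&link));   /* prints "Set Down" */
    return 0;
}
```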
The GameCube and Wii versions feature several minor differences in their controls. The Wii version of the game makes use of the motion sensors and built-in speaker of the Wii Remote. The speaker emits the sounds of a bowstring when shooting an arrow, Midna's laugh when she gives advice to Link, and the series' trademark "chime" when discovering secrets. The player controls Link's sword by swinging the Wii Remote. Other attacks are triggered using similar gestures with the Nunchuk. Unique to the GameCube version is the ability for the player to control the camera freely, without entering a special "lookaround" mode required by the Wii; however, in the GameCube version, only two of Link's secondary weapons can be equipped at a time, as opposed to four in the Wii version.
The game features nine dungeons—large, contained areas where Link battles enemies, collects items, and solves puzzles. Link navigates these dungeons and fights a boss at the end in order to obtain an item or otherwise advance the plot. The dungeons are connected by a large overworld, across which Link can travel on foot; on his horse, Epona; or by teleporting with Midna's assistance.
When Link enters the Twilight Realm, the void that corrupts parts of Hyrule, he transforms into a wolf. He is eventually able to transform between his Hylian and wolf forms at will. As a wolf, Link loses the ability to use his sword, shield, or any secondary items; he instead attacks by biting and defends primarily by dodging attacks. However, "Wolf Link" gains several key advantages in return—he moves faster than he does as a human (though riding Epona is still faster) and digs holes to create new passages and uncover buried items, and has improved senses, including the ability to follow scent trails. He also carries Midna, a small creature who gives him hints, uses an energy field to attack enemies, helps him jump long distances, and eventually allows him to "warp" to any of several preset locations throughout the overworld. Using Link's wolf senses, the player can see and listen to the wandering spirits of those affected by the Twilight, as well as hunt for enemy ghosts named Poes.
The artificial intelligence (AI) of enemies in Twilight Princess is more advanced than that of enemies in The Wind Waker. Enemies react to defeated companions and to arrows or slingshot pellets that pass by, and can detect Link from a greater distance than was possible in previous games.
There is very little voice acting in the game, as is the case in most The Legend of Zelda titles to date. Link remains silent in conversation, but grunts when attacking or injured and gasps when surprised. His emotions and responses are largely indicated visually by nods and facial expressions. Other characters have similar language-independent verbalizations, including laughter, surprised or fearful exclamations, and screams. Midna has the most voice acting—her on-screen dialogue is often accompanied by a babble of pseudo-speech, which was produced by scrambling English phrases sampled by Japanese voice actress Akiko Kōmoto.
Plot
Twilight Princess takes place several centuries after Ocarina of Time and Majora's Mask. The game begins with a youth named Link, who is working as a ranch hand in Ordon Village. One day, the village is attacked by Bulblins, who carry off the village's children with Link in pursuit before he encounters a wall of Twilight. A Shadow Beast pulls him beyond the wall into the Twilight-shrouded forest, where he is transformed into a wolf and imprisoned. Link is soon freed by an imp-like Twilight creature named Midna, who offers to help him if he obeys her unconditionally. She guides him to Princess Zelda, who explains that Zant, the King of the Twilight, infiltrated Hyrule Castle and forced her to surrender. The conquered kingdom was enveloped in Twilight, rendering all its inhabitants besides Link and Zelda spirits. In order to save Hyrule, Link must first revive the Light Spirits by entering the Twilight-covered regions and, as a wolf, recovering the Spirits' light from the Twilight beings that stole it. Once revitalized, each Spirit returns Link to his Hylian form.
During this time, Link also helps Midna acquire the Fused Shadows, fragments of a relic containing powerful dark magic. In return, she aids Link in rescuing Ordon Village's children and assisting the monkeys of Faron, the Gorons of Eldin, and the Zoras of Lanayru. After restoring the Light Spirits and obtaining the Fused Shadows, Link and Midna are ambushed by Zant, who relieves Midna of the fragments. She ridicules him for abusing his tribe's magic, but Zant reveals that his power comes from another source as he uses it to revert Link to his wolf state. Failing to seduce Midna into joining forces with him, Zant leaves her to die from the world's light. Upon bringing a dying Midna to Zelda, Link learns he needs the Master Sword to lift Zant's curse. Zelda sacrifices herself to heal Midna with her power before vanishing mysteriously. Moved by Zelda's selflessness, Midna begins to care more about Link and the fate of the light world.
After gaining the Master Sword, Link is cleansed of the curse that kept him in wolf form. Deep within the Gerudo Desert, Link and Midna locate the Mirror of Twilight, the only known gateway between Hyrule and the Twilight Realm, but discover it is broken. The Sages there explain that Zant tried to destroy it, but merely managed to shatter it into fragments; only the true ruler of the Twili can completely destroy the Mirror of Twilight. They also relate that they once used it to banish Ganondorf, the Gerudo tribe leader who attempted to steal the Triforce, to the Twilight Realm when executing him failed. Link and Midna set out to retrieve the missing shards of the Mirror. Once the portal has been restored, the Sages reveal to Link that Midna is the true ruler of the Twilight Realm, usurped by Zant when he cursed her into her current form. Confronting Zant, Link and Midna learn that Zant's coup was made possible when he forged a pact with Ganondorf, who asked for Zant's assistance in subjugating Hyrule. After Link defeats Zant, Midna recovers the Fused Shadows and destroys Zant after learning that only Ganondorf's death can release her from her curse.
Returning to Hyrule, Link and Midna find Ganondorf in Hyrule Castle, with a lifeless Zelda suspended above his head. Ganondorf fights Link by possessing Zelda's body and by transforming into a massive boar-like beast, but Link defeats him and Midna is able to resuscitate Zelda. Ganondorf then revives, and Midna teleports Link and Zelda outside the castle so she can hold him off with the Fused Shadows. However, as Hyrule Castle collapses, it is revealed that Ganondorf was victorious as he crushes Midna's helmet. Ganondorf engages Link on horseback; assisted by Zelda and the Light Spirits, Link eventually knocks Ganondorf off his horse and they duel on foot before Link strikes down Ganondorf and plunges the Master Sword into his chest. With Ganondorf dead, the Light Spirits revive Midna and restore her to her true form. After bidding farewell to Link and Zelda, Midna returns home and destroys the Mirror of Twilight with a tear to maintain balance between Hyrule and the Twilight Realm. As Hyrule Castle is rebuilt, Link leaves Ordon Village, heading to parts unknown.
Development
Creation
thumb|Eiji Aonuma, the director of Twilight Princess, at the 2007 Game Developers Conference
In 2003, Nintendo announced that a new The Legend of Zelda game was in the works for the GameCube by the same team that had created the cel-shaded The Wind Waker. At the following year's Game Developers Conference, director Eiji Aonuma unintentionally revealed that the new game was a sequel to The Wind Waker, in development under the working title The Wind Waker 2; it was set to use a similar graphical style to that of its predecessor. Nintendo of America told Aonuma that North American sales of The Wind Waker were sluggish because its cartoon appearance created the impression that the game was designed for a young audience. Concerned that the sequel would have the same problem, Aonuma expressed to producer Shigeru Miyamoto that he wanted to create a realistic Zelda game that would appeal to the North American market. Miyamoto, hesitant about solely changing the game's presentation, suggested the team's focus should instead be on coming up with gameplay innovations. He advised that Aonuma should start by doing what could not be done in Ocarina of Time, particularly horseback combat.
In four months, Aonuma's team managed to present realistic horseback riding, which Nintendo later revealed to the public with a trailer at Electronic Entertainment Expo 2004. The game was scheduled to be released the next year, and was no longer a follow-up to The Wind Waker; a true sequel to it was released for the Nintendo DS in 2007, in the form of Phantom Hourglass. Miyamoto explained in interviews that the graphical style was chosen to satisfy demand, and that it better fit the theme of an older incarnation of Link. The game runs on a modified The Wind Waker engine.
Prior Zelda games have employed a theme of two separate, yet connected, worlds. In A Link to the Past, Link travels between a "Light World" and a "Dark World"; in Ocarina of Time, as well as in Oracle of Ages, Link travels between two different time periods. The Zelda team sought to reuse this motif in the series' latest installment. It was suggested that Link transform into a wolf, much like he metamorphoses into a rabbit in the Dark World of A Link to the Past. The story of the game was created by Aonuma, and later underwent several changes by scenario writers Mitsuhiro Takano and Aya Kyogoku. Takano created the script for the story scenes, while Kyogoku and Takayuki Ikkaku handled the actual in-game script. Aonuma left his team working on the new idea while he directed The Minish Cap for the Game Boy Advance. When he returned, he found the Twilight Princess team struggling. Emphasis on the parallel worlds and the wolf transformation had made Link's character unbelievable. Aonuma also felt the gameplay lacked the caliber of innovation found in Phantom Hourglass, which was being developed with touch controls for the Nintendo DS. At the same time, the Wii was under development with the code name "Revolution". Miyamoto thought that the Revolution's pointing device, the Wii Remote, was well suited for aiming arrows in Zelda, and suggested that Aonuma consider using it.
Wii transition
Aonuma had anticipated creating a Zelda game for what would later be called the Wii, but had assumed that he would need to complete Twilight Princess first. His team began work developing a pointing-based interface for the bow and arrow, and Aonuma found that aiming directly at the screen gave the game a new feel, just like the DS control scheme for Phantom Hourglass. Aonuma felt confident this was the only way to proceed, but worried about consumers who had been anticipating a GameCube release. Developing two versions would mean delaying the previously announced 2005 release, still disappointing the consumer. Satoru Iwata felt that having both versions would satisfy users in the end, even though they would have to wait for the finished product. Aonuma then started working on both versions in parallel.
Transferring GameCube development to the Wii was relatively simple, since the Wii was being created to be compatible with the GameCube. At E3 2005, Nintendo released a small number of Nintendo DS game cards containing a preview trailer for Twilight Princess. They also announced that Zelda would appear on the Wii (then codenamed "Revolution"), but it was not clear to the media if this meant Twilight Princess or a different game.
The team worked on a Wii control scheme, adapting camera control and the fighting mechanics to the new interface. A prototype was created that used a swinging gesture to control the sword from a first-person viewpoint, but was unable to show the variety of Link's movements. When the third-person view was restored, Aonuma thought it felt strange to swing the Wii Remote with the right hand to control the sword in Link's left hand, so the entire Wii version map was mirrored. Details about Wii controls began to surface in December 2005 when British publication NGC Magazine claimed that when a GameCube copy of Twilight Princess was played on the Revolution, it would give the player the option of using the Revolution controller. Miyamoto confirmed the Revolution controller functionality in an interview with Nintendo of Europe, and Time reported this soon after. However, support for the Wii controller did not make it into the GameCube release. At E3 2006, Nintendo announced that both versions would be available at the Wii launch, and had a playable version of Twilight Princess for the Wii. Later, the GameCube release was pushed back to a month after the launch of the Wii.
Nintendo staff members reported that demo users complained about the difficulty of the control scheme. Aonuma realized that his team had implemented Wii controls under the mindset of "forcing" users to adapt, instead of making the system intuitive and easy to use. He began rethinking the controls with Miyamoto to focus on comfort and ease. The camera movement was reworked and item controls were changed to avoid accidental button presses. In addition, the new item system required use of the button that had previously been used for the sword. To solve this, sword controls were transferred back to gestures—something E3 attendees had commented they would like to see. This reintroduced the problem of using a right-handed swing to control a left-handed sword attack. The team did not have enough time before release to rework Link's character model, so they instead flipped the entire game—everything was made a mirror image. Link was now right-handed, and references to "east" and "west" were switched around. The GameCube version, however, was left with the original orientation. The Twilight Princess player's guide focuses on the Wii version, but has a section in the back with mirror-image maps for GameCube users.
Music
The game's score was composed by Toru Minegishi and Asuka Ohta, with series regular Koji Kondo serving as the sound supervisor. Minegishi took charge of composition and sound design in Twilight Princess, providing all field and dungeon music. For the trailers, three pieces were written by different composers, two of which were created by Mahito Yokota and Kondo. Michiru Ōshima created orchestral arrangements for the three compositions, later to be performed by an ensemble conducted by Yasuzo Takemoto. Kondo's piece was later chosen as music for the E3 2005 trailer and for the demo movie after the game's title screen.
Media requests at the trade show prompted Kondo to consider using orchestral music for the other tracks in the game as well, a notion reinforced by his preference for live instruments. He originally envisioned a full 50-person orchestra for action sequences and a string quartet for more "lyrical moments", though the final product used sequenced music instead. Kondo later cited the lack of interactivity that comes with orchestral music as one of the main reasons for the decision. Both six- and seven-track versions of the game's soundtrack were released on November 19, 2006, as part of a Nintendo Power promotion and bundled with replicas of the Master Sword and the Hylian Shield.
Technical issues
Following the discovery of a buffer overflow vulnerability in the Wii version of Twilight Princess, an exploit known as the "Twilight Hack" was developed, allowing the execution of custom code from a Secure Digital (SD) card on the console. A specifically designed save file would cause the game to load unsigned code, which could include Executable and Linkable Format (ELF) programs and homebrew Wii applications. Versions 3.3 and 3.4 of the Wii Menu prevented copying exploited save files onto the console until circumvention methods were discovered, and version 4.0 of the Wii Menu patched the vulnerability.
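As a rough illustration of the bug class described above, and not the game's actual code or the specific exploit, the following sketch shows how copying an attacker-controlled save-file string into a fixed-size buffer without a length check can overwrite adjacent stack memory, which is the general mechanism that allows execution to be redirected to unsigned code.

```c
/* Illustrative sketch only: a hypothetical save-file field and an unchecked copy. */
#include <stdio.h>
#include <string.h>

struct SaveSlot {
    char name[256];   /* attacker controls the contents and length of this field */
};

static void load_name(const char *file_data) {
    char name_buf[32];             /* fixed-size buffer on the stack */
    strcpy(name_buf, file_data);   /* no bounds check: data longer than 32 bytes
                                      overwrites adjacent stack memory, including the
                                      saved return address -- the general way execution
                                      can be redirected to unsigned code */
    printf("Loaded name: %s\n", name_buf);
}

int main(void) {
    struct SaveSlot slot;
    strcpy(slot.name, "Epona");    /* a well-formed save stays within bounds... */
    load_name(slot.name);          /* ...a crafted save would supply an overlong string */
    return 0;
}
```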
Wii U version
A high-definition remaster of the game, The Legend of Zelda: Twilight Princess HD, was developed by Tantalus Media for the Wii U. Announced during a Nintendo Direct presentation on November 12, 2015, it features enhanced graphics and Amiibo functionality. The game was released in North America and Europe on March 4, 2016; in Australia on March 5, 2016; and in Japan on March 10, 2016.
Certain bundles of the game contain a Wolf Link Amiibo figurine, which unlocks a Wii U-exclusive dungeon called the "Cave of Shadows" and can carry data over to The Legend of Zelda: Breath of the Wild. Other Zelda-related Amiibo figurines have distinct functions: Link and Toon Link replenish arrows, Zelda and Sheik restore Link's health, and Ganondorf causes Link to take twice as much damage. A CD containing 20 musical selections from the game was available as a GameStop preorder bonus in North America; it is included with the limited-edition bundle in other regions.
Reception
Reviews
Twilight Princess was released to universal critical acclaim and commercial success. It received perfect scores from major publications such as 1UP.com, Computer and Video Games, Electronic Gaming Monthly, Game Informer, GamesRadar, and GameSpy. On the review aggregator Metacritic, Twilight Princess holds scores of 95/100 for the Wii version and 96/100 for the GameCube version, indicating "universal acclaim". GameTrailers in their review called it one of the greatest games ever created.
On release, Twilight Princess was considered to be the greatest Zelda game ever made by many critics including writers for 1UP.com, Computer and Video Games, Electronic Gaming Monthly, Game Informer, GamesRadar, IGN and The Washington Post. Game Informer called it "so creative that it rivals the best that Hollywood has to offer". GamesRadar praised Twilight Princess as "a game that deserves nothing but the absolute highest recommendation". Cubed3 hailed Twilight Princess as "the single greatest videogame experience". The game's graphics were praised for their art style and animation, although the game was designed for the GameCube, which is technically lacking compared to next-generation consoles. Both IGN and GameSpy pointed out the existence of blurry textures and low-resolution characters. Despite these complaints, Computer and Video Games felt the game's atmosphere was superior to that of any previous Zelda game, and regarded Twilight Princess's Hyrule as the best version ever created. PALGN praised the game's cinematics, noting that "the cutscenes are the best ever in Zelda games". Regarding the Wii version, GameSpot's Jeff Gerstmann said the Wii controls felt "tacked-on", although 1UP.com said the remote-swinging sword attacks were "the most impressive in the entire series". Gaming Nexus considered the game's soundtrack to be the best of this generation, though IGN criticized its MIDI-formatted songs for lacking "the punch and crispness" of their orchestrated counterparts. Hyper's Javier Glickman commended the game for its "very long quests, superb Wii controls and being able to save anytime". However, he criticized it for "no voice acting, no orchestral score and slightly outdated graphics".
Awards
Twilight Princess received the awards for Best Artistic Design, Best Original Score, and Best Use of Sound from IGN for its GameCube version. Both IGN and Nintendo Power gave Twilight Princess the awards for Best Graphics and Best Story. Twilight Princess received Game of the Year awards from GameTrailers, 1UP.com, Electronic Gaming Monthly, Game Informer, GamesRadar, GameSpy, Spacey Awards, X-Play and Nintendo Power. It was also given awards for Best Adventure Game from the Game Critics Awards, X-Play, IGN, GameTrailers, 1UP.com, and Nintendo Power. The game was considered the Best Console Game by the Game Critics Awards and GameSpy. The game placed 16th in Official Nintendo Magazine's list of the 100 Greatest Nintendo Games of All Time. IGN ranked the game as the 4th-best Wii game. Nintendo Power ranked the game as the third-best game to be released on a Nintendo system in the 2000s decade.
Sales and legacy
During its first week, the game was sold with three of every four Wii purchases. The game had sold 5.82 million copies on the Wii and 1.32 million on the GameCube; in total, it has sold 8.85 million copies worldwide on both platforms, making it the best-selling installment in the series.
A manga series based on Twilight Princess, penned and illustrated by Akira Himekawa, was first released in Japan on February 8, 2016. The series is available solely via publisher Shogakukan's MangaOne mobile application. While the manga adaptation began almost ten years after the initial release of the game on which it is based, it launched only a month before the release of the high-definition remake. An English localization by Viz Media is being produced for release in the West.
To commemorate the launch of the My Nintendo loyalty program in March 2016, Nintendo released My Nintendo Picross: The Legend of Zelda: Twilight Princess, a Picross puzzle game developed by Jupiter for download to the Nintendo 3DS.
See also
Link's Crossbow Training, a 2007 shooting video game created for the Wii Zapper, using the world and assets of Twilight Princess
Notes
References
General references
Inline citations
External links
Category:2006 video games
Category:Action-adventure games
Category:Curses in video games
Twilight Princess
Category:Nintendo Entertainment Analysis and Development games
Category:GameCube games
Category:Nintendo games
Category:Open world video games
Category:Shapeshifting in fiction
Category:Single-player-only video games
Category:Spirit possession in fiction
Category:Video games featuring female protagonists
Category:Video games featuring non-playable protagonists
Category:Video games featuring parallel universes
Category:Werewolf video games
Category:Wii games | 1,610,197 | 2017-01 |
Kanye West | Kanye Omari West (born June 8, 1977)West says "Kanye" twice in the song Diamonds from Sierra Leone, at 2'57" and 3'36" in this recording. is an American rapper, songwriter, record producer, fashion designer, and entrepreneur. Born in Atlanta and raised in Chicago, West first became known as a producer for Roc-A-Fella Records in the early 2000s, producing hit singles for artists such as Jay Z and Alicia Keys. Intent on pursuing a solo career as a rapper, West released his debut album The College Dropout in 2004 to widespread critical and commercial success, and founded the record label GOOD Music. He went on to pursue a variety of different styles on subsequent albums Late Registration (2005), Graduation (2007), and 808s & Heartbreak (2008). In 2010, he released his fifth album My Beautiful Dark Twisted Fantasy to rave reviews from critics, and the following year he collaborated with Jay Z on the joint LP Watch the Throne (2011). West released his abrasive sixth album, Yeezus, to further critical praise in 2013. His seventh album, The Life of Pablo, was released in 2016.
West's outspoken views and life outside of music have received significant mainstream attention. He has been a frequent source of controversy for his conduct at award shows, on social media, and in other public settings. His more scrutinized comments include his off-script denunciation of President George W. Bush during a live 2005 television broadcast for Hurricane Katrina relief and his interruption of singer Taylor Swift at the 2009 MTV Video Music Awards. West's efforts as a fashion designer include collaborations with Nike, Louis Vuitton, and A.P.C. on both clothing and footwear, and have most prominently resulted in the YEEZY collaboration with Adidas beginning in 2013. He is the founder and head of the creative content company DONDA. His 2014 marriage to television personality Kim Kardashian has also been subject to widespread media coverage.
West is among the most acclaimed musicians of the 21st century,Rucker, CJ. "How We Heard Kanye West's 'The Life Of Pablo'….So Far". Hypetrak. HB Network. Retrieved 22 February 2016. and is one of the best-selling artists of all time, having sold more than 32 million albums and 100 million digital downloads worldwide. He has won a total of 21 Grammy Awards, making him one of the most awarded artists of all time and the most Grammy-awarded artist to have debuted in the 21st century. Three of his albums have been included and ranked on Rolling Stone's 2012 update of the "500 Greatest Albums of All Time" list. He has also been included in a number of Forbes annual lists. Time named him one of the 100 most influential people in the world in 2005 and 2015.
Early life
West was born on June 8, 1977 in Atlanta, Georgia. His parents divorced when he was three years old. After the divorce, he and his mother moved to Chicago, Illinois. His father, Ray West, is a former Black Panther and was one of the first black photojournalists at The Atlanta Journal-Constitution. Ray West was later a Christian counselor, and in 2006, opened the Good Water Store and Café in Lexington Park, Maryland with startup capital from his son. West's mother, Dr. Donda C. (Williams) West, was a professor of English at Clark Atlanta University, and the Chair of the English Department at Chicago State University before retiring to serve as his manager. West was raised in a middle-class background, attending Polaris High School in suburban Oak Lawn, Illinois after living in Chicago.
At the age of 10, West moved with his mother to Nanjing, China, where she was teaching at Nanjing University as part of an exchange program. According to his mother, West was the only foreigner in his class, but settled in well and quickly picked up the language, although he has since forgotten most of it. When asked about his grades in high school, West replied, "I got A's and B's. And I'm not even frontin'."
West demonstrated an affinity for the arts at an early age; he began writing poetry when he was five years old. His mother recalled that she first took notice of West's passion for drawing and music when he was in the third grade.West, Donda, p. 105 Growing up in Chicago, West became deeply involved in its hip hop scene. He started rapping in the third grade and began making musical compositions in the seventh grade, eventually selling them to other artists. At age thirteen, West wrote a rap song called "Green Eggs and Ham" and began to persuade his mother to pay $25 an hour for time in a recording studio. It was a small, crude basement studio where a microphone hung from the ceiling by a wire clothes hanger. Although this wasn't what West's mother wanted, she nonetheless supported him. West crossed paths with producer/DJ No I.D., with whom he quickly formed a close friendship. No I.D. soon became West's mentor, and it was from him that West learned how to sample and program beats after he received his first sampler at age 15.Hess, p. 557
After graduating from high school, West received a scholarship to attend Chicago's American Academy of Art in 1997 and began taking painting classes, but shortly after transferred to Chicago State University to study English. He soon realized that his busy class schedule was detrimental to his musical work, and at 20 he dropped out of college to pursue his musical dreams.West, Donda, p. 106 This action greatly displeased his mother, who was also a professor at the university. She later commented, "It was drummed into my head that college is the ticket to a good life... but some career goals don't require college. For Kanye to make an album called College Dropout it was more about having the guts to embrace who you are, rather than following the path society has carved out for you."Hess, p. 558
Career
1996–2002: Early work and Roc-A-Fella Records
Kanye West began his early production career in the mid-1990s, making beats primarily for burgeoning local artists, eventually developing a style that involved speeding up vocal samples from classic soul records. His first official production credits came at the age of nineteen when he produced eight tracks on Down to Earth, the 1996 debut album of a Chicago rapper named Grav. For a time, West acted as a ghost producer for Deric "D-Dot" Angelettie. Because of his association with D-Dot, West wasn't able to release a solo album, so he formed and became a member and producer of the Go-Getters, a late-1990s Chicago rap group composed of him, GLC, Timmy G, Really Doe, and Arrowstar. His group was managed by John "Monopoly" Johnson, Don Crowley, and Happy Lewis under the management firm Hustle Period. After attending a series of promotional photo shoots and making some radio appearances, The Go-Getters released their first and only studio album World Record Holders in 1999. The album featured other Chicago-based rappers such as Rhymefest, Mikkey Halsted, Miss Criss, and Shayla G. Meanwhile, the production was handled by West, Arrowstar, Boogz, and Brian "All Day" Miller.
West spent much of the late 1990s producing records for a number of well-known artists and music groups. The third song on Foxy Brown's second studio album Chyna Doll was produced by West. Her second effort subsequently became the very first hip-hop album by a female rapper to debut at the top of the U.S. Billboard 200 chart in its first week of release. West produced three of the tracks on Harlem World's first and only album The Movement alongside Jermaine Dupri and the production duo Trackmasters. His songs featured rappers Nas, Drag-On, and R&B singer Carl Thomas. The ninth track from World Party, the last Goodie Mob album to feature the rap group's four founding members prior to their break-up, was co-produced by West with his manager Deric "D-Dot" Angelettie. At the close of the millennium, West ended up producing six songs for Tell 'Em Why U Madd, an album that was released by D-Dot under the alias of The Madd Rapper; a fictional character he created for a skit on The Notorious B.I.G.'s second and final studio album Life After Death. West's songs featured guest appearances from rappers such as Ma$e, Raekwon, and Eminem.
thumb|right|255px|West received early acclaim for his production work on Jay-Z's The Blueprint; the two are pictured here in 2011.
West got his big break in the year 2000, when he began to produce for artists on Roc-A-Fella Records. West came to achieve recognition and is often credited with revitalizing Jay-Z's career with his contributions to the rap mogul's influential 2001 album The Blueprint. The Blueprint is consistently ranked among the greatest hip-hop albums, and the critical and financial success of the album generated substantial interest in West as a producer. Serving as an in-house producer for Roc-A-Fella Records, West produced records for other artists from the label, including Beanie Sigel, Freeway, and Cam'ron. He also crafted hit songs for Ludacris, Alicia Keys, and Janet Jackson.Mitchum, Rob. Review: The College Dropout. Pitchfork Media. Retrieved July 23, 2009.Kellman, Andy. The College Dropout. AllMusic. All Music Guide. Retrieved August 25, 2011Serpick, Evan. Kanye West. Rolling Stone Jann Wenner. Retrieved December 26, 2009.
Despite his success as a producer, West's true aspiration was to be a rapper. Though he had developed his rapping long before he began producing, it was often a challenge for West to be accepted as a rapper, and he struggled to attain a record deal. Multiple record companies ignored him because he did not portray the 'gangsta image' prominent in mainstream hip hop at the time.Hess, p. 556 After a series of meetings with Capitol Records, West was ultimately denied an artist deal.
According to Joe Weinberger, an A&R representative at Capitol Records, he was approached by West and almost signed a deal with him, but another person in the company convinced Capitol's president not to. Desperate to keep West from defecting to another label, then-label head Damon Dash reluctantly signed West to Roc-A-Fella Records. Jay-Z later admitted that Roc-A-Fella was initially reluctant to support West as a rapper, claiming that many saw him as a producer first and foremost, and that his background contrasted with that of his labelmates.
West's breakthrough came a year later on October 23, 2002, when, while driving home from a California recording studio after working late, he fell asleep at the wheel and was involved in a near-fatal car crash. The crash left him with a shattered jaw, which had to be wired shut in reconstructive surgery. The accident inspired West; two weeks after being admitted to the hospital, he recorded a song at the Record Plant Studios with his jaw still wired shut.Kearney, Kevin (September 30, 2005). Rapper Kanye West on the cover of Time: Will rap music shed its "gangster" disguise?. World Socialist Web Site. Retrieved September 23, 2007. The composition, "Through The Wire", expressed West's experience after the accident, and helped lay the foundation for his debut album, as according to West "all the better artists have expressed what they were going through".Davis, Kimberly. "The Many Faces of Kanye West" (June 2004) Ebony. West added that "the album was my medicine", as working on the record distracted him from the pain.Davis, Kimberly. "Kanye West: Hip Hop's New Big Shot" (April 2005) Ebony. "Through The Wire" was first available on West's Get Well Soon... mixtape, released December 2002. At the same time, West announced that he was working on an album called The College Dropout, whose overall theme was to "make your own decisions. Don't let society tell you, 'This is what you have to do.'"Reid, Shaheem (December 10, 2002). "Kanye West Raps Through His Broken Jaw, Lays Beats For Scarface, Ludacris". MTV. Retrieved October 23, 2007.
2003–06: The College Dropout and Late Registration
thumb|200px|left|West performing in Portland in December 2005.
Carrying a Louis Vuitton backpack filled with old disks and demos to the studio and back, West crafted much of his production for his debut album in less than fifteen minutes at a time. He recorded the remainder of the album in Los Angeles while recovering from the car accident. Once he had completed the album, it was leaked months before its release date. However, West decided to use the opportunity to review the album, and The College Dropout was significantly remixed, remastered, and revised before being released. As a result, certain tracks originally destined for the album were subsequently retracted, among them "Keep the Receipt" with Ol' Dirty Bastard and "The Good, the Bad, and the Ugly" with Consequence. West meticulously refined the production, adding string arrangements, gospel choirs, improved drum programming and new verses. West's perfectionism led The College Dropout to have its release postponed three times from its initial date in August 2003.
The College Dropout was eventually issued by Roc-A-Fella in February 2004, shooting to number two on the Billboard 200 as his debut single, "Through the Wire" peaked at number fifteen on the Billboard Hot 100 chart for five weeks.Kanye West – Through the Wire – Music Charts. aCharts.us. Retrieved July 3, 2010. "Slow Jamz", his second single featuring Twista and Jamie Foxx, became an even bigger success: it became the three musicians' first number one hit. The College Dropout received near-universal critical acclaim from contemporary music critics, was voted the top album of the year by two major music publications, and has consistently been ranked among the great hip-hop works and debut albums by artists. "Jesus Walks", the album's fourth single, perhaps exposed West to a wider audience; the song's subject matter concerns faith and Christianity. The song nevertheless reached the top 20 of the Billboard pop charts, despite industry executives' predictions that a song containing such blatant declarations of faith would never make it to radio. The College Dropout would eventually be certified triple platinum in the US, and garnered West 10 Grammy nominations, including Album of the Year, and Best Rap Album (which it received). During this period, West also founded GOOD Music, a record label and management company that would go on to house affiliate artists and producers, such as No I.D. and John Legend. At the time, the focal point of West's production style was the use of sped-up vocal samples from soul records.Sheffield, Rob (November 22, 2010). Review: My Beautiful Dark Twisted Fantasy. Rolling Stone. Retrieved November 11, 2010. However, partly because of the acclaim of The College Dropout, such sampling had been much copied by others; with that overuse, and also because West felt he had become too dependent on the technique, he decided to find a new sound.Brown, p. 121 During this time, he also produced singles for Brandy, Common, John Legend, and Slum Village.
thumb|right|188px|West performing in 2008.
Beginning his second effort that fall, West would invest two million dollars and take over a year to craft his second album.Brown, p. 120 West was significantly inspired by Roseland NYC Live, a 1998 live album by English trip hop group Portishead, produced with the New York Philharmonic Orchestra. Early in his career, the live album had inspired him to incorporate string arrangements into his hip-hop production. Though West had not been able to afford many live instruments around the time of his debut album, the money from his commercial success enabled him to hire a string orchestra for his second album Late Registration. West collaborated with American film score composer Jon Brion, who served as the album's co-executive producer for several tracks.Perez, Rodrigo (August 12, 2005). "Kanye's Co-Pilot, Jon Brion, Talks About The Making Of Late Registration". MTV. Viacom. Retrieved March 2, 2006. Although Brion had no prior experience in creating hip-hop records, he and West found that they could productively work together after their first afternoon in the studio where they discovered that neither confined his musical knowledge and vision to one specific genre.Brown, p. 124 Late Registration sold over 2.3 million units in the United States alone by the end of 2005 and was considered by industry observers as the only successful major album release of the fall season, which had been plagued by steadily declining CD sales.
While West had encountered controversy a year prior when he stormed out of the American Music Awards of 2004 after losing Best New Artist, the rapper's first large-scale controversy came just days following Late Registration's release, during a benefit concert for Hurricane Katrina victims. In September 2005, NBC broadcast A Concert for Hurricane Relief, and West was a featured speaker. When West was presenting alongside actor Mike Myers, he deviated from the prepared script. Myers spoke next and continued to read the script. Once it was West's turn to speak again, he said, "George Bush doesn't care about black people." West's comment reached much of the United States, leading to mixed reactions; President Bush would later call it one of the most "disgusting moments" of his presidency.Itzkoff, Dave, "UPDATED: Kanye West Criticizes 'Today' Show for 'Brutal' Interview", The New York Times Arts Beat blog, November 10, 2010, 2:25 pm. Retrieved November 10, 2010. West raised further controversy in January 2006 when he posed on the cover of Rolling Stone wearing a crown of thorns.
2007–09: Graduation, 808s & Heartbreak, and VMAs controversy
Fresh off spending the previous year touring the world with U2 on their Vertigo Tour, West felt inspired to compose anthemic rap songs that could operate more efficiently in large arenas. To this end, West incorporated the synthesizer into his hip-hop production, utilized slower tempos, and experimented with electronic music influenced by the music of the 1980s. In addition to U2, West drew musical inspiration from arena rock bands such as The Rolling Stones and Led Zeppelin in terms of melody and chord progression. To make his next effort, the third in a planned tetralogy of education-themed studio albums, more introspective and personal in lyricism, West listened to folk and country singer-songwriters Bob Dylan and Johnny Cash in hopes of developing methods to augment his wordplay and storytelling ability.
thumb|260px|left|West working in the studio in 2008, accompanied by mentor No I.D. (left).
West's third studio album, Graduation, garnered major publicity when its release date pitted West in a sales competition against rapper 50 Cent's Curtis.Reid, Shaheem. 50 Cent Or Kanye West, Who Will Win? Nas, Timbaland, More Share Their Predictions. MTV. Viacom. Retrieved December 24, 2009. Upon their September 2007 releases, Graduation outsold Curtis by a large margin, debuting at number one on the U.S. Billboard 200 chart and selling 957,000 copies in its first week. Graduation once again continued the string of critical and commercial successes by West, and the album's lead single, "Stronger", garnered the rapper his third number-one hit. "Stronger", which samples French house duo Daft Punk, has been accredited to not only encouraging other hip-hop artists to incorporate house and electronica elements into their music, but also for playing a part in the revival of disco and electro-infused music in the late 2000s. Ben Detrick of XXL cited the outcome of the sales competition between 50 Cent's Curtis and West's Graduation as being responsible for altering the direction of hip-hop and paving the way for new rappers who didn't follow the hardcore-gangster mold, writing, "If there was ever a watershed moment to indicate hip-hop's changing direction, it may have come when 50 Cent competed with Kanye in 2007 to see whose album would claim superior sales."
West's life took a different direction when his mother, Donda West, died of complications from cosmetic surgery involving abdominoplasty and breast reduction in November 2007. Months later, West and fiancée Alexis Phifer ended their engagement and their long-term intermittent relationship, which had begun in 2002. The events profoundly affected West, who set off for his 2008 Glow in the Dark Tour shortly thereafter. Purportedly because his emotions could not be conveyed through rapping, West decided to sing using the voice audio processor Auto-Tune, which would become a central part of his next effort. West had previously experimented with the technology on his debut album The College Dropout for the background vocals of "Jesus Walks" and "Never Let Me Down." Recorded mostly in Honolulu, Hawaii in three weeks, West announced his fourth album, 808s & Heartbreak, at the 2008 MTV Video Music Awards, where he performed its lead single, "Love Lockdown". Music audiences were taken aback by the uncharacteristic production style and the presence of Auto-Tune, which typified the pre-release response to the record.
thumb|270px|West performing in August 2008 on the Glow in the Dark Tour.
808s & Heartbreak, which features extensive use of the eponymous Roland TR-808 drum machine and contains themes of love, loneliness, and heartache, was released by Island Def Jam to capitalize on Thanksgiving weekend in November 2008. Reviews were positive, though slightly more mixed than his previous efforts. Despite this, the record's singles demonstrated outstanding chart performances. Upon its release, the lead single "Love Lockdown" debuted at number three on the Billboard Hot 100 and became a "Hot Shot Debut", while follow-up single "Heartless" performed similarly and became his second consecutive "Hot Shot Debut" by debuting at number four on the Billboard Hot 100.Heartless: Hot 100 Charts. Billboard. Retrieved April 20, 2009. While it was criticized prior to release, 808s & Heartbreak had a significant effect on hip-hop music, encouraging other rappers to take more creative risks with their productions.
West's incident at the 2009 MTV Video Music Awards the following year was arguably his biggest controversy, and led to widespread outrage throughout the music industry. During the ceremony, West crashed the stage and grabbed the microphone from winner Taylor Swift in order to proclaim that, instead, Beyoncé's video for "Single Ladies (Put a Ring on It)", nominated for the same award, was "one of the best videos of all time". He was subsequently withdrawn from the remainder of the show for his actions. West's tour with Lady Gaga was cancelled in response to the controversy.
2010–12: My Beautiful Dark Twisted Fantasy and collaborations
Following the highly publicized incident, West took a brief break from music and threw himself into fashion, only to hole up in Hawaii for the next few months writing and recording his next album. Importing his favorite producers and artists to work on and inspire his recording, West kept engineers behind the boards 24 hours a day and slept only in increments. Noah Callahan-Bever, a writer for Complex, was present during the sessions and described the "communal" atmosphere as thus: "With the right songs and the right album, he can overcome any and all controversy, and we are here to contribute, challenge, and inspire."Callahan-Bever, Noah (November 2010). Kanye West: Project Runaway. Complex. Retrieved November 30, 2010. A variety of artists contributed to the project, including close friends Jay-Z, Kid Cudi and Pusha T, as well as off-the-wall collaborations, such as with Justin Vernon of Bon Iver.Hermes, Will (October 25, 2010). Lost in the World by Kanye West feat. Bon Iver and Gil Scott-Heron | Rolling Stone Music. Rolling Stone. Retrieved May 2, 2011.
thumb|right|West performing at Lollapalooza in 2011.
My Beautiful Dark Twisted Fantasy, West's fifth studio album, was released in November 2010 to widespread acclaim from critics, many of whom considered it his best work and said it solidified his comeback. In stark contrast to his previous effort, which featured a minimalist sound, Dark Fantasy adopts a maximalist philosophy and deals with themes of celebrity and excess. The record included the international hit "All of the Lights", and Billboard hits "Power", "Monster", and "Runaway", the latter of which accompanied a 35-minute film of the same name.[ Kanye West Album & Song Chart History – Hot 100]. Billboard (magazine). Retrieved November 30, 2010. During this time, West initiated the free music program GOOD Fridays through his website, offering a free download of previously unreleased songs each Friday, a portion of which were included on the album. This promotion ran from August 20 – December 17, 2010. Dark Fantasy went on to go platinum in the United States, but its omission as a contender for Album of the Year at the 54th Grammy Awards was viewed as a "snub" by several media outlets.
Following a headlining set at Coachella 2011 that was described by The Hollywood Reporter as "one of the greatest hip-hop sets of all time", West released the collaborative album Watch the Throne with Jay-Z. By employing a sales strategy that released the album digitally weeks before its physical counterpart, Watch the Throne became one of the few major label albums in the Internet age to avoid a leak. "Niggas in Paris" became the record's highest charting single, peaking at number five on the Billboard Hot 100. In 2012, West released the compilation album Cruel Summer, a collection of tracks by artists from West's record label GOOD Music. Cruel Summer produced four singles, two of which charted within the top twenty of the Hot 100: "Mercy" and "Clique". West also directed a film of the same name that premiered at the 2012 Cannes Film Festival in a custom pyramid-shaped screening pavilion featuring seven screens.
2013–15: Yeezus and Adidas collaboration
Sessions for West's sixth solo effort began to take shape in early 2013 in the living room of his personal loft at a Paris hotel. Determined to "undermine the commercial", he once again brought together close collaborators and attempted to incorporate Chicago drill, dancehall, acid house, and industrial music. Primarily inspired by architecture, West's perfectionist tendencies led him to contact producer Rick Rubin fifteen days shy of the album's due date to strip down the record's sound in favor of a more minimalist approach. Initial promotion of his sixth album included worldwide video projections of the album's music and live television performances.
Yeezus, West's sixth album, was released June 18, 2013 to rave reviews from critics. It became the rapper's sixth consecutive number one debut, but also marked his lowest solo opening week sales. Def Jam issued "Black Skinhead" to radio in July 2013 as the album's lead single.
On September 6, 2013, Kanye West announced he would be headlining his first solo tour in five years, to support Yeezus, with fellow American rapper Kendrick Lamar accompanying him as supporting act. The tour was met with rave reviews from critics. Rolling Stone described it as "crazily entertaining, hugely ambitious, emotionally affecting (really!) and, most importantly, totally bonkers." Writing for Forbes, Zack O'Malley Greenburg praised West for "taking risks that few pop stars, if any, are willing to take in today's hyper-exposed world of pop," describing the show as "overwrought and uncomfortable at times, but [it] excels at challenging norms and provoking thought in a way that just isn't common for mainstream musical acts of late."
thumb|left|275px|West performing on the Yeezus Tour in 2013.
In June 2013, West and television personality Kim Kardashian announced the birth of their first child, North. In October 2013, West and Kardashian announced their engagement to widespread media attention.Marcus, Stephanie. "Kim Kardashian, Kanye West Are Married In Over-The-Top Wedding In Florence (UPDATED)." Huffington Post. 24 May 2014. In November 2013, West stated that he was beginning work on his next studio album, hoping to release it by mid-2014,Jackson, Reed (November 25, 2013). "Kanye West Hopes To Have New Album Out By Summer". XXL. with production by Rick Rubin and Q-Tip. In December 2013, Adidas announced the beginning of their official apparel collaboration with West, to be premiered the following year. In May 2014, West and Kardashian were married in a private ceremony in Florence, Italy, with a variety of artists and celebrities in attendance. West released a single, "Only One", featuring Paul McCartney, on December 31, 2014.
"FourFiveSeconds", a single jointly produced with Rihanna and McCartney, was released in January 2015. West also appeared on the Saturday Night Live 40th Anniversary Special, where he premiered a new song entitled "Wolves", featuring Sia Furler and fellow Chicago rapper, Vic Mensa. In February 2015, West premiered his clothing collaboration with Adidas, entitled Yeezy Season 1, to generally positive reviews. This would include West's Yeezy Boost sneakers. In March 2015, West released the single "All Day" featuring Theophilus London, Allan Kingdom and Paul McCartney. West performed the song at the 2015 BRIT Awards with a number of US rappers and UK grime MC's including: Skepta, Wiley, Novelist, Fekky, Krept & Konan, Stormzy, Allan Kingdom, Theophilus London and Vic Mensa. He would premiere the second iteration of his clothing line, Yeezy Season 2, in September 2015 at New York Fashion Week.
Having initially announced a new album entitled So Help Me God slated for a 2014 release, in March 2015 West announced that the album would instead be tentatively called SWISH. Later that month, West was awarded an honorary doctorate by the School of the Art Institute of Chicago for his contributions to music, fashion, and popular culture, officially making him an honorary DFA. The next month, West headlined at the Glastonbury Festival in the UK, despite a petition signed by almost 135,000 people against his appearance. Toward the end of the set, West proclaimed himself: "the greatest living rock star on the planet." Media outlets, including social media sites such as Twitter, were divided on his performance. NME stated, "The decision to book West for the slot has proved controversial since its announcement, and the show itself appeared to polarise both Glastonbury goers and those who tuned in to watch on their TVs." The publication added that "he's letting his music speak for and prove itself." The Guardian said that "his set has a potent ferocity – but there are gaps and stutters, and he cuts a strangely lone figure in front of the vast crowd." In December 2015, West released a song titled "Facts".
2016–present: The Life of Pablo and Turbo Grafx 16
West announced in January 2016 that SWISH would be released on February 11, and that month released the new song "Real Friends" and a snippet of "No More Parties in L.A." with Kendrick Lamar. This also revived the GOOD Fridays initiative, in which West released new songs every Friday. On January 26, 2016, West revealed he had renamed the album from SWISH to Waves, and also announced the premiere of his Yeezy Season 3 clothing line at Madison Square Garden. In the weeks leading up to the album's release, West became embroiled in several Twitter controversies and released several changing iterations of the track list for the new album. Several days ahead of its release, West again changed the title, this time to The Life of Pablo. On February 11, West premiered the album at Madison Square Garden as part of the presentation of his Yeezy Season 3 clothing line.Phillips, Amy (February 11, 2016). "Kanye West New Album The Life Of Pablo Debut Live Stream: Watch It Here". Pitchfork. Retrieved February 11, 2016. Following the preview, West announced that he would be modifying the track list once more before its release to the public,West, Kanye (February 12, 2016). "The album is being mastered and will be out today… added on a couple of tracks…". Twitter. Retrieved February 16, 2016. and further delayed its release to finalize the recording of the track "Waves" at the behest of co-writer Chance the Rapper. He released the album exclusively on Tidal on February 14, 2016, following a performance on SNL. Following its official streaming release, West continued to tinker with mixes of several tracks, describing the work as "a living breathing changing creative expression" and proclaiming the end of the album as a dominant release form. Although a statement by West around The Life of Pablo's initial release indicated that the album would be a permanent exclusive to Tidal, the album was released through several other competing services starting in April.
thumb|left|250px|West performing during the Saint Pablo Tour in 2016.
On February 24, 2016, West stated on Twitter that he was planning to release another album in the summer of 2016, tentatively called Turbo Grafx 16 in reference to the 1990s video game console of the same name. On June 3, 2016, West premiered "Champions", the first single from the GOOD Music album Cruel Winter; the six-minute track featured Travis Scott, Big Sean, Gucci Mane, Desiigner, Yo Gotti, Quavo, and 2 Chainz. He told the radio host Big Boy that the beat had been in the works for a year and a half. In June, West released a controversial video for "Famous", which depicted wax figures of several celebrities (including West, Kardashian, Taylor Swift, president and businessman Donald Trump, comedian Bill Cosby, and former president George W. Bush) sleeping nude in a shared bed. In August 2016, West embarked on the Saint Pablo Tour in support of The Life of Pablo. The performances featured a mobile stage suspended from the ceiling. West postponed several dates in October following the Paris robbery of his wife, Kim Kardashian. On November 21, 2016, West cancelled the remaining 21 dates on the Saint Pablo Tour following a week of no-shows, curtailed concerts and rants about politics. He was later admitted for psychiatric observation at UCLA Medical Center and remained hospitalized over the Thanksgiving weekend with a temporary psychosis brought on by sleep deprivation and extreme dehydration.
Artistry
West's musical career has been defined by frequent stylistic shifts and different musical approaches. Asked about his early musical inspirations, he named artists such as A Tribe Called Quest, Stevie Wonder, Michael Jackson, George Michael, LL Cool J, Phil Collins and Madonna."Kanye West Interviewed". Clash Music. 12 April 2008. Other music figures West has invoked as inspirations include Puff Daddy,Heisler, Yoni. "Kanye West is on another epic Twitter rant, says his album will 'never be on Apple'". BGR. 15 February 2016. David Bowie, Miles Davis and Gil Scott-Heron. West was formatively mentored by Chicago producer No I.D., who introduced him to hip hop production in the early 1990s, allowing a teenage West to sit in on recording sessions. Early in his career, West pioneered a style of production dubbed "chipmunk soul",Bailey, Julius. The Cultural Impact of Kanye West. which utilized pitched-up vocal samples, usually from soul and R&B songs, along with his own drums and instrumentation. His first major release featuring his trademark soulful vocal sampling style was "This Can't Be Life", a track from Jay-Z's The Dynasty: Roc La Familia. West has cited Wu-Tang Clan producer RZA as an influence on his style.
thumb|270px|right|West performing backed by an orchestral section in 2007.
West further developed his style on his 2004 debut album, The College Dropout. After a rough version was leaked, West meticulously refined the production, adding string arrangements, gospel choirs, and improved drum programming. The album saw West diverge from the then-dominant gangster persona in hip hop in favor of more diverse, topical lyrical subjects, including higher education, materialism, self-consciousness, minimum-wage labor, institutional prejudice, family, sexuality, and his personal struggles in the music industry.Love, Josh. Review: The College Dropout. Stylus Magazine. Retrieved on July 23, 2009. For his second album, Late Registration (2005), he collaborated with film score composer Jon Brion and drew on non-rap sources such as English trip hop group Portishead. Blending West's primary soulful hip hop production with Brion's elaborate chamber pop orchestration, the album experimentally incorporated a wide array of different genres and prominent orchestral elements, including string arrangements, piano chords, brass flecks, and horn riffs, amid a myriad of foreign and vintage instruments.Christgau, Robert. (August 30, 2005). "Growing by Degrees – Kanye West adds new subtlety, complexity, and Jon Brion to the idea of sophmoric". The Village Voice. Retrieved October 6, 2009. Critic Robert Christgau wrote that "there's never been hip-hop so complex and subtle musically." With his third album, Graduation (2007), West moved away from the soulful sound of his previous releases and towards a more atmospheric, rock-tinged, electronic-influenced style, drawing on European Britpop and Euro-disco, American alternative and indie rock, and his native Chicago house.Pytlik, Mark. 2007-09-11. Review: Graduation. Pitchfork Media. Retrieved on 2009-10-06. West retracted much of the live instrumentation that characterized his previous album and replaced it with distorted, gothic synthesizers, rave stabs, house beats, electro-disco rhythms, and a wide array of modulated electronic noises and digital audio effects. In addition, West drew musical inspiration from arena rock bands such as The Rolling Stones, U2, and Led Zeppelin. In comparison to previous albums, Graduation is more introspective, exploring West's own fame and personal issues.Drumming, Neil. Review: Graduation. Entertainment Weekly. Retrieved on 2009-10-06.
thumb|left|The Roland TR-808, the titular drum machine that served as a primary instrument on 808s & Heartbreak.
808s & Heartbreak (2008) marked a radical departure from his previous releases, largely abandoning rap and hip hop stylings in favor of an emotive, stark electropop soundPlagenhoef, Scott. Review: 808s & Heartbreak. Pitchfork Media. Retrieved August 7, 2009. that juxtaposed Auto-Tuned sung vocals and the distorted Roland TR-808 drum machine with droning synthesizers, lengthy strings, somber piano, and tribal rhythms.Kellman, Andy. Review: 808s & Heartbreak. AllMusic. Rovi Corporation. Retrieved August 7, 2009. The album drew comparisons to the work of 1980s post-punk and new wave groups; West confessed an affinity with artists such as Joy Division, Gary Numan, and TJ Swan, and later described 808s as "the first black new wave album." Discussing the album's influence on subsequent hip hop and R&B music, Rolling Stone journalist Matthew Trammell described 808s as "Kanye's most vulnerable work, and perhaps his most brilliant." West recorded his fifth album, My Beautiful Dark Twisted Fantasy (2010), with a wide range of collaborators. The album engages with themes of excess, celebrity, and decadence,Embling (November 2010). Kanye West – My Beautiful Dark Twisted Fantasy | Music Review | Tiny Mix Tapes. Tiny Mix Tapes. Retrieved on 2011-04-30. and has been noted by writers for its maximalist aesthetic and its incorporation of elements from West's previous four albums.Vozick-Levinson, Simon (November 12, 2010). Review: My Beautiful Dark Twisted Fantasy. Entertainment Weekly. Retrieved on 2010-11-12.Dombal, Ryan (November 21, 2010). Review: My Beautiful Dark Twisted Fantasy. Pitchfork Media. Retrieved on 2010-11-21. Entertainment Weekly's Simon Vozick-Levinson noted that such elements "all recur at various points", namely "the luxurious soul of 2004's The College Dropout, the symphonic pomp of Late Registration, the gloss of 2007's Graduation, and the emotionally exhausted electro of 2008's 808s & Heartbreak." In a positive review, Andy Gill of The Independent called it "one of pop's gaudiest, most grandiose efforts of recent years, a no-holds-barred musical extravaganza in which any notion of good taste is abandoned at the door".Gill, Andy (November 19, 2010). Review: My Beautiful Dark Twisted Fantasy. The Independent. Retrieved on 2010-11-19.
Describing his sixth studio album Yeezus (2013) as "a protest to music," West embraced an abrasive style that incorporated a variety of unconventional influences. Music critic Greg Kot described it as "a hostile, abrasive and intentionally off-putting" album that combines "the worlds of" 1980s acid house and contemporary Chicago drill music, 1990s industrial music, and the "avant-rap" of Saul Williams, Death Grips and Odd Future. The album also incorporates elements of trap music, as well as dancehall, punk, and electro. Inspired by the minimalist design of Le Corbusier, the album is primarily electronic in nature and continues West's practice of eclectic and unusual sampling. Rolling Stone called the album a "brilliant, obsessive-compulsive career auto-correct". West's seventh album, The Life of Pablo, was noted for its "raw, occasionally even intentionally messy, composition", in contrast to its predecessor. Rolling Stone wrote that "It's designed to sound like a work in progress." Carl Wilson of Slate characterized the album as creating "strange links between Kanye's many iterations—soul-sample enthusiast, heartbroken Auto-Tune crooner, hedonistic avant-pop composer, industrial-rap shit-talker." West initially characterized the release as "a gospel album." Greg Kot of the Chicago Tribune wrote in his review of The Life of Pablo, "West's version of gospel touches on some of those sonic cues — heavy organ, soaring choirs — but seems more preoccupied with gospel text and the notion of redemption."
Other ventures
Fashion
thumb|left|170px|West in 2007.
Early in his career, West made clear his interest in fashion and his desire to work in the clothing design industry. In September 2005, West announced that he would release his Pastelle Clothing line in spring 2006, claiming, "Now that I have a Grammy under my belt and Late Registration is finished, I am ready to launch my clothing line next spring." The line was developed over the following four years – with multiple pieces teased by West himself – before it was ultimately cancelled in 2009. In 2009, West collaborated with Nike to release his own shoe, the Air Yeezys, with a second version released in 2012. In January 2009, West introduced his first shoe line designed for Louis Vuitton during Paris Fashion Week; the line was released in summer 2009. West has additionally designed footwear for Bape and Italian shoemaker Giuseppe Zanotti.
On October 1, 2011, Kanye West premiered his women's fashion label, DW Kanye West, at Paris Fashion Week.Kanye West – Spring/Summer 2012 ready-to-wear show – The Internet Fashion Database. Retrieved and verified on October 2, 2011. He received support from DSquared2 duo Dean and Dan Caten, Olivier Theyskens, Jeremy Scott, Azzedine Alaïa, and the Olsen twins, who were also in attendance during his show. His debut fashion show received mixed-to-negative reviews, ranging from reserved observations by Style.com to excoriating commentary by The Wall Street Journal, The New York Times, the International Herald Tribune, Elleuk.com, The Daily Telegraph, Harper's Bazaar and many others. On March 6, 2012, West premiered a second fashion line at Paris Fashion Week. The line's reception was markedly improved from the previous presentation, with a number of critics heralding West for his "much improved" sophomore effort.
210px|thumbnail|right|An advertisement for West's 2015 shoe collaboration with Adidas, the Yeezy 350.
On December 3, 2013, Adidas officially confirmed a new shoe collaboration deal with West. After months of anticipation and rumors, West confirmed the release of the Adidas Yeezy Boosts. In 2015, West unveiled his Yeezy Season clothing line, premiering Season 1 in collaboration with Adidas early in the year. The line received positive critical reviews, with Vogue observing "a protective toughness, a body-conscious severity that made the clothes more than a simple accessory." The release of the Yeezy Boosts and the full Adidas collaboration was showcased in New York City on February 12, 2015, with free streaming to 50 cinemas in 13 countries around the world. An initial release of the Adidas Yeezy Boosts was limited to 9,000 pairs, available only in New York City via the Adidas smartphone app; they sold out within 10 minutes. The worldwide release on February 28, 2015, was limited to select boutique stores and the Adidas UK stores. He followed with Season 2 later that year at New York Fashion Week. On February 11, 2016, West premiered his Yeezy Season 3 clothing line at Madison Square Garden in conjunction with the previewing of his album The Life of Pablo. In June 2016, Adidas announced a new long-term contract with West that extends the Yeezy line to a number of stores and into sports performance products, with Yeezys planned for basketball, football, soccer, and more.
Business ventures
West founded the record label and production company GOOD Music in 2004, in conjunction with Sony BMG, shortly after releasing his debut album, The College Dropout. John Legend, Common, and West were the label's inaugural artists. The label houses artists including West, Big Sean, Pusha T, Teyana Taylor, Yasiin Bey / Mos Def, D'banj and John Legend, and producers including Hudson Mohawke, Q-Tip, Travis Scott, No I.D., Jeff Bhasker, and S1. GOOD Music has released ten albums certified gold or higher by the Recording Industry Association of America (RIAA). In November 2015, West appointed Pusha T the new president of GOOD Music.
In August 2008, West revealed plans to open 10 Fatburger restaurants in the Chicago area through his company, KW Foods LLC, which bought the rights to the chain in Chicago. The first location was set to open in September 2008 in Orland Park, with a second following in January 2009; a third was planned but never materialized, and ultimately only two locations opened. In February 2011, West shut down the Fatburger in Orland Park, and later that year the remaining Beverly location was also shuttered.
thumb|right|170px|The logo of West's GOOD Music imprint.
On January 5, 2012, West announced his establishment of the creative content company DONDA, named after his late mother Donda West. In his announcement, West proclaimed that the company would "pick up where Steve Jobs left off"; DONDA would operate as "a design company which will galvanize amazing thinkers in a creative space to bounce their dreams and ideas" with the "goal to make products and experiences that people want and can afford."Graham, Mark, Kanye West's Epic 1600-word Twitter Rant: Neatly Organized for your Reading Pleasure, 5 January 2012, 'VH1', retrieved 4 August 2015. West is notoriously secretive about the company's operations, maintaining neither an official website nor a social media presence.Hope, Clover, Kanye West has a Dream: Inside his Creative Agency DONDA, 19 August 2013, 'VIBE', retrieved 4 August 2015.Pasori, Cedar, How Kanye West's Creative Company DONDA is making its own Brand of Cool, 3 November 2014, 'Complex Magazine', retrieved 4 August 2015. In stating DONDA's creative philosophy, West articulated the need to "put creatives in a room together with like minds" in order to "simplify and aesthetically improve everything we see, taste, touch, and feel." Contemporary critics have noted the consistent minimalistic aesthetic exhibited throughout DONDA creative projects.Sargent, Jordan, DONDA: Kanye West Goes G.O.O.D. Trill Hunting with His Minimalist Design Company, 13 November 2013, 'Spin Magazine', retrieved 4 August 2015.Babcock, Gregory, Kanye's Stylish Pastor Releases DONDA-Designed Book, 23 June 2015, 'Complex Magazine', retrieved 4 August 2015.Lewis, Brittany, Every Music Cover Kanye West's Creative House DONDA Has Created So Far…, 3 November 2014, 'Global Grind', retrieved 4 August 2015.
On March 30, 2015, it was announced that West is a co-owner, with various other music artists, of the music streaming service Tidal. The service specialises in lossless audio and high-definition music videos. Jay-Z acquired the parent company of Tidal, Aspiro, in the first quarter of 2015. Sixteen artist stakeholders, including Jay-Z, Beyoncé, Rihanna, Madonna, Chris Martin and Nicki Minaj, co-own Tidal, with the majority owning a 3% equity stake. The idea of an artist-owned streaming service was conceived by those involved to adapt to the increased demand for streaming within the current music industry, and to rival other streaming services such as Spotify, which have been criticised for their low payout of royalties. "The challenge is to get everyone to respect music again, to recognize its value", stated Jay-Z on the release of Tidal.
On June 6, 2016, West announced that the Yeezy Season 2 Zine and the Adidas Yeezy Boost 750 would be released to retailers on June 11. The Boost 750 is a high-top shoe with a glow-in-the-dark sole. In an interview with Vogue, he stated that there would be Yeezy stores, with the first located in California.
Philanthropy
West, alongside his mother, founded the "Kanye West Foundation" in Chicago in 2003, tasked with a mission to battle dropout and illiteracy rates, while partnering with community organizations to provide underprivileged youth access to music education. In 2007, West and the foundation partnered with Strong American Schools as part of their "Ed in '08" campaign. As spokesman for the campaign, West appeared in a series of PSAs for the organization, and hosted an inaugural benefit concert in August of that year.
In 2008, following the death of West's mother, the foundation was rechristened "The Dr. Donda West Foundation." The foundation ceased operations in 2011.
West and his friend Rhymefest also founded Donda's House, Inc. Its signature program, "Got Bars", is a free music/lyric composition and performance program aimed at helping at-risk Chicago youth between the ages of 15 and 24. Participants are selected through an application and audition process, and lessons cover how to write and record music. The curriculum is based on the teaching philosophy and pedagogy of Dr. Donda West, with a focus on collaborative and experiential learning.
West has additionally appeared in and participated in many fundraisers and benefit concerts, and has done community work for Hurricane Katrina relief, the Kanye West Foundation, the Millions More Movement, 100 Black Men of America, a Live Earth concert benefit, a World Water Day rally and march, Nike runs, and an MTV special aimed at giving young Iraq War veterans who struggle with debt and PTSD a second chance after returning home.
Politics
In September 2015, West announced that he intends to run for President of the United States in 2020.
On December 13, 2016, West met with President-elect Donald Trump to discuss (according to West) "bullying, supporting teachers, modernizing curriculums, and violence in Chicago." The rapper previously stated he would have voted for Trump had he voted. He later implied on Twitter that he intends to run for President in 2024 due to Trump's win in the 2016 elections.
Controversies
General media
West has been an outspoken and controversial celebrity throughout his career, receiving both criticism and praise from many, including the mainstream media, other artists and entertainers, and two U.S. presidents. On September 2, 2005, during a benefit concert for Hurricane Katrina relief on NBC, A Concert for Hurricane Relief, West (a featured speaker) accused President George W. Bush of not "car[ing] about black people". When West was presenting alongside actor Mike Myers, he deviated from the prepared script to criticize the media's portrayal of hurricane victims, saying:
I hate the way they portray us in the media. You see a black family, it says, 'They're looting.' You see a white family, it says, 'They're looking for food.' And, you know, it's been five days [waiting for federal help] because most of the people are black. And even for me to complain about it, I would be a hypocrite because I've tried to turn away from the TV because it's too hard to watch. I've even been shopping before even giving a donation, so now I'm calling my business manager right now to see what is the biggest amount I can give, and just to imagine if I was down there, and those are my people down there. So anybody out there that wants to do anything that we can help—with the way America is set up to help the poor, the black people, the less well-off, as slow as possible. I mean, the Red Cross is doing everything they can. We already realize a lot of people that could help are at war right now, fighting another way—and they've given them permission to go down and shoot us!
Myers spoke next and continued to read the script. Once it was West's turn to speak again, he said, "George Bush doesn't care about black people." At this point, telethon producer Rick Kaplan cut off the microphone and then cut away to Chris Tucker, who was unaware of the cut for a few seconds. Still, West's comment reached much of the United States. Bush stated in an interview that the comment was "one of the most disgusting moments" of his presidency. In November 2010, in a taped interview with Matt Lauer for the Today show, West expressed regret for his criticism of Bush. "I would tell George Bush in my moment of frustration, I didn't have the grounds to call him a racist", he told Lauer. "I believe that in a situation of high emotion like that we as human beings don't always choose the right words." The following day, Bush reacted to the apology in a live interview with Lauer saying he appreciated the rapper's remorse. "I'm not a hater", Bush said. "I don't hate Kanye West. I was talking about an environment in which people were willing to say things that hurt. Nobody wants to be called a racist if in your heart you believe in equality of races."
Reactions were mixed, but some felt that West had no need to apologize. "It was not the particulars of your words that mattered, it was the essence of a feeling of the insensitivity towards our communities that many of us have felt for far too long", argued Def Jam co-founder Russell Simmons. Bush himself was receptive to the apology, saying, "I appreciate that. It wasn't just Kanye West who was talking like that during Katrina, I cited him as an example, I cited others as an example as well. You know, I appreciate that."
In September 2013, West was widely rebuked by human rights groups for performing in Kazakhstan at the wedding of authoritarian President Nursultan Nazarbayev's grandson. He traveled to Kazakhstan, which has one of the poorest human rights records in the world, as a personal guest of Nazarbayev. Other notable Western performers, including Sting, have previously cancelled performances in the country over human rights concerns. West was reportedly paid US$3 million for his performance. West had previously participated in cultural boycotts, joining Shakira and Rage Against The Machine in refusing to perform in Arizona after the 2010 implementation of stop and search laws directed against potential illegal aliens.
Later in 2013, West launched a tirade on Twitter directed at talk show host Jimmy Kimmel after his ABC program Jimmy Kimmel Live! ran a sketch on September 25 involving two children re-enacting West's recent interview with Zane Lowe for BBC Radio 1, in which he had called himself the biggest rock star on the planet. Kimmel revealed the following night that West had called him to demand an apology shortly before taping."Kanye West goes after Jimmy Kimmel in Twitter rant over BBC interview spoof, late-night host responds: 'Right now we're at DefKanye Five'", Daily News (New York), September 27, 2013. Retrieved September 28, 2013
During a November 26, 2013 radio interview, West explained why he believed that President Obama had problems pushing policies in Washington: "Man, let me tell you something about George Bush and oil money and Obama and no money. People want to say Obama can't make these moves or he's not executing. That's because he ain't got those connections. Black people don't have the same level of connections as Jewish people...We ain't Jewish. We don't got family that got money like that."Kanye West Guilty Of Anti-Semitism?! Anti-Defamation League Demands Apology For Latest Comments! by Perez Hilton, May 12, 2013 In response to his comments, the Anti-Defamation League stated: "There it goes again, the age-old canard that Jews are all-powerful and control the levers of power in government."Kanye West's Week Includes Accusations Of Anti-Semitism, Weak Attendance At Kansas City Gig by The Huffington Post, Matthew Jacobs, May 12, 2013 On December 21, 2013, West backed off of the original comment and told a Chicago radio station that "I thought I was giving a compliment, but if anything it came off more ignorant. I don't know how being told you have money is an insult."
In February 2016, West again became embroiled in controversy when he posted a tweet seemingly asserting Bill Cosby's innocence in the wake of over 50 women making allegations of sexual assault against Cosby. That same month, West was also involved in a short-lived social media altercation with rapper Wiz Khalifa on Twitter that eventually involved their mutual ex-partner, Amber Rose, who objected to West's mention of her and Khalifa's child. The feud involved allegations by Rose concerning her sexual relationship with West, and received significant media attention. As of February 2, 2016, West and Khalifa had reconciled.
Over the course of his career, West has been known to compare himself to various influential figures and entities in art and culture, including Apple founder Steve Jobs, animator Walt Disney, pop artist Andy Warhol, entrepreneur Howard Hughes, singer Michael Jackson, Renaissance-era polymath Leonardo da Vinci, fashion designers Ralph Lauren and Anna Wintour, athletics company Nike, and technology company Google.
Award shows
In 2004, West had his first of a number of public incidents during his attendance at music award events. At the American Music Awards of 2004, West stormed out of the auditorium after losing Best New Artist to country singer Gretchen Wilson. He later commented, "I felt like I was definitely robbed [...] I was the best new artist this year." After the 2006 Grammy nominations were released, West said he would "really have a problem" if he did not win the Album of the Year, saying, "I don't care what I do, I don't care how much I stunt – you can never take away from the amount of work I put into it. I don't want to hear all of that politically correct stuff." On November 2, 2006, when his "Touch the Sky" failed to win Best Video at the MTV Europe Music Awards, West went onto the stage as the award was being presented to Justice and Simian for "We Are Your Friends" and argued that he should have won the award instead. Hundreds of news outlets worldwide criticized the outburst. On November 7, 2006, West apologized for this outburst publicly during his performance as support act for U2 for their Vertigo concert in Brisbane. He later spoofed the incident on the 33rd-season premiere of Saturday Night Live in September 2007.
On September 9, 2007, West suggested that his race had something to do with his being overlooked for opening the 2007 MTV Video Music Awards (VMAs) in favor of Britney Spears; he claimed, "Maybe my skin's not right." West was performing at the event; that night, he lost all five awards that he was nominated for, including Best Male Artist and Video of the Year. After the show, he was visibly upset that he had lost at the VMAs two years in a row, stating that he would not come back to MTV ever again. He also appeared on several radio stations saying that when he made the song "Stronger" that it was his dream to open the VMAs with it. He has also stated that Spears has not had a hit in a long period of time and that MTV exploited her for ratings.
On September 13, 2009, during the 2009 MTV Video Music Awards while Taylor Swift was accepting her award for Best Female Video for "You Belong with Me", West went on stage and grabbed the microphone to proclaim that Beyoncé's video for "Single Ladies (Put a Ring on It)", nominated for the same award, was "one of the best videos of all time". He was subsequently removed from the remainder of the show for his actions. When Beyoncé later won the award for Best Video of the Year for "Single Ladies (Put a Ring on It)", she called Swift up on stage so that she could finish her acceptance speech. West was criticized by various celebrities for the outburst, and by President Barack Obama, who called West a "jackass". In addition, West's VMA disruption sparked a large influx of Internet photo memes with blogs, forums and "tweets" with the "Let you finish" photo-jokes. He posted a Tweet soon after the event where he stated, "Everybody wanna booooo me but I'm a fan of real pop culture... I'm not crazy y'all, I'm just real." He then posted two apologies for the outburst on his personal blog; one on the night of the incident, and the other the following day, when he also apologized during an appearance on The Jay Leno Show. After Swift appeared on The View two days after the outburst, partly to discuss the matter, West called her to apologize personally. Swift said she accepted his apology.
In September 2010, West wrote a series of apologetic tweets addressed to Swift including "Beyonce didn't need that. MTV didn't need that and Taylor and her family friends and fans definitely didn't want or need that" and concluding with "I'm sorry Taylor." He also revealed he had written a song for Swift and if she did not accept the song, he would perform it himself. However, on November 8, 2010, in an interview with a Minnesota radio station, he seemed to recant his past apologies by attempting to describe the act at the 2009 awards show as "selfless" and downgrade the perception of disrespect it created. West's 2016 single "Famous" was met with scrutiny for a controversial lyrical reference to Taylor Swift: "I feel like me and Taylor might still have sex/Why?/I made that bitch famous!" After West claimed to have obtained Swift's approval over the criticized lyric, Swift denied the claim, criticizing West and denouncing the lyric as "misogynistic" in a statement. Several months later, West's wife Kim Kardashian released a video allegedly capturing a conversation between Swift and West in which Swift appears to approve the lyric, seemingly validating West's claim.
On February 8, 2015, at the 57th Annual Grammy Awards, West walked on stage as Beck was accepting his award for Album of the Year and then walked off stage, leaving the audience to think he was joking. After the awards show, West stated in an interview that he was not joking and that "Beck needs to respect artistry, he should have given his award to Beyoncé". On February 26, 2015, he publicly apologized to Beck on Twitter.
On August 30, 2015, West was presented with the Michael Jackson Video Vanguard Award at the MTV Video Music Awards. In his acceptance speech, he stated, "Y'all might be thinking right now, 'I wonder did he smoke something before he came out here?' And the answer is: 'Yes, I rolled up a little something. I knocked the edge off.'" At the end of his speech, he announced, "I have decided in 2020 to run for president."
Petitions
Music fans around the globe have turned to Change.org to try to block West's participation in various events. The largest unsuccessful petition was directed at the Glastonbury Festival 2015, with more than 133,000 signatories stating they would prefer a rock band to headline.
On July 20, 2015, within five days of West's announcement as the headlining artist of the closing ceremony of the 2015 Pan American Games, a Change.org petition collected over 50,000 signatures calling for West's removal as headliner, on the grounds that the headlining artist should be Canadian. West nevertheless performed, closing the show by tossing his faulty microphone into the air and walking off stage.
Personal life
Relationships
thumb|140px|West's wife Kim Kardashian, pictured in September 2012
West began an on-and-off relationship with designer Alexis Phifer in 2002, and they became engaged in August 2006. The pair ended their 18-month engagement in 2008. West subsequently dated model Amber Rose from 2008 until the summer of 2010. West began dating reality star and longtime friend Kim Kardashian in April 2012. West and Kardashian became engaged in October 2013, and married on May 24, 2014, at Forte di Belvedere in Florence, Italy. Their private ceremony was subject to widespread mainstream coverage, with West taking issue with the couple's portrayal in the media.Lee, Christina. "Kanye West Blasts Media Coverage Of Kim Kardashian In Wedding Speech." Idolator. 28 May 2014. They have two children: daughter North "Nori" West (born June 15, 2013) and son Saint West (born December 5, 2015). In April 2015, West and Kardashian traveled to Jerusalem to have North baptized in the Armenian Apostolic Church at the Cathedral of St. James. The couple's high status and respective careers have resulted in their relationship becoming subject to heavy media coverage; The New York Times referred to their marriage as "a historic blizzard of celebrity."Caramanica, Jon. "The Agony and the Ecstasy of Kanye West." New York Times. 10 April 2015.
Mother's death
On November 10, 2007, at approximately 7:35 pm, paramedics responding to an emergency call transported West's mother, Donda West, to the nearby Centinela Freeman Hospital in Marina del Rey, California. She was unresponsive in the emergency room, and after resuscitation attempts, doctors pronounced her dead at approximately 8:30 pm, at age 58. The Los Angeles County coroner's office said in January 2008 that she had died of heart disease while suffering "multiple post-operative factors" after plastic surgery; she had undergone liposuction and breast reduction. Beverly Hills plastic surgeon Andre Aboolian had refused to perform the surgery because she had a health condition that placed her at risk for a heart attack. Aboolian referred her to an internist to investigate her cardiac issue, but she never met with the doctor he recommended and instead had the procedures performed by a third doctor, Jan Adams.
Donda West in August 2007|thumb|left|upright
Adams sent condolences to Donda West's family but declined to publicly discuss the procedure, citing confidentiality. West's family, through celebrity attorney Ed McPherson, filed complaints with the Medical Board against Adams and Aboolian for violating patient confidentiality following her death. Adams had previously been under scrutiny by the medical board. He appeared on Larry King Live on November 20, 2007, but left before speaking. Two months later, he appeared again, with his attorney, stating he was there to "defend himself", and said that the recently released autopsy results "spoke for themselves". The final coroner's report, issued January 10, 2008, concluded that Donda West died of "coronary artery disease and multiple post-operative factors due to or as a consequence of liposuction and mammoplasty".
The funeral and burial for Donda West was held in Oklahoma City on November 20, 2007. West played his first concert following the funeral at The O2 in London on November 22. He dedicated a performance of "Hey Mama", as well as a cover of Journey's "Don't Stop Believin'", to his mother, and did so on all other dates of his Glow in the Dark tour.
At a December 2008 press conference in New Zealand, West spoke about his mother's death for the first time. "It was like losing an arm and a leg and trying to walk through that", he told reporters."HipHopDX.com Kanye West Speaks Candidly About Mother, Religion, Rap". HipHopDX.com. Retrieved December 2, 2008.
California governor Arnold Schwarzenegger signed the "Donda West Law", legislation that makes it mandatory for patients to obtain medical clearance before undergoing elective cosmetic surgery.
Legal issues
In December 2006, Robert "Evel" Knievel sued West for trademark infringement over West's video for "Touch the Sky". Knievel took issue with a "sexually charged video" in which West takes on the persona of "Evel Kanyevel" and attempts to fly a rocket over a canyon. The suit claimed infringement on Knievel's trademarked name and likeness. Knievel also claimed that the "vulgar and offensive" images depicted in the video damaged his reputation. The suit sought monetary damages and an injunction to stop distribution of the video. West's attorneys argued that the music video amounted to satire and was therefore covered under the First Amendment. Just days before his death in November 2007, Knievel amicably settled the suit after being paid a visit by West, saying, "I thought he was a wonderful guy and quite a gentleman."
On September 11, 2008, West and his road manager/bodyguard Don "Don C." Crowley were arrested at Los Angeles International Airport and booked on charges of felony vandalism after an altercation with the paparazzi in which West and Crowley broke the photographers' cameras. West was later released from the Los Angeles Police Department's Pacific Division station in Culver City on $20,000 bail bond. On September 26, 2008, the Los Angeles County District Attorney's Office said it would not file felony counts against West over the incident. Instead the case file was forwarded to the city attorney's office, which charged West with one count of misdemeanor vandalism, one count of grand theft and one count of battery and his manager with three counts of each on March 18, 2009. West's and Crowley's arraignment was delayed from an original date of April 14, 2009.
West was arrested again on November 14, 2008 at the Hilton hotel near Gateshead after another scuffle involving a photographer outside the famous Tup Tup Palace nightclub in Newcastle upon Tyne. He was later released "with no further action", according to a police spokesperson.
On July 19, 2013, West was leaving LAX when he was surrounded by dozens of paparazzi. He became increasingly agitated as a photographer, Daniel Ramos, continued to ask him why people were not allowed to speak in his presence. West said, "I told you don't talk to me, right? You trying to get me in trouble so I steal off on you and have to pay you like $250,000 and shit," then allegedly charged the man and grabbed him and his camera. The incident, captured by TMZ, lasted only a few seconds before a female voice could be heard telling West to stop; West then released the man and his camera and drove away from the scene. Medics were later called to the scene on behalf of the photographer. It was reported that West could be charged with felony attempted robbery over the matter.Coleman, C. Vernon (July 13, 2013). "Kanye West Attacks Paparazzi Outside LAX, Might Face Attempted Robbery Charges". XXL. However, the charges were reduced to misdemeanor criminal battery and attempted grand theft.Steiner, B.J. (September 13, 2013). "Kanye West Charged In LAX Paparazzi Attack". XXL. In March 2014, West was sentenced to two years' probation for the misdemeanor battery conviction and required to attend 24 anger management sessions, perform 250 hours of community service and pay restitution to Ramos.
Religious beliefs
After the success of his song "Jesus Walks" from the album The College Dropout, West was questioned on his beliefs and said, "I will say that I'm spiritual. I have accepted Jesus as my Savior. And I will say that I fall short every day." In a 2008 interview with The Fader, West stated that "I'm like a vessel, and God has chosen me to be the voice and the connector."
In a 2009 interview with online magazine Bossip, West clarified that he believed in God, but "would never go into a religion," explaining that "I feel like religion is more about separation and judgment than bringing people together and understanding. That's all I'm about." More recently, in September 2014, West referred to himself as a Christian during one of his concerts.
Mental health
In 2010, at a screening of his film Runaway, West told the audience that he had once considered suicide. On November 20, 2016, shortly before abruptly ending a concert, he said, "Jay Z—call me, bruh. You still ain't called me. Jay Z, I know you got killers. Please don't send them at my head. Please call me. Talk to me like a man." The following day, he was committed to the UCLA Medical Center with hallucinations and paranoia. Contrary to early reports, West was not taken to the hospital involuntarily; he was persuaded to go by authorities. While the episode was first described as one of "temporary psychosis" caused by dehydration and sleep deprivation, West's mental state was abnormal enough for his 21 cancelled concerts to be covered by his insurance policy; he was reportedly paranoid and depressed throughout the hospitalization but remained formally undiagnosed. Both the Paris robbery of his wife and the anniversary of his mother's death are thought to have been major contributors to his mental state, with the former likely to have triggered the paranoia. On November 30, West was released from the hospital.
Legacy
West is among the most critically acclaimed artists of the twenty-first century, receiving praise from music critics, fans, fellow musicians, artists, and wider cultural figures for his work. AllMusic editor Jason Birchmeier writes of his impact, "As his career progressed throughout the early 21st century, West shattered certain stereotypes about rappers, becoming a superstar on his own terms without adapting his appearance, his rhetoric, or his music to fit any one musical mold." Jon Caramanica of The New York Times said that West has been "a frequent lightning rod for controversy, a bombastic figure who can count rankling two presidents among his achievements." Village Voice Media senior editor Ben Westhoff dubbed him the greatest hip hop artist of all time, writing that "he's made the best albums and changed the game the most, and his music is the most likely to endure," while Complex called him the 21st century's "most important artist of any art form, of any genre." In 2016, The Guardian compared West to the late David Bowie within the "modern mainstream", arguing that "there is nobody else who can sell as many records as West does [...] while remaining so resolutely experimental and capable of stirring things up culturally and politically."
Rolling Stone credited West with transforming hip hop's mainstream, "establishing a style of introspective yet glossy rap [...]", and called him "as interesting and complicated a pop star as the 2000s produced—a rapper who mastered, upped and moved beyond the hip-hop game, a producer who created a signature sound and then abandoned it to his imitators, a flashy, free-spending sybarite with insightful things to say about college, culture and economics, an egomaniac with more than enough artistic firepower to back it up." West's middle-class background, flamboyant fashion sense and outspokenness have set him apart from other rappers. Early in his career, he was among the first rappers to publicly criticize the preponderance of homophobia in hip hop. The sales competition between rapper 50 Cent's Curtis and West's Graduation altered the direction of hip hop and helped pave the way for new rappers who did not follow the hardcore-gangster mold. Rosie Swash of The Guardian viewed the sales competition as a historical moment in hip-hop, because it "highlighted the diverging facets of hip-hop in the last decade; the former was gangsta rap for the noughties, while West was the thinking man's alternative."Swash, Rosie (June 13, 2011). Kanye v 50 Cent. The Guardian. Guardian News and Media Limited. Retrieved August 9, 2011. West's 2008 album 808s & Heartbreak polarized both listeners and critics upon its release, but was commercially successful and impacted hip hop and pop stylistically, as it laid the groundwork for a new wave of artists who generally eschewed typical rap braggadocio for intimate subject matter and introspection, including Frank Ocean, The Weeknd, Drake, Future, Kid Cudi, Childish Gambino, Lil Durk, Chief Keef, and Soulja Boy.Rabin, Nathan. Review: Thank Me Later. The A.V. Club. Retrieved June 15, 2010. According to Ben Detrick of XXL magazine, West effectively led a new wave of artists, including Kid Cudi, Wale, Lupe Fiasco, Kidz in the Hall, and Drake, who lacked the interest or ability to rap about gunplay or drug-dealing. In 2013, Julianne Escobedo Shepherd of Spin described West as fronting a "new art-pop era" in contemporary music, in which musicians draw widely on the visual arts as a signifier of both creative exploration and extravagant wealth.
A substantial number of artists and other figures have professed admiration for West's work, including hip hop artists Rakim, RZA of Wu-Tang Clan, Chuck D of Public Enemy, and DJ Premier of Gang Starr. Experimental rock pioneer and Velvet Underground founder Lou Reed said of West that "the guy really, really, really is talented. He's really trying to raise the bar. No one's near doing what he's doing, it's not even on the same planet." Musicians such as Paul McCartney and Prince have also commended West's work. Tesla Motors CEO and inventor Elon Musk complimented West in a piece for Time magazine's 100 most influential people list, writing that:
"Kanye West would be the first person to tell you he belongs on this list. The dude doesn't believe in false modesty, and he shouldn't [...] He fought for his place in the cultural pantheon with a purpose. In his debut album, over a decade ago, Kanye issued what amounted to a social critique and a call to arms (with a beat): "We rappers is role models: we rap, we don't think." But Kanye does think. Constantly. About everything. And he wants everybody else to do the same: to engage, question, push boundaries. Now that he's a pop-culture juggernaut, he has the platform to achieve just that. He's not afraid of being judged or ridiculed in the process. Kanye's been playing the long game all along, and we're only just beginning to see why."
Drake, Nicki Minaj and Casey Veggies have acknowledged being influenced directly by West. Non-rap artists such as English singer-songwriters Adele and Lily Allen, New Zealand artist Lorde, American electropop singer Halsey, English rock band Arctic Monkeys, Sergio Pizzorno of English rock band Kasabian, and the American indie rock bands MGMT and Yeah Yeah Yeahs have cited West as an influence. Experimental and electronic artists such as James Blake, Daniel Lopatin, and Tim Hecker have also cited West's work as an inspiration.
Achievements
West's first six solo studio albums have all gone platinum, and most have received numerous awards and critical acclaim. Yeezus, his sixth solo album, became his fifth consecutive No. 1 album in the U.S. upon release. West has had six songs exceed 3 million in digital sales as of December 2012, with "Gold Digger" selling 3,086,000, "Stronger" selling 4,402,000, "Heartless" selling 3,742,000, "E.T." selling over 4,000,000, "Love Lockdown" selling over 3,000,000, and "Niggas in Paris" selling over 3,000,000, placing him third in overall digital sales of the past decade. He has sold over 30 million digital songs in the United States, making him one of the best-selling digital artists of all time.
200px|thumb|right|West speaks after receiving an honorary doctorate from SAIC
As of 2013, West has won a total of 21 Grammy Awards, making him one of the most awarded artists of all time. About.com ranked him No. 8 on its "Top 50 Hip-Hop Producers" list. On May 16, 2008, West was crowned by MTV as the year's No. 1 "Hottest MC in the Game", and on December 17, 2010, he was voted MTV Man of the Year. Billboard ranked him No. 3 on its list of Top 10 Producers of the Decade. West ties with Bob Dylan for having topped the annual Pazz & Jop critics' poll the most times, with four number-one albums each. He has also been included twice in the Time 100 annual lists of the most influential people in the world, as well as in a number of Forbes annual lists.
In its 2012 list of the "500 Greatest Albums of All Time", Rolling Stone included three of West's albums: The College Dropout at number 298, Late Registration at number 118, and My Beautiful Dark Twisted Fantasy at number 353.
On August 19, 2014, the online music publication Pitchfork ranked My Beautiful Dark Twisted Fantasy as the best album of the decade "so far" (2010–2014), with Yeezus ranked eighth on the same 100-album list. During the same week, the song "Runaway" (featuring Pusha T) was ranked third in the publication's list of the 200 "best tracks" released since 2010. According to Acclaimed Music, a site which aggregates critics' rankings, West is the 21st most celebrated artist in all of popular music and its 10th most celebrated solo artist.
Discography
Studio albums
The College Dropout (2004)
Late Registration (2005)
Graduation (2007)
808s & Heartbreak (2008)
My Beautiful Dark Twisted Fantasy (2010)
Watch the Throne (with Jay Z) (2011)
Yeezus (2013)
The Life of Pablo (2016)
Videography
The College Dropout Video Anthology (2004)
Late Orchestration (2006)
VH1 Storytellers (2010)
Tours
Touch The Sky Tour (2005)
Glow in the Dark Tour (2008)
Fame Kills: Starring Kanye West and Lady Gaga (Cancelled) (2009–10)
Watch the Throne Tour (With Jay Z) (2011–12)
The Yeezus Tour (2013–14)
Saint Pablo Tour (2016)
Filmography
Film
Year – Title – Role – Notes
2004 – Fade to Black – Himself
2005 – Dave Chappelle's Block Party – Himself – Guest performance
2005 – State Property 2 – Himself – Cameo appearance
2008 – The Love Guru – Himself – Cameo appearance
2009 – We Were Once a Fairytale – Himself – Short film, directed by Spike Jonze
2010 – Runaway – Griffin – Short film, also director and writer
2012 – Cruel Summer – Ibrahim – Short film, also director, producer and writer
2013 – Anchorman 2: The Legend Continues – J.J. Jackson of MTV News – Uncredited cameo
Television
Year – Title – Role – Notes
2007 – Entourage – Himself – Season 4, Episode 11
2010–2012 – The Cleveland Show – Kenny West (voice) – 5 episodes
2012–present – Keeping Up with the Kardashians – Himself
2015 – I Am Cait – Himself – Episode: "Meeting Cait"
Bibliography
Raising Kanye: Life Lessons from the Mother of a Hip-Hop Superstar (2007)
Thank You and You're Welcome (2009)
Through the Wire: Lyrics & Illuminations (2009)
Glow in the Dark (2009)
See also
References
Notes
Further reading
Kanye in Oxford: The #YeezOx highlights. Retrieved April 27, 2015
External links
Category:1977 births
Category:20th-century American musicians
Category:21st-century American singers
Category:21st-century American businesspeople
Category:African-American businesspeople
Category:African-American Christians
Category:African-American fashion designers
Category:African-American film directors
Category:African-American investors
Category:African-American male rappers
Category:African-American record producers
Category:Alternative hip hop musicians
Category:American fashion businesspeople
Category:American food industry business executives
Category:American hip hop record producers
Category:American hip hop singers
Category:American music industry executives
Category:American music publishers (people)
Category:American music video directors
Category:American philanthropists
Category:American restaurateurs
Category:Brit Award winners
Category:NME Awards winners
Category:Businesspeople from Chicago
Category:Chicago State University alumni
Category:Def Jam Recordings artists
Category:Film directors from Illinois
Category:GOOD Music artists
Category:Grammy Award winners
Category:Hip hop activists
Category:Kardashian family
Category:Kim Kardashian
Category:Living people
Category:Midwest hip hop musicians
Category:Participants in American reality television series
Category:Rappers from Chicago
Category:Roc-A-Fella Records artists
Category:Shoe designers
Category:Songwriters from Illinois
Category:World Music Awards winners
Category:Music auteurs
Umayyad Caliphate
The Umayyad Caliphate (trans. Al-Khilāfah al-ʾumawiyya), also spelled Omayyad, was the second of the four major Arab caliphates established after the death of Muhammad. This caliphate was centred on the Umayyad dynasty (al-ʾUmawiyyūn, or Banū ʾUmayya, "Sons of Umayya"), hailing from Mecca. The Umayyad family had first come to power under the third caliph, Uthman ibn Affan (r. 644–656), but the Umayyad regime was founded by Muawiya ibn Abi Sufyan, long-time governor of Syria, after the end of the First Muslim Civil War in AD 661/41 AH. Syria remained the Umayyads' main power base thereafter, and Damascus was their capital. The Umayyads continued the Muslim conquests, incorporating the Caucasus, Transoxiana, Sindh, the Maghreb and the Iberian Peninsula (Al-Andalus) into the Muslim world. At its greatest extent, the Umayyad Caliphate covered a vast territory and encompassed 62 million people (29% of the world's population), making it one of the largest empires in history in both area and proportion of the world's population.
At the time, Umayyad taxation and administrative practice were perceived as unjust by some Muslims. The Christian and Jewish population still had autonomy; their judicial matters were dealt with in accordance with their own laws and by their own religious heads or their appointees, although they did pay a poll tax for policing to the central state.A Chronology Of Islamic History 570-1000 CE, By H.U. Rahman 1999 Page 128 Muhammad had stated explicitly during his lifetime that Abrahamic religious groups (still a majority during the time of the Umayyad Caliphate) should be allowed to practice their own religion, provided that they paid the jizya taxation. The welfare provisions for both the Muslim and the non-Muslim poor, started by Umar ibn al-Khattab, also continued, financed by the zakat tax levied only on Muslims. Muawiya's wife Maysum (Yazid's mother) was also a Christian. Relations between the Muslims and the Christians in the state were stable during this time. The Umayyads fought frequent battles with the Christian Byzantines without needing to worry about security in Syria, which, like many other parts of the empire, had remained largely Christian. Prominent positions were held by Christians, some of whom belonged to families that had served in Byzantine governments. The employment of Christians was part of a broader policy of religious assimilation that was necessitated by the presence of large Christian populations in the conquered provinces, as in Syria. This policy also boosted Muawiya's popularity and solidified Syria as his power base.Middle East, Western Asia, and Northern Africa By Ali Aldosari, page 185 The Tragedy of the Templars: The Rise and Fall of the Crusader States By Michael Haag Chapter 3 Palestine under the Umayyads and the Arab Tribe
The rivalries between the Arab tribes had caused unrest in the provinces outside Syria, most notably in the Second Muslim Civil War of AD 680–692 and the Berber Revolt of 740–743. During the Second Civil War, leadership of the Umayyad clan shifted from the Sufyanid branch of the family to the Marwanid branch. As the constant campaigning exhausted the resources and manpower of the state, the Umayyads, weakened by the Third Muslim Civil War of 744–747, were finally toppled by the Abbasid Revolution in 750/132 AH. A branch of the family fled across North Africa to Al-Andalus, where they established the Caliphate of Córdoba, which lasted until 1031 before falling due to the Fitna of al-Ándalus.
Origins
According to tradition, the Umayyad family (also known as the Banu Abd-Shams) and Muhammad both descended from a common ancestor, Abd Manaf ibn Qusai, and they originally came from the city of Mecca. Muhammad descended from Abd Manāf via his son Hashim, while the Umayyads descended from Abd Manaf via a different son, Abd-Shams, whose son was Umayya. The two families are therefore considered to be different clans (those of Hashim and of Umayya, respectively) of the same tribe (that of the Quraish). However, Shia Muslim historians suspect that Umayya was an adopted son of Abd Shams, and thus not a blood relative of Abd Manaf ibn Qusai. Umayya was later discarded from the noble family. Sunni historians disagree with this and view the Shia claims as outright polemics stemming from hostility to the Umayyad family in general. They point to the fact that Uthman's grandsons, Zaid bin Amr bin Uthman bin Affan and Abdullah bin Amr bin Uthman, married Sukaina and Fatima, the daughters of Hussein, the son of Ali, to show the closeness of Banu Hashim and Banu Umayya.History of Prophets and Kings by Al Tabari Vol 18
While the Umayyads and the Hashimites may have had bitterness between the two clans before Muhammad, the rivalry turned into a severe case of tribal animosity after the Battle of Badr. The battle saw three top leaders of the Umayyad clan (Utba ibn Rabi'ah, Walid ibn Utbah and Shaybah) killed by Hashimites (Ali, Hamza ibn ‘Abd al-Muttalib and Ubaydah ibn al-Harith) in a three-on-three melee.Sunan Abu Dawud: Book 14, Number 2659 This fueled the opposition of Abu Sufyan ibn Harb, the grandson of Umayya, to Muhammad and to Islam. Abu Sufyan sought to exterminate the adherents of the new religion by waging another battle with Muslims based in Medina only a year after the Battle of Badr. He did this to avenge the defeat at Badr. The Battle of Uhud is generally believed by scholars to be the first defeat for the Muslims, as they had incurred greater losses than the Meccans. After the battle, Abu Sufyan's wife Hind, who was also the daughter of Utba ibn Rabi'ah, is reported to have cut open the corpse of Hamza, taking out his liver which she then attempted to eat.Ibn Ishaq (1955) 380—388, cited in Peters (1994) p. 218 Within five years after his defeat in the Battle of Uhud, however, Muhammad took control of MeccaWatt (1956), p. 66 and announced a general amnesty for all. Abu Sufyan and his wife Hind embraced Islam on the eve of the conquest of Mecca, as did their son (the future caliph Muawiyah I).
thumb|Expansion of the caliphate under the Umayyads.
Most historians consider Caliph Muawiyah (661–80) to have been the second ruler of the Umayyad dynasty, even though he was the first to assert the Umayyads' right to rule on a dynastic principle. It was really the caliphate of Uthman Ibn Affan (644–656), a member of the Umayyad clan himself, that witnessed the revival and then the ascendancy of the Umayyad clan to the corridors of power. Uthman placed some of the trusted members of his clan in prominent and strong positions throughout the state. Most notable was the appointment of Marwan ibn al-Hakam, Uthman's first cousin, as his top advisor, which created a stir among the Hashimite companions of Muhammad, as Marwan, along with his father Al-Hakam ibn Abi al-'As, had been permanently exiled from Medina by Muhammad during his lifetime. Uthman also appointed as governor of Kufa his half-brother, Walid ibn Uqba, who was accused by Hashimites of leading prayer while under the influence of alcohol.Ibn Taymiya, in his A Great Compilation of Fatwa Uthman also consolidated Muawiyah's governorship of Syria by granting him control over a larger areaIbn Kathir: Al-Bidayah wal-Nihayah, Volume 8 page 164 and appointed his foster brother Abdullah ibn Saad as the Governor of Egypt. However, since Uthman never named an heir, he cannot be considered the founder of a dynasty.
In 639, Muawiyah I was appointed as the governor of Syria after the previous governor Abu Ubaidah ibn al-Jarrah died in a plague along with 25,000 other people.Wilferd Madelung, The Succession to Muhammad: A Study of the Early Caliphate, p.61 To stop the Byzantine harassment from the sea during the Arab-Byzantine Wars, in 649 Muawiyah I set up a navy manned by Monophysite Christian, Copt and Jacobite Syrian Christian sailors and Muslim troops. This resulted in the defeat of the Byzantine navy at the Battle of the Masts in 655, opening up the Mediterranean.European Naval and Maritime History, 300-1500 By Archibald Ross Lewis, Timothy J. Runyan, page 24Leonard Michael Kroll, History of the Jihad, page 123Jim Bradbury, The Medieval Siege, A History of Byzantium By Timothy E. Gregory page 183Prophets and Princes: Saudi Arabia from Muhammad to the Present By Mark Weston, page 61Prophets and Princes: Saudi Arabia from Muhammad to the Present By Mark Weston page 11
Muawiyah I was a very successful governor and built up a very loyal and disciplined army from the old Roman Syrian army. He also befriended Amr ibn al-As who had conquered Egypt but was removed by Uthman ibn al-Affan.
The Quran and Muhammad talked about racial equality and justice as in The Farewell Sermon.The Spread of Islam: The Contributing Factors By Abu al-Fazl Izzati, A. Ezzati Page 301Islam For Dummies By Malcolm Clark PageSpiritual Clarity By Jackie Wellman Page 51The Koran For Dummies By Sohaib Sultan PageQuran: The Surah Al-Nisa, Ch4:v2Quran: Surat Al-Hujurat [49:13]Quran: Surat An-Nisa' [4:1] Tribal and nationalistic differences were discouraged. But after Muhammad's passing, the old tribal differences between the Arabs started to resurface. Following the Roman–Persian Wars and the Byzantine–Sassanid Wars, deep rooted differences between Iraq, formerly under the Persian Sassanid Empire, and Syria, formerly under the Byzantine Empire, also existed. Each wanted the capital of the newly established Islamic State to be in their area.Iraq a Complicated State: Iraq's Freedom War By Karim M. S. Al-Zubaidi Page 32 Previously, the second caliph Umar ibn Al-Khattab was very firm on the governors and his spies kept an eye on them. If he felt that a governor or a commander was becoming attracted to wealth, he had him removed from his position.Arab Socialism. [al-Ishtirakiyah Al-?Arabiyah]: A Documentary Survey By Sami A. Hanna, George H. Gardner, page 271
Early Muslim armies stayed in encampments away from cities because Umar ibn Al-Khattab feared that they might get attracted to wealth and luxury. In the process, they might turn away from the worship of God and start accumulating wealth and establishing dynasties.Arab Socialism. [al-Ishtirakiyah Al-Arabiyah]: A Documentary Survey By Sami A. Hanna, George H. Gardner, page 271 Men Around the Messenger By Khalid Muhammad Khalid, Muhammad Khali Khalid, page 117 The Cambridge History of Islam, Volume 2 edited By P. M. Holt, Ann K. S. Lambton, Bernard Lewis, page 605The Early Caliphate By Maulana Muhammad Ali When Uthman ibn al-Affan became very old, Marwan I, a relative of Muawiyah I, slipped into the vacuum, became his secretary, slowly assumed more control and relaxed some of these restrictions. Marwan I had previously been excluded from positions of responsibility. In 656, Muhammad ibn Abi Bakr, the son of Abu Bakr, the adopted son of Ali ibn Abi Talib, and the great grandfather of Ja'far al-Sadiq, showed some Egyptians the house of Uthman ibn al-Affan. Later the Egyptians ended up killing Uthman ibn al-Affan.
After the assassination of Uthman in 656, Ali, a member of the Quraysh tribe and the cousin and son-in-law of Muhammad, was elected as the caliph. He soon met with resistance from several factions, owing to his relative political inexperience. Ali moved his capital from Medina to Kufa. The resulting conflict, which lasted from 656 until 661, is known as the First Fitna ("civil war"). Muawiyah I, the governor of Syria, a relative of Uthman ibn al-Affan and Marwan I, wanted the culprits arrested. Marwan I manipulated everyone and created conflict. Aisha, the wife of Muhammad, and Talhah and Al-Zubayr, two of the companions of Muhammad, went to Basra to tell Ali to arrest the culprits who murdered Uthman. Marwan I and other people who wanted conflict manipulated everyone to fight. The two sides clashed at the Battle of the Camel in 656, where Ali won a decisive victory.
Following this battle, Ali fought a battle against Muawiyah, known as the Battle of Siffin. The battle was stopped before either side had achieved victory, and the two parties agreed to arbitrate their dispute. After the battle Amr ibn al-As was appointed by Muawiyah as an arbitrator, and Ali appointed Abu Musa Ashaari. Seven months later, in February 658, the two arbitrators met at Adhruh, about 10 miles northwest of Maan in Jordan. Amr ibn al-As convinced Abu Musa Ashaari that both Ali and Muawiyah should step down and a new Caliph be elected. Ali and his supporters were stunned by the decision, which had lowered the Caliph to the status of the rebellious Muawiyah I. Ali was therefore outwitted by Muawiyah and Amr. Ali refused to accept the verdict and found himself technically in breach of his pledge to abide by the arbitration. This put Ali in a weak position even amongst his own supporters. The most vociferous opponents in Ali's camp were the very same people who had forced Ali into the ceasefire. They broke away from Ali's force, rallying under the slogan, "arbitration belongs to God alone." This group came to be known as the Kharijites ("those who leave"). In 659 Ali's forces and the Kharijites met in the Battle of Nahrawan. Although Ali won the battle, the constant conflict had begun to affect his standing, and in the following years some Syrians seem to have acclaimed Muawiyah as a rival caliph.A Chronology of Islamic History 570-1000 CE By H U Rahman Page 59
Ali was assassinated in 661 by a Kharijite partisan. Six months later in the same year, in the interest of peace, Hasan ibn Ali, the grandson of Muhammad and the second Imam for the Shias, highly regarded for his wisdom and as a peacemaker, made a peace treaty with Muawiyah I. In the Hasan-Muawiya treaty, Hasan ibn Ali handed over power to Muawiya on the condition that he be just to the people, keep them safe and secure, and not establish a dynasty after his death.The Succession to Muhammad: A Study of the Early Caliphate By Wilferd Madelung, page 232Sahih Al Bukhari Volume 3, Book 49 (Peacemaking), Number 867 This brought to an end the era of the Rightly Guided Caliphs for the Sunnis, and Hasan ibn Ali was also the last Imam for the Shias to be a Caliph. Following this, Mu'awiyah broke the conditions of the agreement and began the Umayyad dynasty, with its capital in Damascus.
After Mu'awiyah's death in 680, conflict over succession broke out again in a civil war known as the "Second Fitna". After setting the other contenders to fight one another,Sahih Al Bukhari Volume 6, Book 60, Number 352 Marwan I, who was also an Umayyad, later took control of the dynasty.
Syria would remain the base of Umayyad power until the end of the dynasty in 750. However, the dynasty was reborn in Córdoba (Al-Andalus, in today's Portugal and Spain) in the form of an emirate and then a caliphate, lasting until AD 1031. Muslim rule continued in Iberia for another 500 years in several forms: Taifas, Berber kingdoms, and under the Kingdom of Granada until the 16th century.
In the year 712, Muhammad bin Qasim, an Umayyad general, sailed from the Persian Gulf into Sindh, in modern-day Pakistan, and conquered both the Sindh and the Punjab regions along the Indus river. The conquest of Sindh and Punjab, although costly, was a major gain for the Umayyad Caliphate. However, further gains were halted by Hindu kingdoms in India during the Caliphate campaigns in India. The Arabs tried to invade India but were defeated by the north Indian king Nagabhata of the Gurjara Pratihara Dynasty and by the south Indian Emperor Vikramaditya II of the Chalukya dynasty in the early 8th century. After this the Arab chroniclers admit that the Caliph Mahdi "gave up the project of conquering any part of India."
During the later period of Muslim rule in Iberia, and particularly from 1031 under the Ta'ifa system of Islamic emirates (princedoms) in the southern half of Iberia, the Emirate/Sultanate of Granada maintained its independence largely through the payment of tributes to the northern Christian kingdoms, which from 1031 began to expand gradually southwards at its expense.
Muslim rule in Iberia came to an end on 2 January 1492 with the conquest of the Nasrid kingdom of Granada. The last Muslim ruler of Granada, Muhammad XII, better known as Boabdil, surrendered his kingdom to Ferdinand II of Aragon and Isabella I of Castile, the Catholic Monarchs, los Reyes Católicos.
History
Sufyanids
Muawiyah's personal dynasty, the "Sufyanids" (descendants of Abu Sufyan), reigned from 661 to 684, until his grandson Muawiya II. The reign of Muawiyah I was marked by internal security and external expansion. On the internal front, only one major rebellion is recorded, that of Hujr ibn Adi in Kufa. Hujr ibn Adi supported the claims of the descendants of Ali to the caliphate, but his movement was easily suppressed by the governor of Iraq, Ziyad ibn Abi Sufyan.
Muawiyah also encouraged peaceful coexistence with the Christian communities of Syria, granting his reign with "peace and prosperity for Christians and Arabs alike", and one of his closest advisers was Sarjun, the father of John of Damascus. At the same time, he waged unceasing war against the Byzantine Roman Empire. During his reign, Rhodes and Crete were occupied, and several assaults were launched against Constantinople. After their failure, and faced with a large-scale Christian uprising in the form of the Mardaites, Muawiyah concluded a peace with Byzantium. Muawiyah also oversaw military expansion in North Africa (the foundation of Kairouan) and in Central Asia (the conquest of Kabul, Bukhara, and Samarkand).
Following Muawiyah's death in 680, he was succeeded by his son, Yazid I. The hereditary accession of Yazid was opposed by a number of prominent Muslims, most notably Abd-Allah ibn al-Zubayr, son of one of the companions of Muhammad, and Husayn ibn Ali, grandson of Muhammad and younger son of Ali. The resulting conflict is known as the Second Fitna.
In 680 Ibn al-Zubayr fled Medina for Mecca. Hearing about Husayn's opposition to Yazid I, the people of Kufa sent word to Husayn asking him to take over with their support. Al-Husayn sent his cousin Muslim bin Aqeel to verify whether they would rally behind him. When the news reached Yazid I, he sent Ubayd-Allah bin Ziyad, ruler of Basrah, with the instruction to prevent the people of Kufa from rallying behind Al-Husayn. Ubayd-Allah bin Ziyad managed to disperse the crowd that gathered around Muslim bin Aqeel and captured him. Realizing that Ubayd-Allah bin Ziyad had been instructed to prevent Husayn from establishing support in Kufa, Muslim bin Aqeel requested that a message be sent to Husayn to stop him from travelling to Kufa. The request was denied and Ubayd-Allah bin Ziyad killed Muslim bin Aqeel. While Ibn al-Zubayr would stay in Mecca until his death, Husayn decided to travel on to Kufa with his family, unaware of the lack of support there. Husayn and his family were intercepted by Yazid I's forces led by Amru bin Saad, Shamar bin Thi Al-Joshan, and Hussain bin Tamim, who fought Al-Husayn and his male family members until they were killed. There were 200 people in Husayn's caravan, many of whom were women, including his sisters, wives, daughters and their children. The women and children from Husayn's camp were taken as prisoners of war and led back to Damascus to be presented to Yazid I. They remained imprisoned until public opinion turned against him as word of Husayn's death and his family's capture spread. They were then granted passage back to Medina. The sole adult male survivor from the caravan was Ali ibn Husayn, who was too ill with fever to fight when the caravan was attacked.Kitab Al-Irshad by Historian Sheikh Mufid
Following the death of Husayn, Ibn al-Zubayr, although remaining in Mecca, was associated with two opposition movements, one centered in Medina and the other around Kharijites in Basra and Arabia. Because Medina had been home to Muhammad and his family, including Husayn, word of his death and the imprisonment of his family led to a large opposition movement. In 683, Yazid dispatched an army to subdue both movements. The army suppressed the Medinese opposition at the Battle of al-Harrah. The Grand Mosque in Medina was severely damaged and widespread pillaging caused deep-seated dissent. Yazid's army continued on and laid siege to Mecca. At some point during the siege, the Kaaba was badly damaged in a fire. The destruction of the Kaaba and Grand Mosque became a major cause for censure of the Umayyads in later histories of the period.
Yazid died while the siege was still in progress, and the Umayyad army returned to Damascus, leaving Ibn al-Zubayr in control of Mecca. Yazid's son Muawiya II (683–84) initially succeeded him but seems to have never been recognized as caliph outside of Syria. Two factions developed within Syria: the Confederation of Qays, who supported Ibn al-Zubayr, and the Quda'a, who supported Marwan, a descendant of Umayya via Wa'il ibn Umayyah. The partisans of Marwan triumphed at a battle at Marj Rahit, near Damascus, in 684, and Marwan became caliph shortly thereafter.
First Marwanids
Marwan's first task was to assert his authority against the rival claims of Ibn al-Zubayr, who was at this time recognized as caliph throughout most of the Islamic world. Marwan recaptured Egypt for the Umayyads, but died in 685, having reigned for only nine months.
Marwan was succeeded by his son, Abd al-Malik (685–705), who reconsolidated Umayyad control of the caliphate. The early reign of Abd al-Malik was marked by the revolt of Al-Mukhtar, which was based in Kufa. Al-Mukhtar hoped to elevate Muhammad ibn al-Hanafiyyah, another son of Ali, to the caliphate, although Ibn al-Hanafiyyah himself may have had no connection to the revolt. The troops of al-Mukhtar engaged in battles both with the Umayyads in 686, defeating them at the river Khazir near Mosul, and with Ibn al-Zubayr in 687, at which time the revolt of al-Mukhtar was crushed. In 691, Umayyad troops reconquered Iraq, and in 692 the same army captured Mecca. Ibn al-Zubayr was killed in the attack.
The second major event of the early reign of Abd al-Malik was the construction of the Dome of the Rock in Jerusalem. Although the chronology remains somewhat uncertain, the building seems to have been completed in 692, which means that it was under construction during the conflict with Ibn al-Zubayr. This has led some historians, both medieval and modern, to suggest that the Dome of the Rock was built as a destination for pilgrimage to rival the Kaaba, which was under the control of Ibn al-Zubayr.
Abd al-Malik is credited with centralizing the administration of the Caliphate and with establishing Arabic as its official language. He also introduced a uniquely Muslim coinage, marked by its aniconic decoration, which supplanted the Byzantine and Sasanian coins that had previously been in use. Abd al-Malik also recommenced offensive warfare against Byzantium, defeating the Byzantines at Sebastopolis and recovering control over Armenia and Caucasian Iberia.
Following Abd al-Malik's death, his son, Al-Walid I (705–15), became caliph. Al-Walid was also active as a builder, sponsoring the construction of Al-Masjid al-Nabawi in Medina and the Great Mosque of Damascus.
A major figure during the reigns of both al-Walid and Abd al-Malik was the Umayyad governor of Iraq, Al-Hajjaj bin Yousef. Many Iraqis remained resistant to Umayyad rule, and to maintain order al-Hajjaj imported Syrian troops, which he housed in a new garrison town, Wasit. These troops became crucial in the suppression of a revolt led by an Iraqi general, Ibn al-Ash'ath, in the early eighth century.
thumb|Coin of the Umayyad Caliphate, based on a Sassanian prototype. Copper falus, Aleppo, Syria, circa 695.
Al-Walid was succeeded by his brother, Sulayman (715–17), whose reign was dominated by a protracted siege of Constantinople. The failure of the siege marked the end of serious Arab ambitions against the Byzantine capital. However, the first two decades of the eighth century witnessed the continuing expansion of the Caliphate, which pushed into the Iberian Peninsula in the west, and into Transoxiana (under Qutayba ibn Muslim) and northern India in the east.
Sulayman was succeeded by his cousin, Umar ibn Abd al-Aziz (717–20), whose position among the Umayyad caliphs is somewhat unique. He is the only Umayyad ruler to have been recognized by subsequent Islamic tradition as a genuine caliph (khalifa) and not merely as a worldly king (malik).
Umar is honored for his attempt to resolve the fiscal problems attendant upon conversion to Islam. During the Umayyad period, the majority of people living within the caliphate were not Muslim, but Christian, Jewish, Zoroastrian, or members of other small groups. These religious communities were not forced to convert to Islam, but were subject to a tax (jizyah) which was not imposed upon Muslims. This situation may actually have made widespread conversion to Islam undesirable from the point of view of state revenue, and there are reports that provincial governors actively discouraged such conversions. It is not clear how Umar attempted to resolve this situation, but the sources portray him as having insisted on like treatment of Arab and non-Arab (mawali) Muslims, and on the removal of obstacles to the conversion of non-Arabs to Islam.
After the death of Umar, another son of Abd al-Malik, Yazid II (720–24) became caliph. Yazid is best known for his "iconoclastic edict", which ordered the destruction of Christian images within the territory of the Caliphate. In 720, another major revolt arose in Iraq, this time led by Yazid ibn al-Muhallab.
Hisham and the limits of military expansion
thumb|left|North gate of the city of Resafa, site of Hisham's palace and court.
The final son of Abd al-Malik to become caliph was Hisham (724–43), whose long and eventful reign was above all marked by the curtailment of military expansion. Hisham established his court at Resafa in northern Syria, which was closer to the Byzantine border than Damascus, and resumed hostilities against the Byzantines, which had lapsed following the failure of the last siege of Constantinople. The new campaigns resulted in a number of successful raids into Anatolia, but also in a major defeat (the Battle of Akroinon), and did not lead to any significant territorial expansion.
From the caliphate's north-western African bases, a series of raids on coastal areas of the Visigothic Kingdom paved the way to the permanent occupation of most of Iberia by the Umayyads (starting in 711), and on into south-eastern Gaul (last stronghold at Narbonne in 759). Hisham's reign witnessed the end of expansion in the west, following the defeat of the Arab army by the Franks at the Battle of Tours in 732. In 739 a major Berber Revolt broke out in North Africa, which was subdued only with difficulty, but it was followed by the collapse of Umayyad authority in al-Andalus. In India the Arab armies were defeated by the south Indian Chalukya dynasty and by the north Indian Pratiharas Dynasty in the 8th century and the Arabs were driven out of India.The Cambridge Shorter History of India p.131-132Early India: From the Origins to A.D. 1300 by Romila Thapar p.333An Atlas and Survey of South Asian History by Karl J. Schmidt p.34 In the Caucasus, the confrontation with the Khazars peaked under Hisham: the Arabs established Derbent as a major military base and launched several invasions of the northern Caucasus, but failed to subdue the nomadic Khazars. The conflict was arduous and bloody, and the Arab army even suffered a major defeat at the Battle of Marj Ardabil in 730. Marwan ibn Muhammad, the future Marwan II, finally ended the war in 737 with a massive invasion that is reported to have reached as far as the Volga, but the Khazars remained unsubdued.
Hisham suffered still worse defeats in the east, where his armies attempted to subdue both Tokharistan, with its center at Balkh, and Transoxiana, with its center at Samarkand. Both areas had already been partially conquered, but remained difficult to govern. Once again, a particular difficulty concerned the question of the conversion of non-Arabs, especially the Sogdians of Transoxiana. Following the Umayyad defeat in the "Day of Thirst" in 724, Ashras ibn 'Abd Allah al-Sulami, governor of Khurasan, promised tax relief to those Sogdians who converted to Islam, but went back on his offer when it proved too popular and threatened to reduce tax revenues. Discontent among the Khurasani Arabs rose sharply after the losses suffered in the Battle of the Defile in 731, and in 734, al-Harith ibn Surayj led a revolt that received broad backing from Arabs and natives alike, capturing Balkh but failing to take Merv. After this defeat, al-Harith's movement seems to have been dissolved, but the problem of the rights of non-Arab Muslims would continue to plague the Umayyads.
Third Fitna
Hisham was succeeded by Al-Walid II (743–44), the son of Yazid II. Al-Walid is reported to have been more interested in earthly pleasures than in religion, a reputation that may be confirmed by the decoration of the so-called "desert palaces" (including Qusayr Amra and Khirbat al-Mafjar) that have been attributed to him. He quickly attracted the enmity of many, both by executing a number of those who had opposed his accession, and by persecuting the Qadariyya.
In 744, Yazid III, a son of al-Walid I, was proclaimed caliph in Damascus, and his army tracked down and killed al-Walid II. Yazid III has received a certain reputation for piety, and may have been sympathetic to the Qadariyya. He died a mere six months into his reign.
Yazid had appointed his brother, Ibrahim, as his successor, but Marwan II (744–50), the grandson of Marwan I, led an army from the northern frontier and entered Damascus in December 744, where he was proclaimed caliph. Marwan immediately moved the capital north to Harran, in present-day Turkey. A rebellion soon broke out in Syria, perhaps due to resentment over the relocation of the capital, and in 746 Marwan razed the walls of Homs and Damascus in retaliation.
Marwan also faced significant opposition from Kharijites in Iraq and Iran, who put forth first Dahhak ibn Qays and then Abu Dulaf as rival caliphs. In 747, Marwan managed to reestablish control of Iraq, but by this time a more serious threat had arisen in Khorasan.
Abbasid Revolution
thumb|left|The Caliphate at the beginning of the Abbasid revolt, before the Battle of the Zab.
The Hashimiyya movement (a sub-sect of the Kaysanites Shia), led by the Abbasid family, overthrew the Umayyad caliphate. The Abbasids were members of the Hashim clan, rivals of the Umayyads, but the word "Hashimiyya" seems to refer specifically to Abu Hashim, a grandson of Ali and son of Muhammad ibn al-Hanafiyya. According to certain traditions, Abu Hashim died in 717 in Humeima in the house of Muhammad ibn Ali, the head of the Abbasid family, and before dying named Muhammad ibn Ali as his successor. This tradition allowed the Abbasids to rally the supporters of the failed revolt of Mukhtar, who had represented themselves as the supporters of Muhammad ibn al-Hanafiyya.
Beginning around 719, Hashimiyya missions began to seek adherents in Khurasan. Their campaign was framed as one of proselytism (dawah). They sought support for a "member of the family" of Muhammad, without making explicit mention of the Abbasids. These missions met with success both among Arabs and non-Arabs (mawali), although the latter may have played a particularly important role in the growth of the movement.
Around 746, Abu Muslim assumed leadership of the Hashimiyya in Khurasan. In 747, he successfully initiated an open revolt against Umayyad rule, which was carried out under the sign of the black flag. He soon established control of Khurasan, expelling its Umayyad governor, Nasr ibn Sayyar, and dispatched an army westwards. Kufa fell to the Hashimiyya in 749, the last Umayyad stronghold in Iraq, Wasit, was placed under siege, and in November of the same year Abul Abbas as-Saffah was recognized as the new caliph in the mosque at Kufa. At this point Marwan mobilized his troops from Harran and advanced toward Iraq. In January 750 the two forces met in the Battle of the Zab, and the Umayyads were defeated. Damascus fell to the Abbasids in April, and in August, Marwan was killed in Egypt.
thumb|The Great Mosque of Córdoba in Spain, built by Banu Umayya
The victors desecrated the tombs of the Umayyads in Syria, sparing only that of Umar II, and most of the remaining members of the Umayyad family were tracked down and killed. When the Abbasids declared amnesty for members of the Umayyad family, eighty gathered to receive pardons, and all were massacred. One grandson of Hisham, Abd al-Rahman I, survived and established a kingdom in Al-Andalus (Moorish Iberia), proclaiming his family to be the Umayyad Caliphate revived.
Previté-Orton argues that the reason for the decline of the Umayyads was the rapid expansion of Islam. During the Umayyad period, mass conversions brought Persians, Berbers, Copts, and Arameans to Islam. These mawali (clients) were often better educated and more civilised than their Arab invaders. The new converts, on the basis of the equality of all Muslims, transformed the political landscape. Previté-Orton also argues that the feud between Syria and Iraq further weakened the empire.Previté-Orton (1971), vol. 1, pg. 239
Umayyad Administration
One of Muawiya's first tasks was to create a stable administration for the empire. He followed the main ideas of the Byzantine Empire which had ruled the same region previously, and had four main governmental branches: political and military affairs, tax collection, and religious administration. Each of these was further subdivided into more branches, offices, and departments.
Provinces
Geographically, the empire was divided into several provinces, the borders of which changed numerous times during the Umayyad reign. Each province had a governor appointed by the khalifah. The governor was in charge of the religious officials, army leaders, police, and civil administrators in his province. Local expenses were paid for by taxes coming from that province, with the remainder each year being sent to the central government in Damascus. As the central power of the Umayyad rulers waned in the later years of the dynasty, some governors neglected to send the extra tax revenue to Damascus and created great personal fortunes.
Government workers
As the empire grew, the number of qualified Arab workers was too small to keep up with the rapid expansion of the empire. Therefore, Muawiya allowed many of the local government workers in conquered provinces to keep their jobs under the new Umayyad government. Thus, much of the local government's work was recorded in Greek, Coptic, and Persian. It was only during the reign of Abd al-Malik that government work began to be regularly recorded in Arabic.
Currency
thumb|Coin of the Umayyad Caliphate, based on a Sassanian prototype, 695.
thumb|right|A coin weight from the Umayyad Dynasty, dated 743, made of glass. One of the oldest Islamic objects in an American museum, the Walters Art Museum.
thumb|right|Golden coin of the Umayyad Caliphate, Iran
The Byzantine and Sassanid Empires relied on money economies before the Muslim conquest, and that system remained in effect during the Umayyad period. Pre-existing coins remained in use, but with phrases from the Quran stamped on them. In addition to this, the Umayyad government began to mint its own coins in Damascus (which were similar to pre-existing coins), the first coins minted by a Muslim government in history. Gold coins were called dinars while silver coins were called dirhams.
Central diwans
To assist the Caliph in administration there were six Boards at the Centre: Diwan al-Kharaj (the Board of Revenue), Diwan al-Rasa'il (the Board of Correspondence), Diwan al-Khatam (the Board of Signet), Diwan al-Barid (the Board of Posts), Diwan al-Qudat (the Board of Justice) and Diwan al-Jund (the Military Board)
Diwan al-Kharaj
The Central Board of Revenue administered the entire finances of the empire. It also imposed and collected taxes and disbursed revenue.
Diwan al-Rasa'il
A regular Board of Correspondence was established under the Umayyads. It issued state missives and circulars to the Central and Provincial Officers. It co-ordinated the work of all Boards and dealt with all correspondence as the chief secretariat.
Diwan al-Khatam
In order to check forgery, Diwan al-Khatam (Bureau of Registry), a kind of state chancellery, was instituted by Mu'awiyah. It used to make and preserve a copy of each official document before sealing and despatching the original to its destination. Thus in the course of time a state archive developed in Damascus by the Umayyads under Abd al-Malik. This department survived till the middle of the Abbasid period.
Diwan al-Barid
Mu'awiyah introduced the postal service, Abd al-Malik extended it throughout his empire, and Walid made full use of it. The Umayyad Caliph Abd al-Malik developed a regular postal service. Umar bin Abdul-Aziz developed it further by building caravanserais at stages along the Khurasan highway. Relays of horses were used for the conveyance of dispatches between the caliph and his agents and officials posted in the provinces. The main highways were divided into stages, and each stage had horses, donkeys or camels ready to carry the post. The service primarily met the needs of government officials, but travellers and their important dispatches also benefited from the system. The postal carriages were also used for the swift transport of troops. They were able to carry fifty to a hundred men at a time. Under Governor Yusuf bin Umar, the postal department of Iraq cost 4,000,000 dirhams a year.
Diwan al-Qudat
In the early period of Islam, justice was administered by Muhammad and the orthodox Caliphs in person. After the expansion of the Islamic State, Umar al-Faruq had to separate the judiciary from the general administration and appointed the first qadi in Egypt as early as AD 643/23 AH. After 661, a series of judges succeeded one after another in Egypt under the Umayyad Caliphs, down to Hisham and Walid II.
Diwan al-Jund
The Diwan of Umar, assigning annuities to all Arabs and to the Muslim soldiers of other races, underwent a change in the hands of the Umayyads. The Umayyads meddled with the register, and recipients came to regard their pensions as a subsistence allowance even without being in active service. Hisham reformed it and paid only those who participated in battle.
On the pattern of the Byzantine system, the Umayyads reformed their army organization in general and divided it into five corps: the centre, two wings, vanguards and rearguards, following the same formation while on the march or on the battlefield. Marwan II (744–50) abandoned the old division and introduced the kurdus (cohort), a small compact body.
The Umayyad troops were divided into three divisions: infantry, cavalry and artillery. Arab troops were dressed and armed in Greek fashion. The Umayyad cavalry used plain and round saddles. The artillery used arradah (ballista), manjaniq (the mangonel) and dabbabah or kabsh (the battering ram). The heavy engines, siege machines and baggage were carried on camels behind the army.
Social Organization
thumb|Ivory (circa 8th century) discovered in the Abbasid homestead in Humeima, Jordan. The style indicates an origin in northeastern Iran, the base of Hashimiyya military power.R.M. Foote et al., Report on Humeima excavations, in V. Egan and P.M. Bikai, "Archaeology in Jordan", American Journal of Archaeology 103 (1999), p. 514.
The Umayyad Caliphate exhibited four main social classes:
Muslim Arabs
Muslim non-Arabs (clients of the Muslim Arabs)
Non-Muslim free persons (Christians, Jews, and Zoroastrians)
Slaves
The Muslim Arabs were at the top of the society and saw it as their duty to rule over the conquered areas. Despite the fact that Islam teaches the equality of all Muslims, the Arab Muslims held themselves in higher esteem than Muslim non-Arabs and generally did not mix with other Muslims.
The inequality of Muslims in the empire led to social unrest. As Islam spread, more and more of the Muslim population was constituted of non-Arabs. This caused tension as the new converts were not given the same rights as Muslim Arabs. Also, as conversions increased, tax revenues from non-Muslims decreased to dangerous lows. These issues continued to grow until they helped cause the Abbasid Revolt in the 740s.
Non-Muslims
Non-Muslim groups in the Umayyad Caliphate, which included Christians, Jews, Zoroastrians, and pagan Berbers, were called dhimmis. They were given a legally protected status as second-class citizens as long as they accepted and acknowledged the political supremacy of the ruling Muslims. They were allowed to have their own courts, and were given freedom of their religion within the empire. Although they could not hold the highest public offices in the empire, they had many bureaucratic positions within the government. Christians and Jews still continued to produce great theological thinkers within their communities, but as time wore on, many of the intellectuals converted to Islam, leading to a lack of great thinkers in the non-Muslim communities.
Legacy
Currently many Sunni scholars agree that Muawiyah's family, including his progenitors, Abu Sufyan ibn Harb and Hind bint Utbah, were originally opponents of Islam and particularly of Muhammad until the Conquest of Mecca.
However, many early history books, like the Islamic Conquest of Syria (Fatuhusham) by al-Imam al-Waqidi, state that after their conversion to Islam, Muawiyah's father Abu Sufyan ibn Harb and his brother Yazid ibn Abi Sufyan were appointed as commanders in the Muslim armies by Muhammad. Muawiyah, Abu Sufyan ibn Harb, Yazid ibn Abi Sufyan and Hind bint UtbahIslamic Conquest of Syria A translation of Fatuhusham by al-Imam al-Waqidi Translated by Mawlana Sulayman al-Kindi, page 325 al-Baladhuri 892 [19] Medieval Sourcebook: Al-Baladhuri: The Battle Of The Yarmuk (636) Islamic Conquest of Syria A translation of Fatuhusham by al-Imam al-Waqidi Translated by Mawlana Sulayman al-Kindi, page 331 to 334 Islamic Conquest of Syria A translation of Fatuhusham by al-Imam al-Waqidi Translated by Mawlana Sulayman al-Kindi, page 343-344 al-Baladhuri 892 [20] from The Origins of the Islamic State, being a translation from the Arabic of the Kitab Futuh al-Buldha of Ahmad ibn-Jabir al-Baladhuri, trans. by P. K. Hitti and F. C. Murgotten, Studies in History, Economics and Public Law, LXVIII (New York, Columbia University Press, 1916 and 1924), I, 207-211 fought in the Battle of Yarmouk. The defeat of the Byzantine Emperor Heraclius at the Battle of Yarmouk opened the way for the Muslim expansion into Jerusalem and Syria.
In 639, Muawiyah was appointed as the governor of Syria by the second caliph Umar after his brother the previous governor Yazid ibn Abi Sufyan and the governor before him Abu Ubaidah ibn al-Jarrah died in a plague along with 25,000 other people.A Chronology Of Islamic History 570-1000 CE, By H.U. Rahman 1999 Page 40 'Amr ibn al-'As was sent to take on the Roman Army in Egypt. Fearing an attack by the Romans, Umar asked Muawiyah to defend against a Roman attack.
With limited resources, Muawiyah went about creating allies. Muawiyah married Maysum, the daughter of the chief of the Kalb tribe, a large Jacobite Christian Arab tribe in Syria. His marriage to Maysum was politically motivated. The Kalb tribe had remained largely neutral when the Muslims first went into Syria.Encyclopedia of Islam Volume VII, page 265, By Bosworth After the plague that killed much of the Muslim army in Syria, by marrying Maysum Muawiyah started to use the Jacobite Christians against the Romans. Muawiya's wife Maysum (Yazid's mother) was also a Jacobite Christian.A Chronology Of Islamic History 570-1000 CE, By H.U. Rahman 1999 Page 72 With limited resources and the Byzantines just over the border, Muawiyah worked in cooperation with the local Christian population. To stop Byzantine harassment from the sea during the Arab-Byzantine Wars, in 649 Muawiyah set up a navy, manned by Monophysite Christian, Coptic and Jacobite Syrian Christian sailors and Muslim troops.
Muawiya was one of the first to realize the full importance of having a navy; as long as the Byzantine fleet could sail the Mediterranean unopposed, the coastline of Syria, Palestine and Egypt would never be safe. Muawiyah, along with Abdullah ibn Sa'd, the new governor of Egypt, successfully persuaded Uthman to give them permission to construct a large fleet in the dockyards of Egypt and Syria.A Chronology Of Islamic History 570-1000 CE, By H.U. Rahman 1999 Page 48-49The Great Arab Conquests By Hugh Kennedy, page 326
The first real naval engagement between the Muslim and the Byzantine navies was the so-called Battle of the Masts (Dhat al-sawari), or Battle of Phoenix, off the Lycian coast in 655.The Great Arab Conquests By Hugh Kennedy, page 327 It resulted in the defeat of the Byzantine navy and opened up the Mediterranean.
Muawiyah came to power after the death of Ali and established a dynasty.
Historical significance
The Umayyad caliphate was marked both by territorial expansion and by the administrative and cultural problems that such expansion created. Despite some notable exceptions, the Umayyads tended to favor the rights of the old Arab families, and in particular their own, over those of newly converted Muslims (mawali). Therefore, they held to a less universalist conception of Islam than did many of their rivals. As G.R. Hawting has written, "Islam was in fact regarded as the property of the conquering aristocracy."G.R. Hawting, The first dynasty of Islam: the Umayyad caliphate, AD 661–750 (London, 2000), 4.
During the period of the Umayyads, Arabic became the administrative language. State documents and currency were issued in the language. Mass conversions brought a large influx of Muslims to the caliphate. The Umayyads also constructed famous buildings such as the Dome of the Rock at Jerusalem, and the Umayyad Mosque at Damascus.
According to one common view, the Umayyads transformed the caliphate from a religious institution (during the rashidun) to a dynastic one.Previté-Orton (1971), pg 236 However, the Umayyad caliphs do seem to have understood themselves as the representatives of God on earth, and to have been responsible for the "definition and elaboration of God's ordinances, or in other words the definition or elaboration of Islamic law."P. Crone and M. Hinds, God's caliph: religious authority in the first centuries of Islam (Cambridge, 1986), p. 43.
The Umayyads have met with a largely negative reception from later Islamic historians, who have accused them of promoting a kingship (mulk, a term with connotations of tyranny) instead of a true caliphate (khilafa). In this respect it is notable that the Umayyad caliphs referred to themselves not as khalifat rasul Allah ("successor of the messenger of God", the title preferred by the tradition), but rather as khalifat Allah ("deputy of God"). The distinction seems to indicate that the Umayyads "regarded themselves as God's representatives at the head of the community and saw no need to share their religious power with, or delegate it to, the emergent class of religious scholars."G.R. Hawting, The first dynasty of Islam: the Umayyad caliphate, AD 661–750 (London, 2000), 13. In fact, it was precisely this class of scholars, based largely in Iraq, that was responsible for collecting and recording the traditions that form the primary source material for the history of the Umayyad period. In reconstructing this history, therefore, it is necessary to rely mainly on sources, such as the histories of Tabari and Baladhuri, that were written in the Abbasid court at Baghdad.
Modern Arab nationalism regards the period of the Umayyads as part of the Arab Golden Age which it sought to emulate and restore.
This is particularly true of Syrian nationalists and the present-day state of Syria, centered like that of the Umayyads on Damascus.
White, one of the four Pan-Arab colors which appear in various combinations on the flags of most Arab countries, is considered as representing the Umayyads.
Theological opinions concerning the Umayyads
Sunni opinions
Many Muslims criticized the Umayyads for having too many non-Muslim, former Roman administrators in their government. St John of Damascus was also a high administrator in the Umayyad administration.A Companion to the History of the Middle East edited by Youssef M. Choueiri, page 48 As the Muslims took over cities, they left the people's political representatives, the Roman tax collectors and the administrators in place. The taxes to the central government were calculated and negotiated by the people's political representatives. The central government was paid for the services it provided, and the local government kept the money for the services it provided. Many Christian cities also used some of the taxes to maintain their churches and run their own organizations. Later, the Umayyads were criticized by some Muslims for not reducing the taxes of the people who converted to Islam. These new converts continued to pay the same taxes that had previously been negotiated.Student Resources, Chapter 12: The First Global Civilization: The Rise and Spread of Islam, The Arab Empire of the Umayyads - Converts and "People of the Book"
Later, when Umar ibn Abd al-Aziz came to power, he reduced these taxes. He is therefore praised as one of the greatest Muslim rulers after the four Rightly Guided Caliphs. Imam Abu Muhammad Abdullah ibn Abdul Hakam, who died in 829 and wrote a biography of Umar ibn Abdul Aziz,Umar Ibn Adbul Aziz By Imam Abu Muhammad Adbullah ibn Abdul Hakam died 214 AH 829 C.E. Publisher Zam Zam Publishers Karachi stated that the reduction in these taxes stimulated the economy and created wealth, but it also reduced the government budget, which in turn led to a reduction in the defense budget.
Umar ibn Abd al-Aziz is the only Umayyad ruler (among the Caliphs of Damascus) who is unanimously praised by Sunni sources for his devout piety and justice. In his efforts to spread Islam he established liberties for the Mawali by abolishing the jizya tax for converts to Islam. Imam Abu Muhammad Abdullah ibn Abdul Hakam stated that Umar ibn Abd al-Aziz also stopped the personal allowance offered to his relatives, stating that he could only give them an allowance if he gave an allowance to everyone else in the empire. Umar ibn Abd al-Aziz was later poisoned in the year 720. When successive governments tried to reverse his tax policies, it created rebellion.
Early literature
The book Al Muwatta by Imam Malik was written in the early Abbasid period in Medina. It does not contain any anti-Umayyad content because it was concerned with what the Quran and Muhammad said, and was not a history book about the Umayyads.
Even the earliest pro-Shia accounts of al-Masudi are more balanced. al-Masudi in Ibn Hisham is the earliest Shia account of Muawiyah. He recounted that Muawiyah spent a great deal of time in prayer, in spite of the burden of managing a large empire.Muawiya Restorer of the Muslim Faith By Aisha Bewley Page 41
Az-Zuhri stated that Muawiya led the Hajj Pilgrimage with the people twice during his era as caliph.
Books written in the early Abbasid period like al-Baladhuri's "The Origins of the Islamic State" provide a more accurate and balanced history. Ibn Hisham also wrote about these events.
Much of the anti-Umayyad literature started to appear in the later Abbasid period in Persia.
After the Abbasids killed off most of the Umayyads and destroyed the graves of the Umayyad rulers, apart from those of Muawiyah and Umar ibn Abdul Aziz, the history books written during the later Abbasid period were more anti-Umayyad. The Abbasids justified their rule by saying that their ancestor Abd Allah ibn Abbas was a cousin of Muhammad.
The books written later in the Abbasid period in Iran are more anti-Umayyad. Iran was Sunni at the time. There was much anti-Arab feeling in Iran after the fall of the Persian empire, and this feeling also influenced the books on Islamic history. Al-Tabari's history was also written in Iran during that period. It was a huge collection including all the texts that he could find, from all the sources, preserving everything for future generations to codify and to judge whether the histories were true or false.
Shi'a opinions
The negative view of the Umayyads by Shias is briefly expressed in the Shi'a book "Sulh al-Hasan".Sulh al-HasanChapter 24 According to some sources Ali described them as the worst Fitna.Sermon 92
Bahá'í standpoint
Asked for an explanation of the prophecies in the Book of Revelation (12:3), `Abdu'l-Bahá suggests in Some Answered Questions that the "great red dragon, having seven heads and ten horns, and seven crowns upon his heads," refers to the Umayyad caliphs who "rose against the religion of Prophet Muhammad and against the reality of Ali".
The seven heads of the dragon are symbolic of the seven provinces of the lands dominated by the Umayyads: Damascus, Persia, Arabia, Egypt, Africa, Andalusia, and Transoxania. The ten horns represent the ten names of the leaders of the Umayyad dynasty: Abu Sufyan, Muawiya, Yazid, Marwan, Abd al-Malik, Walid, Sulayman, Umar, Hisham, and Ibrahim. Some names were re-used, as in the case of Yazid II and Yazid III, which were not counted in this interpretation.
List of Umayyad Caliphs
thumb|Genealogic tree of the Umayyad family. In blue: Caliph Uthman, one of the four Rashidun Caliphs. In green, the Umayyad Caliphs of Damascus. In yellow, the Umayyad emirs of Córdoba. In orange, the Umayyad Caliphs of Córdoba. Abd Al-Rahman III was an emir until 929 when he proclaimed himself Caliph. Muhammad is included (in caps) to show the kinship of the Umayyads with him.
thumb|The miḥrāb of the Mosque of Córdoba, Spain.
Caliphs of Damascus
Muawiya I ibn Abu Sufyan: 28 July 661 – 27 April 680
Yazid I ibn Muawiyah: 27 April 680 – 11 November 683
Muawiya II ibn Yazid: 11 November 683 – June 684
Marwan I ibn al-Hakam: June 684 – 12 April 685
Abd al-Malik ibn Marwan: 12 April 685 – 8 October 705
al-Walid I ibn Abd al-Malik: 8 October 705 – 23 February 715
Sulayman ibn Abd al-Malik: 23 February 715 – 22 September 717
Umar ibn Abd al-Aziz: 22 September 717 – 4 February 720
Yazid II ibn Abd al-Malik: 4 February 720 – 26 January 724
Hisham ibn Abd al-Malik: 26 January 724 – 6 February 743
al-Walid II ibn Yazid: 6 February 743 – 17 April 744
Yazid III ibn al-Walid: 17 April 744 – 4 October 744
Ibrahim ibn al-Walid: 4 October 744 – 4 December 744
Marwan II ibn Muhammad (ruled from Harran in the Jazira): 4 December 744 – 25 January 750
Emirs of Cordoba
Abd al-Rahman I: 756–788
Hisham I: 788–796
al-Hakam I: 796–822
Abd ar-Rahman II: 822–852
Muhammad I: 852–886
Al-Mundhir: 886–888
Abdallah ibn Muhammad: 888–912
Abd ar-Rahman III: 912–929
Caliphs of Cordoba
Abd ar-Rahman III, as caliph: 929–961
Al-Hakam II: 961–976
Hisham II: 976–1008
Muhammad II: 1008–1009
Sulayman ibn al-Hakam: 1009–1010
Hisham II, restored: 1010–1012
Sulayman ibn al-Hakam, restored: 1012–1017
Abd ar-Rahman IV: 1021–1022
Abd ar-Rahman V: 1022–1023
Muhammad III: 1023–1024
Hisham III: 1027–1031
See also
History of Islam
List of Sunni Muslim dynasties
Umayya ibn Abd Shams
Umayyad family tree
References
Further reading
AL-Ajmi, Abdulhadi, The Umayyads, in Muhammad in History, Thought, and Culture: An Encyclopedia of the Prophet of God (2 vols.), Edited by C. Fitzpatrick and A. Walker, Santa Barbara, ABC-CLIO, 2014. ISBN 1610691776
A. Bewley, Mu'awiya, Restorer of the Muslim Faith (London, 2002)
Boekhoff-van der Voort, Nicolet, Umayyad Court, in Muhammad in History, Thought, and Culture: An Encyclopedia of the Prophet of God (2 vols.), Edited by C. Fitzpatrick and A. Walker, Santa Barbara, ABC-CLIO, 2014. ISBN 1610691776
P. Crone, Slaves on horses (Cambridge, 1980).
P. Crone and M.A. Cook, Hagarism (Cambridge, 1977).
F. M. Donner, The early Islamic conquests (Princeton, 1981).
G. R. Hawting, The first dynasty of Islam: the Umayyad caliphate, AD 661–750 Rutledge Eds. (London, 2000)
Previté-Orton, C. W (1971). The Shorter Cambridge Medieval History. Cambridge: Cambridge University Press.
J. Wellhausen, The Arab Kingdom and its fall (London, 2000).
External links
Umayyads
Umayyads – First caliphate dynasty
Timeline of Islamic caliphs by Happy Books
Interactive Family tree of Umayyah ibn Abd Shams by Happy Books
|-
|-
|-
|-
|-
|-
|-
Category:750 disestablishments
Category:History of Al-Andalus
Category:7th century in Iran
Category:8th century in Iran
Category:History of Islam
Category:History of Saudi Arabia
Category:History of the Mediterranean
Category:History of North Africa
Category:States in medieval Anatolia
Category:661 establishments
Estonia
Estonia, officially the Republic of Estonia, is a country in the Baltic region of Northern Europe. It is bordered to the north by the Gulf of Finland, to the west by the Baltic Sea, to the south by Latvia (343 km), and to the east by Lake Peipus and Russia (338.6 km). Official website of the Republic of Estonia (in Estonian) Across the Baltic Sea lie Sweden in the west and Finland in the north. The territory of Estonia consists of a mainland and 2,222 islands and islets in the Baltic Sea,Matthew Holehouse Estonia discovers it's actually larger after finding 800 new islands The Telegraph, 28 August 2015 and is influenced by a humid continental climate.
The territory of Estonia has been inhabited since at least 6500 BC, with Finno-Ugric speakers – the linguistic ancestors of modern Estonians – arriving no later than around 1800 BC.Petri Kallio 2006: Suomalais-ugrilaisen kantakielen absoluuttisesta kronologiasta. — Virittäjä 2006. (With English summary). Following centuries of successive Teutonic, Danish, Swedish, and Russian rule, Estonians experienced a national awakening that culminated in independence from the Russian Empire towards the end of World War I. During World War II, Estonia was occupied by the Soviet Union in 1940, then by Nazi Germany a year later, and was again annexed by the Soviets in 1944, after which it was reconstituted as the Estonian Soviet Socialist Republic. In 1988, during the Singing Revolution, the Estonian Supreme Soviet issued the Estonian Sovereignty Declaration in defiance of Soviet rule,For a legal evaluation of the incorporation of the three Baltic states into the Soviet Union, see K. Marek, Identity and Continuity of States in Public International Law (1968), 383–91 and independence was restored on 20 August 1991.
Modern Estonia is a democratic parliamentary republic divided into fifteen counties; its capital and largest city is Tallinn. With a population of 1.3 million, it is one of the least-populous member states of the European Union, Eurozone, North Atlantic Treaty Organization (NATO), OECD and Schengen Area.
Ethnic Estonians are a Finnic people, sharing close cultural ties with their northern neighbour, Finland, and the official language, Estonian, is a Finno-Ugric language closely related to Finnish and the Sami languages, and distantly to Hungarian.
Estonia is a developed country with an advanced, high-income economy that is among the fastest growing in the EU. Its Human Development Index ranks very highly, and it performs favourably in measurements of economic freedom, civil liberties and press freedom (3rd in the world in 2012 and 2007). The 2015 PISA test places Estonian high school students 3rd in the world, behind Singapore and Japan."Asian countries dominate, science teaching criticised in survey", Yahoo.com, 10 December 2016 (archived 7 December 2016)
Citizens of Estonia are provided with universal health care,Comparing Performance of Universal Health Care Countries, 2016 Fraser Institute free education,Estonia OECD 2016 and the longest paid maternity leave in the OECD. Since independence the country has rapidly developed its IT sector, becoming one of the world's most digitally advanced societies. In 2005 Estonia became the first nation to hold elections over the Internet, and in 2014 the first nation to provide e-residency.
Etymology
In the Estonian language, the oldest known endonym of the Estonians was maarahvas,Ariste, Paul (1956). Maakeel ja eesti keel. Eesti NSV Teaduste Akadeemia Toimetised 5: 117–24; Beyer, Jürgen (2007). Ist maarahvas (‚Landvolk‘), die alte Selbstbezeichnung der Esten, eine Lehnübersetzung? Eine Studie zur Begriffsgeschichte des Ostseeraums. Zeitschrift für Ostmitteleuropa-Forschung 56: 566–593. meaning "country people" or "people of the land".
One hypothesis regarding the modern name of Estonia is that it originated from the Aesti, a people described by the Roman historian Tacitus in his Germania (ca. 98 AD).Germania, Tacitus, Chapter XLV The historic Aesti were allegedly a Baltic people, whereas the modern Estonians are Finno-Ugric. The geographical areas associated with the Aesti and with Estonia also do not match, the Aesti having lived further to the south.
Ancient Scandinavian sagas refer to a land called Eistland, as the country is still called in Icelandic, and close to the Danish, German, Dutch, Swedish and Norwegian term Estland for the country. Early Latin and other ancient versions of the name are Estia and Hestia.
Esthonia was a common alternative English spelling prior to 1921.
History
Prehistory
thumb|left|Bronze Age stone-cist graves
Human settlement in Estonia became possible 13,000 to 11,000 years ago, when the ice from the last glacial era melted. The oldest known settlement in Estonia is the Pulli settlement, which was on the banks of the river Pärnu, near the town of Sindi, in south-western Estonia. According to radiocarbon dating it was settled around 11,000 years ago.
The earliest human inhabitation during the Mesolithic period is connected to the Kunda culture, which is named after the town of Kunda in northern Estonia. At that time the country was covered with forests, and people lived in semi-nomadic communities near bodies of water. Subsistence activities consisted of hunting, gathering and fishing. Ceramics of the Neolithic period, known as the Narva culture, appear around 4900 BC. Starting from around 3200 BC the Corded Ware culture appeared; this included new activities like primitive agriculture and animal husbandry.
The Bronze Age started around 1800 BC, and saw the establishment of the first hill fort settlements. A transition from a hunting-fishing-gathering subsistence to single farm-based settlement started around 1000 BC, and was complete by the beginning of the Iron Age around 500 BC. A large number of bronze objects indicates active communication with Scandinavian and Germanic tribes.
thumb|right|alt=Iron Age metal plates and buttons from a hoard|Iron Age artifacts of a hoard from Kumna
A more troubled and war-ridden middle Iron Age followed, with external threats appearing from different directions. Several Scandinavian sagas referred to major confrontations with Estonians, notably when Estonians defeated and killed the Swedish king Ingvar. Similar threats appeared in the east, where Russian principalities were expanding westward. In 1030 Yaroslav the Wise defeated the Estonians and established a fort in what is modern-day Tartu; this foothold lasted until the Sosols (an Estonian tribe) destroyed it in 1061 and went on to raid Pskov. Around the 11th century, the Scandinavian Viking era around the Baltic Sea was succeeded by the Baltic Viking era, with seaborne raids by Curonians and by Estonians from the island of Saaremaa, known as Oeselians. In 1187 Estonians (Oeselians), Curonians and/or Karelians sacked Sigtuna, which was a major city of Sweden at the time.Enn Tarvel (2007). Sigtuna hukkumine. Haridus, 2007 (7–8), pp. 38–41
In the early centuries AD, political and administrative subdivisions began to emerge in Estonia. Two larger subdivisions appeared: the parish (Estonian: kihelkond) and the county (Estonian: maakond), which consisted of multiple parishes. A parish was led by elders and centered around a hill fort; in some rare cases a parish had multiple forts. By the 13th century Estonia consisted of eight major counties: Harjumaa, Järvamaa, Läänemaa, Revala, Saaremaa, Sakala, Ugandi, and Virumaa; and six minor, single-parish counties: Alempois, Jogentagana, Mõhu, Nurmekund, Soopoolitse, and Vaiga. Counties were independent entities and engaged only in a loose cooperation against foreign threats.
Little is known of early Estonian pagan religious practices. The Chronicle of Henry of Livonia mentions Tharapita as the superior god of the Oeselians. Spiritual practices were guided by shamans, with sacred groves, especially oak groves, serving as places of worship.
Middle Ages
thumb|left|alt=Kuressaare Castle, square stone keep with one square corner tower and red tile roof|Kuressaare Castle in Saaremaa dates back to the 1380s
In 1199 Pope Innocent III declared a crusade to "defend the Christians of Livonia". Fighting reached Estonia in 1206, when Danish king Valdemar II unsuccessfully invaded Saaremaa. The German Livonian Brothers of the Sword, who had previously subjugated Livonians, Latgalians, and Selonians, started campaigning against Estonians in 1208, and over the following years both sides made numerous raids and counter-raids. A major leader of the Estonian resistance was Lembitu, an elder of Sakala County, but in 1217 the Estonians suffered a significant defeat at the Battle of St. Matthew's Day, in which Lembitu was killed. In 1219 Valdemar II landed at Lyndanisse, defeated the Estonians in battle, and started conquering Northern Estonia. The next year Sweden invaded Western Estonia, but was repelled by the Oeselians. In 1223 a major revolt ejected the Germans and Danes from the whole of Estonia except Reval, but the crusaders soon resumed the offensive, and in 1227 Saaremaa was the last county to surrender.
After the crusade, the territory of present-day Estonia and Latvia was named Terra Mariana, but later it became known simply as Livonia. Northern Estonia became the Danish Duchy of Estonia, while the rest was divided between the Sword Brothers and the prince-bishoprics of Dorpat and Ösel–Wiek. In 1236, after suffering a major defeat, the Sword Brothers merged into the Teutonic Order, becoming the Livonian Order. In the following decades there were several uprisings against the foreign rulers on Saaremaa. In 1343 a major rebellion started, known as the St. George's Night Uprising, encompassing the whole of Northern Estonia and Saaremaa. The Teutonic Order finished suppressing the rebellion in 1345, and the next year the Danish king sold his possessions in Estonia to the Order. The unsuccessful rebellion led to a consolidation of power for the Baltic German minority. For the subsequent centuries they remained the ruling elite in both cities and the countryside.
thumb|right|alt=color historical map of Livonia|Terra Mariana was the official name for Medieval Livonia
During the crusade Reval (Tallinn) was founded, as the capital of Danish Estonia, on the site of Lyndanisse. In 1248 Reval received full town rights and adopted the Lübeck law. The Hanseatic League controlled trade on the Baltic Sea, and the four largest towns in Estonia became members: Reval, Dorpat (Tartu), Pernau (Pärnu), and Fellin (Viljandi). Reval acted as a trade intermediary between Novgorod and the Western Hanseatic cities, while Dorpat filled the same role with Pskov. Many guilds were formed during that period, but only a very few allowed the participation of native Estonians. Protected by their stone walls and alliance with the Hansa, prosperous cities like Reval and Dorpat repeatedly defied other rulers of Livonia. After the decline of the Teutonic Order following its defeat in the Battle of Grunwald in 1410, and the defeat of the Livonian Order in the Battle of Swienta on 1 September 1435, the Livonian Confederation Agreement was signed on 4 December 1435.
The Reformation in Europe began in 1517, and soon spread to Livonia despite opposition from the Livonian Order. Towns were the first to embrace Protestantism in the 1520s, and by the 1530s the majority of the gentry had adopted Lutheranism for themselves and their serf peasants. Church services were now conducted in the vernacular, which initially meant German, but in the 1530s the first religious services in Estonian also took place.
During the 16th century the expansionist monarchies of Muscovy, Sweden, and Poland–Lithuania consolidated power, posing a growing threat to a decentralised Livonia weakened by disputes between cities, nobility, bishops, and the Order.
Swedish Estonia
thumb|left|250px|alt=historical map of the Swedish Empire|Estonian territory as part of the Swedish Empire (1561–1721)
thumb|right|290px|alt=walled city of Reval and castle on hill|Estonian capital Tallinn (then Reval) in the first half of the 17th century.
In 1558 Tsar Ivan the Terrible of Russia invaded Livonia, starting the Livonian War. The Livonian Order was decisively defeated in 1560, prompting Livonian factions to seek foreign protection. The majority of Livonia accepted Polish–Lithuanian rule, while Reval and the nobles of Northern Estonia swore loyalty to the Swedish king, and the Bishop of Ösel-Wiek sold his lands to the Danish king. Russian forces gradually conquered the majority of Livonia, but in the late 1570s Polish–Lithuanian and Swedish armies started their own offensives, and the bloody war finally ended in 1583 with a Russian defeat. As a result of the war, Northern Estonia became the Swedish Duchy of Estonia, Southern Estonia became the Polish–Lithuanian Duchy of Livonia, and Saaremaa remained under Danish control.
In 1600 the Polish–Swedish War broke out, causing further devastation. The protracted war ended in 1629 with Sweden gaining Livonia, including Southern Estonia and Northern Latvia. Danish Saaremaa was transferred to Sweden in 1645. The wars had reduced the Estonian population from about 250,000–270,000 people in the mid-16th century to 115,000–120,000 in the 1630s.
Under Swedish rule serfdom was retained, but legal reforms took place which strengthened peasants' land-usage and inheritance rights; as a result, this period earned a reputation as the "Good Old Swedish Time" in popular historical memory. The Swedish king Gustaf II Adolf established gymnasiums in Reval and Dorpat; the latter was upgraded to Tartu University in 1632. Printing presses were also established in both towns. In the 1680s the beginnings of Estonian elementary education appeared, largely due to the efforts of Bengt Gottfried Forselius, who also introduced orthographical reforms to written Estonian. The population of Estonia grew rapidly for a 60–70 year period, until the Great Famine of 1695–97, in which some 70,000–75,000 people perished – about 20% of the population.
National awakening and Russian Empire
thumb|left|upright=0.7|Carl Robert Jakobson played a key role in the Estonian national awakening.
In 1700 the Great Northern War started, and by 1710 the whole of Estonia was conquered by the Russian Empire. The war again devastated the population of Estonia, with the 1712 population estimated at 150,000–170,000. The Russian administration restored all the political and landholding rights of the Baltic Germans. The rights of Estonian peasants reached their lowest point, as serfdom completely dominated agricultural relations during the 18th century. Serfdom was formally abolished in 1816–1819, but this initially had very little practical effect; major improvements in the rights of the peasantry began with reforms in the mid-19th century.
As a result of the abolition of serfdom and the availability of education to the native Estonian-speaking population, an active Estonian nationalist movement developed in the 19th century. It began on a cultural level, with the establishment of Estonian language literature, theatre and professional music, and led on to the formation of the Estonian national identity and the Age of Awakening (). Although Estonian national consciousness spread in the course of the 19th century,Gellner, Ernest (1996). "Do nations have navels?" Nations and Nationalism 2.2, 365–370. some degree of ethnic awareness in the literate middle class preceded this development.Raun, Toivo U. (2003). "Nineteenth- and early twentieth-century Estonian nationalism revisited". Nations and Nationalism 9.1, 129–147. By the 18th century the self-denomination eestlane, along with the older maarahvas, spread among Estonians in the then provinces of Estonia and Livonia of the Russian Empire.Ariste, Paul (1956). "Maakeel ja eesti keel". Eesti NSV Teaduste Akadeemia Toimetised 5: 117–124. The Bible was translated in 1739, and the number of books and pamphlets published in Estonian increased from 18 in the 1750s to 54 in the 1790s. By the end of the century more than half of the adult peasants were able to read. The first university-educated intellectuals identifying themselves as Estonians, including Friedrich Robert Faehlmann (1798–1850), Kristjan Jaak Peterson (1801–1822) and Friedrich Reinhold Kreutzwald (1803–1882), came to prominence in the 1820s. The ruling elite had remained predominantly German in language and culture since the conquest of the early 13th century. Garlieb Merkel (1769–1850), a Baltic German Estophile, was the first author to treat the Estonians as a nationality equal to others; he became a source of inspiration for the Estonian national movement, modelled on the Baltic German cultural world before the middle of the 19th century. However, in the middle of the century the Estonians, with such leaders as Carl Robert Jakobson (1841–1882), Jakob Hurt (1839–1907) and Johann Voldemar Jannsen (1819–1890), became more ambitious in their political demands and started leaning towards the Finns as a successful model of national movement.
Significant accomplishments were the publication of the national epic, Kalevipoeg in 1862, and the organisation of the first national song festival in 1869. In response to a period of Russification initiated by the Russian Empire in the 1890s, Estonian nationalism took on more political tones, with intellectuals first calling for greater autonomy, and later for complete independence from the Russian Empire.
Independence
thumb|upright|alt=newspaper clipping of Estonian Declaration of Independence|Estonian Declaration of Independence
thumb|left|alt=photograph of crowd around flag raising|Declaration of independence in Pärnu on 23 February 1918. One of the first images of the Republic.
Following the Bolshevik takeover of power in Russia after the October Revolution of 1917 and German victories against the Russian army, between the Russian Red Army's retreat and the arrival of advancing German troops, the Committee of Elders of the Maapäev issued the Estonian Declaration of Independence http://www.president.ee in Pärnu on 23 February and in Tallinn on 24 February 1918.
The country was occupied by German troops, and the Treaty of Brest-Litovsk was signed, whereby the Russian government waived all claims to Estonia. The Germans stayed until November 1918 when, with the end of the war in the west, the soldiers returned to Germany, leaving a vacuum which allowed the Bolshevik troops to move into Estonia. This caused the Estonian War of Independence, which lasted 14 months.
After Estonia won the War of Independence against Soviet Russia and later against the German Freikorps volunteers included in the Baltische Landeswehr, who had earlier fought alongside Estonia, the Tartu Peace Treaty was signed on 2 February 1920. The Republic of Estonia was recognised (de jure) by Finland on 7 July 1920, by Poland on 31 December 1920, by Argentina on 12 January 1921, by the Western Allies on 26 January 1921 and by India on 22 September 1921.
Estonia maintained its independence for twenty-two years. Initially a parliamentary democracy, Estonia saw its parliament (Riigikogu) disbanded in 1934, following political unrest caused by the global economic crisis. Subsequently, the country was ruled by decree by Konstantin Päts, who became president in 1938, the year parliamentary elections resumed.
thumb|right|alt=Estonian men fighting for Finland|Estonian volunteers in Finland during the Continuation War
Second World War
The fate of Estonia in the Second World War was decided by the German–Soviet Non-aggression Pact and its Secret Additional Protocol of August 1939. World War II casualties of Estonia are estimated at around 25% of the population. War and occupation deaths have been estimated at 90,000. These include the Soviet deportations in 1941, the German deportations and Holocaust victims.Encyclopædia Britannica: Baltic states, World War II losses
Soviet occupation
thumb|left|alt=schematic map of Soviet blockade and invasion of Estonia|Schematics of the Soviet military blockade and invasion of Estonia and Latvia in 1940
In August 1939 Joseph Stalin gained Adolf Hitler's agreement to divide Eastern Europe into "spheres of special interest" according to the Molotov–Ribbentrop Pact and its Secret Additional Protocol.
On 24 September 1939, warships of the Red Navy appeared off Estonian ports and Soviet bombers began a patrol over Tallinn and the nearby countryside.Moscow's Week at Time Magazine on Monday, 9 October 1939 The Estonian government was forced to allow the USSR to establish military bases and station 25,000 troops on Estonian soil for "mutual defence".David J. Smith (2002) The Baltic States: Estonia, Latvia and Lithuania, Routledge, p. 24, ISBN 0415285801. On 12 June 1940, the order for a total military blockade of Estonia was given to the Soviet Baltic Fleet.Pavel Petrov, Viktor Stepakov, Dmitry Frolov (2002). The State Archive of the Russian Navy (in Russian) ISBN 951-707-100-0.
On 14 June, while the world's attention was focused on the fall of Paris to Nazi Germany a day earlier, the Soviet military blockade of Estonia went into effect. Two Soviet bombers downed the Finnish passenger aeroplane "Kaleva" flying from Tallinn to Helsinki carrying three diplomatic pouches from the US delegations in Tallinn, Riga and Helsinki.Eric A. Johnson and Anna Hermann The Last Flight from Tallinn. Foreign Service Journal. American Foreign Service Association. May 2007 On 16 June, the Soviet Union invaded Estonia."Five Years of Dates", The Time magazine, 24 June 1940. The Red Army exited from their military bases in Estonia on 17 June.Estonia: Identity and Independence by Jean-Jacques Subrenat, David Cousins, Alexander Harding, Richard C. Waterhouse ISBN 90-420-0890-3 The following day, some 90,000 additional troops entered the country. In the face of overwhelming Soviet force, the Estonian government capitulated on 17 June 1940 to avoid bloodshed.The Baltic States: Estonia, Latvia and Lithuania by David J. Smith p.19 ISBN 0-415-28580-1 The military occupation of Estonia was complete by 21 June.The Baltic States: Estonia, Latvia and Lithuania by David J. Smith, Page 27, ISBN 0-415-28580-1
Most of the Estonian Defence Forces surrendered according to the orders of the Estonian government, believing that resistance was useless, and they were disarmed by the Red Army.14 June the Estonian government surrendered without offering any military resistance; The occupation authorities began ... by disarming the Estonian Army and removing the higher military command from power the Estonian armed forces were disarmed by the Soviet occupation in June 1940 Only the Estonian Independent Signal Battalion showed resistance to the Red Army and Communist militia "People's Self-Defence" units in front of the XXI Grammar School in Tallinn on 21 June. As the Red Army brought in additional reinforcements supported by six armoured fighting vehicles, the battle lasted several hours until sundown. Finally the military resistance was ended with negotiations, and the Independent Signal Battalion surrendered and was disarmed.51 years from the Raua Street Battle. Estonian Defence Forces (in Estonian) There were two dead Estonian servicemen, Aleksei Männikus and Johannes Mandre, and several wounded on the Estonian side and about ten killed and more wounded on the Soviet side.
thumb|right|250px|Estonia Theatre after bombing by Red Air Force in March 1944
On 6 August 1940, Estonia was annexed by the Soviet Union as the Estonian SSR. The provisions of the Estonian constitution requiring a popular referendum to decide on joining a supra-national body were ignored. Instead the vote to join the Soviet Union was taken by those elected in the elections held the previous month. Further, those who had failed to do their "political duty" of voting Estonia into the USSR, specifically those who had failed to have their passports stamped for voting, were condemned to death by Soviet tribunals.Justice in The Baltic. Time, 19 August 1940 The repressions followed with the mass deportations carried out by the Soviets in Estonia on 14 June 1941. Many of the country's political and intellectual leaders were killed or deported to remote areas of the USSR by the Soviet authorities in 1940–1941. Repressive actions were also taken against thousands of ordinary people.
When the German Operation Barbarossa started against the Soviet Union, about 34,000 young Estonian men were forcibly drafted into the Red Army, fewer than 30% of whom survived the war. Political prisoners who could not be evacuated were executed by the NKVD.The Baltic Revolution: Estonia, Latvia, Lithuania and the Path to Independence by Anatol Lieven p424 ISBN 0-300-06078-5
Many countries, including the UK and US, did not recognise the annexation of Estonia by the USSR de jure. Such countries recognised Estonian diplomats and consuls who still functioned in the name of their former governments. These diplomats persisted in this anomalous situation until the eventual restoration of Estonia's independence.Diplomats Without a Country: Baltic Diplomacy, International Law, and the Cold War by James T. McHugh, James S. Pacy ISBN 0-313-31878-6
The official Soviet and current Russian version claims that Estonians voluntarily gave up their statehood. Anti-communist partisans of 1944–1976 are labelled "bandits" or "Nazis", though the Russian position is not recognised internationally.
German occupation
thumb|upright|alt=head shot of facing left Jüri Uluots|Jüri Uluots
After Germany invaded the Soviet Union on 22 June 1941, the Wehrmacht crossed the Estonian southern border on 7 July. The Red Army retreated behind the Pärnu River – Emajõgi line on 12 July. At the end of July the Germans resumed their advance in Estonia, working in tandem with the Estonian Forest Brothers. Both German troops and Estonian partisans took Narva on 17 August and the Estonian capital Tallinn on 28 August. After the Soviets were driven out from Estonia, German troops disarmed all the partisan groups.Dave Lande Resistance! Occupied Europe and Its Defiance of Hitler, p. 188, ISBN 0-7603-0745-8
Although initially the Germans were welcomed by most Estonians as liberators from the USSR and its oppressions, and hopes were raised for the restoration of the country's independence, it was soon realised that the Nazis were but another occupying power. The Germans used Estonia's resources for their war effort; for the duration of the occupation Estonia was incorporated into the German province of Ostland. The Germans and their collaborators also carried out The Holocaust in Estonia in which they established a network of concentration camps and murdered thousands of Estonian Jews and Estonian Gypsies, other Estonians, non-Estonian Jews, and Soviet prisoners of war.
Some Estonians, unwilling to side directly with the Nazis, joined the Finnish Army (which was allied with the Nazis) to fight against the Soviet Union. The Finnish Infantry Regiment 200 (Estonian: soomepoisid) was formed out of Estonian volunteers in Finland. Although many Estonians were recruited into the German armed forces (including the Estonian Waffen-SS), the majority of them did so only in 1944 when the threat of a new invasion of Estonia by the Red Army had become imminent.Estonia 1940–1945, Estonian International Commission for the Investigation of Crimes Against Humanity, p. 613 ISBN 9949-13-040-9 In January 1944 Estonia was again facing the prospect of invasion from the Red Army, and the last legitimate prime minister of the Republic of Estonia (according to the Constitution of the Republic of Estonia) delivered a radio address asking all able-bodied men born from 1904 to 1923 to report for military service. The call resulted in around 38,000 new enlistmentsDave Lande, Resistance! Occupied Europe and Its Defiance of Hitler, p. 200, ISBN 0-7603-0745-8 and several thousand Estonians who had joined the Finnish Army came back to join the newly formed Territorial Defence Force, assigned to defend Estonia against the Soviet advance. It was hoped that by engaging in such a war Estonia would be able to attract Western support for Estonian independence.Graham Smith, The Baltic States: The National Self-Determination of Estonia, Latvia and Lithuania, p. 91, ISBN 0-312-16192-1
Soviet Estonia
thumb|left|alt=sailing ship filled with refugees|Estonian Swedes fleeing the Soviet occupation to Sweden (1944)
thumb|upright=0.7|Otto Tief, the Prime Minister of the last government of Estonia before the occupation of the country.
The Soviet forces reconquered Estonia in autumn 1944 after battles in the northeast of the country on the Narva river, on the Tannenberg Line (Sinimäed), in Southeast Estonia, on the Emajõgi river, and in the West Estonian Archipelago.
In the face of re-occupation by the Red Army, tens of thousands of Estonians (including a majority of the education, culture, science, political and social specialists) chose either to retreat with the Germans or to flee to Finland or Sweden, from where they sought refuge in other western countries, often on refugee ships such as the SS Walnut. On 12 January 1949, the Soviet Council of Ministers issued a decree "on the expulsion and deportation" from Baltic states of "all kulaks and their families, the families of bandits and nationalists", and others.Stephane Courtois; Werth, Nicolas; Panne, Jean-Louis; Paczkowski, Andrzej; Bartosek, Karel; Margolin, Jean-Louis & Kramer, Mark (1999). The Black Book of Communism: Crimes, Terror, Repression. Harvard University Press. ISBN 0-674-07608-7.
More than 10% of the adult Baltic population were deported or sent to Soviet labour camps. In response to the continuing insurgency against Soviet rule, more than 20,000 Estonians were forcibly deported either to labour camps or to Siberia.Valge raamat, p. 18 Almost all of the remaining rural households were collectivised.
After the Second World War, as part of the goal to more fully integrate Estonia into the Soviet Union, mass deportations were conducted in Estonia and the policy of encouraging Russian immigration to the country continued.Background Note: Latvia at US Department of State
Half the deported perished, and the other half were not allowed to return until the early 1960s (years after Stalin's death). The activities of Soviet forces in 1940–41 and after reoccupation sparked a guerrilla war against Soviet authorities in Estonia by the Forest Brothers, who consisted mostly of Estonian veterans of the German and Finnish armies and some civilians. This conflict continued into the early 1950s.Valge raamat, pp. 25–30 Material damage caused by the world war and the following Soviet era significantly slowed Estonia's economic growth, resulting in a wide wealth gap in comparison with neighbouring Finland and Sweden.Valge raamat, pp. 125, 148
Militarisation was another aspect of the Soviet state. Large parts of the country, especially the coastal areas, were closed to all but the Soviet military. Most of the coast and all sea islands (including Saaremaa and Hiiumaa) were declared "border zones". People not actually residing there were restricted from travelling to them without a permit. A notable closed military installation was the city of Paldiski, which was entirely closed to all public access. The city had a support base for the Soviet Baltic Fleet's submarines and several large military bases, including a nuclear submarine training centre complete with a full-scale model of a nuclear submarine with working nuclear reactors. The Paldiski reactors building passed into Estonian control in 1994 after the last Russian troops left the country. Immigration was another effect of the Soviet occupation. Hundreds of thousands of migrants were relocated to Estonia from other parts of the Soviet Union to assist industrialisation and militarisation, contributing to an increase of about half a million people within 45 years.
Return to independence
thumb|right|alt=crowd in front of cathedral celebration joining the EU in 2007|Estonia joined the European Union in 2004 and signed the Lisbon Treaty in 2007.
thumb|left|upright=0.9||Kersti Kaljulaid is the current president of Estonia.
The United States, United Kingdom, France, Italy and the majority of other Western countries considered the annexation of Estonia by the USSR illegal. They retained diplomatic relations with the representatives of the independent Republic of Estonia, never de jure recognised the existence of the Estonian SSR, and never recognised Estonia as a legal constituent part of the Soviet Union. "whereas the Soviet annexation of the three Baltic States still has not been formally recognised by most European States and the USA, Canada, the United Kingdom, Australia and the Vatican still adhere to the concept of the Baltic States". Estonia's return to independence became possible as the Soviet Union faced internal regime challenges, loosening its hold on its outer empire. As the 1980s progressed, a movement for Estonian autonomy started. In the initial period of 1987–1989, this was partially for more economic independence, but as the Soviet Union weakened and it became increasingly obvious that nothing short of full independence would do, Estonia began a course towards self-determination.
In 1989, during the "Singing Revolution", in a landmark demonstration for more independence, more than two million people formed a human chain stretching through Lithuania, Latvia and Estonia, called the Baltic Way. All three nations had similar experiences of occupation and similar aspirations for regaining independence. The Estonian Sovereignty Declaration was issued on 16 November 1988. On 20 August 1991, Estonia declared formal independence during the Soviet military coup attempt in Moscow, reconstituting the pre-1940 state. The Soviet Union recognised the independence of Estonia on 6 September 1991. The first country to diplomatically recognise Estonia's reclaimed independence was Iceland. The last units of the Russian army left on 31 August 1994.
Estonia joined NATO on 29 March 2004.
After signing a treaty on 16 April 2003, Estonia was among the group of ten countries admitted to the European Union on 1 May 2004.
Estonia celebrated its 90th anniversary over the period 28 November 2007 to 28 November 2008.
Territorial history timeline
Geography
thumb|right|180px|alt=limestone cliffs at the shore with clouds|The northern coast of Estonia consists mainly of limestone cliffs.
Estonia lies on the eastern shores of the Baltic Sea immediately across the Gulf of Finland from Finland on the level northwestern part of the rising East European platform between 57.3° and 59.5° N and 21.5° and 28.1° E. Average elevation reaches only and the country's highest point is the Suur Munamägi in the southeast at . There is of coastline marked by numerous bays, straits, and inlets. The number of islands and islets is estimated at some 2,355 (including those in lakes). Two of them are large enough to constitute separate counties: Saaremaa and Hiiumaa. A small, recent cluster of meteorite craters, the largest of which is called Kaali is found on Saaremaa, Estonia.
Estonia is situated in the northern part of the temperate climate zone and in the transition zone between maritime and continental climate. Estonia has four seasons of near-equal length. Average temperatures range from on the islands to inland in July, the warmest month, and from on the islands to inland in February, the coldest month. The average annual temperature in Estonia is . The average precipitation in 1961–1990 ranged from per year.
Snow cover, which is deepest in the south-eastern part of Estonia, usually lasts from mid-December to late March. Estonia has over 1,400 lakes. Most are very small, with the largest, Lake Peipus, being . There are many rivers in the country. The longest of them are Võhandu (), Pärnu (), and Põltsamaa (). Estonia has numerous fens and bogs. Forest land covers 50% of Estonia.Facts Estonian Timber The most common tree species are pine, spruce and birch.
Phytogeographically, Estonia is shared between the Central European and Eastern European provinces of the Circumboreal Region within the Boreal Kingdom. According to the WWF, the territory of Estonia belongs to the ecoregion of Sarmatic mixed forests.
Administrative divisions
The Republic of Estonia is divided into fifteen counties (Maakonnad), which are the administrative subdivisions of the country. The first documented reference to Estonian political and administrative subdivisions comes from the Chronicle of Henry of Livonia, written in the thirteenth century during the Northern Crusades.History of Estonia
A maakond (county) is the biggest administrative subdivision.
The county government (Maavalitsus) of each county is led by a county governor (Maavanem), who represents the national government at the regional level. Governors are appointed by the Government of Estonia for a term of five years. Several changes were made to the borders of counties after Estonia became independent, most notably the formation of Valga County (from parts of Võru, Tartu and Viljandi counties) and Petseri County (area acquired from Russia with the 1920 Tartu Peace Treaty).
During the Soviet rule, Petseri County was annexed and ceded to the Russian SFSR in 1945 where it became Pechorsky District of Pskov Oblast. Counties were again re-established on 1 January 1990 in the borders of the Soviet-era districts. Because of the numerous differences between the current and historical (pre-1940, and sometimes pre-1918) layouts, the historical borders are still used in ethnology, representing cultural and linguistic differences better.
Each county is further divided into municipalities (omavalitsus), which are also the smallest administrative subdivisions of Estonia. There are two types of municipalities: an urban municipality – linn (town), and a rural municipality – vald (parish). There is no other status distinction between them. Each municipality is a unit of self-government with its representative and executive bodies. The municipalities in Estonia cover the entire territory of the country.
A municipality may contain one or more populated places. Tallinn is divided into eight districts (linnaosa) with limited self-government (Haabersti, Kesklinn (centre), Kristiine, Lasnamäe, Mustamäe, Nõmme, Pirita and Põhja-Tallinn).
Municipalities range in size from Tallinn with 400,000 inhabitants to Ruhnu with as few as sixty. As over two-thirds of the municipalities have a population of under 3,000, many of them have found it advantageous to co-operate in providing services and carrying out administrative functions. There have also been calls for an administrative reform to merge smaller municipalities together.
As of March 2013, there are a total of 226 municipalities in Estonia, 33 of them being urban and 193 rural.
thumb|center|900px|alt=Ahja River, forest landscape, on the left side of the image on the riverbank there is outcrop of Devonian sandstone.|A view of the natural environment in Estonia
Politics
Estonia is a parliamentary representative democratic republic in which the Prime Minister of Estonia is the head of government and which includes a multi-party system. The political culture is stable in Estonia, where power is held by two or three parties that have been in politics for a long time. This situation is similar to other countries in Northern Europe. The former Prime Minister of Estonia, Andrus Ansip, is also Europe's longest-serving Prime Minister (from 2005 until 2014). The current Estonian Prime Minister is Jüri Ratas, who is the former Second Vice-President of the Parliament and the head of the Estonian Centre Party.
Parliament
thumb|left|alt=Toompea Castle pink stucco three story building with red hip roof|The seat of the Parliament of Estonia in Toompea Castle
The Parliament of Estonia () or the legislative branch is elected by people for a four-year term by proportional representation. The Estonian political system operates under a framework laid out in the 1992 constitutional document. The Estonian parliament has 101 members and influences the governing of the state primarily by determining the income and the expenses of the state (establishing taxes and adopting the budget). At the same time the parliament has the right to present statements, declarations and appeals to the people of Estonia, ratify and denounce international treaties with other states and international organisations and decide on the Government loans.
The Riigikogu elects and appoints several high officials of the state, including the President of the Republic. In addition to that, the Riigikogu appoints, on the proposal of the President of Estonia, the Chairman of the National Court, the chairman of the board of the Bank of Estonia, the Auditor General, the Legal Chancellor and the Commander-in-Chief of the Defence Forces. A member of the Riigikogu has the right to demand explanations from the Government of the Republic and its members. This enables the members of the parliament to observe the activities of the executive power and the above-mentioned high officials of the state.
Government
The Government of Estonia () or the executive branch is formed by the Prime Minister of Estonia, nominated by the president and approved by the parliament. The government exercises executive power pursuant to the Constitution of Estonia and the laws of the Republic of Estonia and consists of twelve ministers, including the Prime Minister. The Prime Minister also has the right to appoint other ministers and assign them a subject to deal with. These are ministers without portfolio — they don't have a ministry to control.
thumb|left|alt=Stenbock House gray stucco three story building with pediment and portico and red hip roof|Stenbock House, the seat of the Government of Estonia on Toompea Hill
The Prime Minister has the right to appoint a maximum of three such ministers, as the limit of ministers in one government is fifteen. It is also known as the cabinet. The cabinet carries out the country's domestic and foreign policy, shaped by parliament; it directs and co-ordinates the work of government institutions and bears full responsibility for everything occurring within the authority of executive power. The government, headed by the Prime Minister, thus represents the political leadership of the country and makes decisions in the name of the whole executive power.
Estonia has pursued the development of the e-state and e-government. Internet voting is used in elections in Estonia: it was first used in the 2005 local elections and was first made available for a parliamentary election in 2007, when 30,275 individuals voted over the internet. Voters have a chance to invalidate their electronic vote in traditional elections, if they wish to. In 2009, in its eighth Worldwide Press Freedom Index, Reporters Without Borders ranked Estonia sixth out of 175 countries.Reporters Without Borders. Worldwide press freedom index 2009 In the first ever State of World Liberty Index report, Estonia was ranked first out of 159 countries.
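The precedence rule described above can be illustrated with a small sketch. This is not the actual Estonian i-voting implementation; it only models the idea that a traditional paper ballot invalidates the same voter's electronic ballot, and it additionally assumes (not stated in the text) that only a voter's most recent electronic ballot is considered. The names EVote and tally are illustrative, and paper ballots themselves would be counted separately at the polling station.

```python
# Minimal sketch of the invalidation rule described above; assumptions: unique
# voter IDs, and only the most recent electronic ballot per voter is considered.
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List, Set

@dataclass
class EVote:
    voter_id: str
    cast_at: datetime
    choice: str

def tally(e_votes: List[EVote], paper_voters: Set[str]) -> Dict[str, int]:
    """Count electronic votes, discarding any e-vote whose voter also cast a
    traditional paper ballot (the paper ballot invalidates the e-vote)."""
    latest: Dict[str, EVote] = {}
    for vote in e_votes:
        # keep only the most recent electronic ballot per voter (assumption)
        if vote.voter_id not in latest or vote.cast_at > latest[vote.voter_id].cast_at:
            latest[vote.voter_id] = vote

    counts: Dict[str, int] = {}
    for voter_id, vote in latest.items():
        if voter_id in paper_voters:
            continue  # electronic vote invalidated by the paper ballot
        counts[vote.choice] = counts.get(vote.choice, 0) + 1
    return counts
```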
Law
thumb|alt=Huey helicopter landing on a pad next to a wetland|Estonian Border Guard
According to the Constitution of Estonia () the supreme power of the state is vested in the people. The people exercise their supreme power of the state on the elections of the Riigikogu through citizens who have the right to vote. The supreme judicial power is vested in the Supreme Court or Riigikohus, with nineteen justices. The Chief Justice is appointed by the parliament for nine years on nomination by the president. The official Head of State is the President of Estonia, who gives assent to the laws passed by Riigikogu, also having the right of sending them back and proposing new laws.
The President, however, does not use these rights very often, having a largely ceremonial role. He or she is elected by the Riigikogu, with two-thirds of the votes required. If the candidate does not gain the required number of votes, the right to elect the President passes to an electoral body consisting of the 101 members of the Riigikogu and representatives from local councils. As in other spheres, Estonian law-making has been successfully integrated with the Information Age.
Foreign relations
thumb|left|President Barack Obama giving a speech at the Nordea Concert Hall in Tallinn.
Estonia was a member of the League of Nations from 22 September 1921, has been a member of the United Nations since 17 September 1991,Estonian date of admission into the United Nations and of NATO since 29 March 2004,Estonian date of admission into the NATO as well as the European Union since 1 May 2004.Estonian date of admission into the European Union Estonia is also a member of the Organization for Security and Cooperation in Europe (OSCE), Organisation for Economic Co-operation and Development (OECD), Council of the Baltic Sea States (CBSS) and the Nordic Investment Bank (NIB). As an OSCE participating State, Estonia's international commitments are subject to monitoring under the mandate of the U.S. Helsinki Commission. Estonia has also signed the Kyoto Protocol.
Since regaining independence, Estonia has pursued a foreign policy of close co-operation with its Western European partners. The two most important policy objectives in this regard have been accession into NATO and the European Union, achieved in March and May 2004 respectively. Estonia's international realignment toward the West has been accompanied by a general deterioration in relations with Russia, most recently demonstrated by the protest triggered by the controversial relocation of the Bronze Soldier World War II memorial in Tallinn.
Since the early 1990s, Estonia has been involved in active trilateral Baltic states co-operation with Latvia and Lithuania, and Nordic-Baltic co-operation with the Nordic countries. The Baltic Council is the joint forum of the interparliamentary Baltic Assembly (BA) and the intergovernmental Baltic Council of Ministers (BCM). Nordic-Baltic Eight (NB-8) is the joint co-operation of the governments of Denmark, Estonia, Finland, Iceland, Latvia, Lithuania, Norway and Sweden. Nordic-Baltic Six (NB-6), comprising Nordic-Baltic countries that are European Union member states, is a framework for meetings on EU related issues. Parliamentary co-operation between the Baltic Assembly and Nordic Council began in 1989. Annual summits take place, and in addition meetings are organised on all possible levels: speakers, presidiums, commissions, and individual members. The Nordic Council of Ministers has an office in Tallinn with a subsidiary in Tartu and information points in Narva, Valga and Pärnu. Joint Nordic-Baltic projects include the education programme Nordplus and mobility programmes for business and industry and for public administration.
thumb|right|alt=Foreign ministers standing in arc around microphones 2011|Foreign ministers of the Nordic and Baltic countries in Helsinki, 2011
An important element in Estonia's post-independence reorientation has been closer ties with the Nordic countries, especially Finland and Sweden. Estonians consider themselves a Nordic people rather than Balts,Estonian foreign ministry publication, 2004Estonian foreign ministry publication, 2002 based on their historical ties with Sweden, Denmark and particularly Finland. In December 1999, then Estonian foreign minister (and since 2006, President of Estonia) Toomas Hendrik Ilves delivered a speech entitled "Estonia as a Nordic Country" to the Swedish Institute for International Affairs. In 2003, the foreign ministry also hosted an exhibit called "Estonia: Nordic with a Twist".
In 2005, Estonia joined the European Union's Nordic Battle Group. It has also shown continued interest in joining the Nordic Council.
Whereas in 1992 Russia accounted for 92% of Estonia's international trade, today there is extensive economic interdependence between Estonia and its Nordic neighbours: three quarters of foreign investment in Estonia originates in the Nordic countries (principally Finland and Sweden), to which Estonia sends 42% of its exports (as compared to 6.5% going to Russia, 8.8% to Latvia, and 4.7% to Lithuania). On the other hand, the Estonian political system, its flat rate of income tax, and its non-welfare-state model distinguish it from the Nordic countries and their Nordic model, and from many other European countries.
The European Union Agency for large-scale IT systems, which started operations at the end of 2012, is based in Tallinn. Estonia will hold the Presidency of the Council of the European Union in the first half of 2018.
Military
thumb|right|Estonian soldiers during a NATO exercise in 2015
The military of Estonia is based upon the Estonian Defence Forces (), which is the name of the unified armed forces of the republic with Maavägi (Army), Merevägi (Navy), Õhuvägi (Air Force) and a paramilitary national guard organisation Kaitseliit (Defence League). The Estonian National Defence Policy aim is to guarantee the preservation of the independence and sovereignty of the state, the integrity of its land, territorial waters, airspace and its constitutional order. Current strategic goals are to defend the country's interests, develop the armed forces for interoperability with other NATO and EU member forces, and participation in NATO missions.
The current national military service () is compulsory for men between 18 and 28, and conscripts serve eight-month to eleven-month tours of duty depending on the army branch they serve in. Estonia has retained conscription unlike Latvia and Lithuania and has no plan to transition to a professional army. In 2008, annual military spending reached 1.85% of GDP, or 5 billion kroons, and was expected to continue to increase until 2010, when a 2.0% level was anticipated.
Estonia co-operates with Latvia and Lithuania in several trilateral Baltic defence co-operation initiatives, including Baltic Battalion (BALTBAT), Baltic Naval Squadron (BALTRON), Baltic Air Surveillance Network (BALTNET) and joint military educational institutions such as the Baltic Defence College in Tartu. Future co-operation will include sharing of national infrastructures for training purposes and specialisation of training areas (BALTTRAIN) and collective formation of battalion-sized contingents for use in the NATO rapid-response force. In January 2011 the Baltic states were invited to join NORDEFCO, the defence framework of the Nordic countries.
thumb|right|alt=Estonian armored car in desert camouflage Afghanistan|An Estonian Patria Pasi XA-180 in Afghanistan
In January 2008, the Estonian military had almost 300 troops stationed in foreign countries as part of various international peacekeeping forces, including 35 Defence League troops stationed in Kosovo; 120 Ground Forces soldiers in the NATO-led ISAF force in Afghanistan; 80 soldiers stationed as a part of MNF in Iraq; and 2 Estonian officers in Bosnia-Herzegovina and 2 Estonian military agents in Israeli occupied Golan Heights.
The Estonian Defence Forces have also previously had military missions in Croatia from March until October 1995, in Lebanon from December 1996 until June 1997 and in Macedonia from May until December 2003. Estonia participates in the Nordic Battlegroup and has announced readiness to send soldiers also to Sudan to Darfur if necessary, creating the first African peacekeeping mission for the armed forces of Estonia., Estonian Ministry of Defence (in Estonian)
The Ministry of Defence and the Defence Forces have been working on a cyberwarfare and defence formation for some years now. In 2007, a military doctrine for an e-military of Estonia was officially introduced, as the country came under massive cyberattacks that year. The proposed aim of the e-military is to secure the vital infrastructure and e-infrastructure of Estonia. The main cyber warfare facility is the Computer Emergency Response Team of Estonia (CERT), founded in 2006. The organisation operates on security issues in local networks.
The President of the US, George W. Bush, announced his support of Estonia as the location of a NATO Cooperative Cyber Defence Centre of Excellence (CCDCOE) in 2007.Krister Paris USA toetab Eesti küberkaitsekeskust. Eesti Päevaleht, 28 June 2007 In the aftermath of the 2007 cyberattacks, plans to combine network defence with Estonian military doctrine have been nicknamed as the Tiger's Defence, in reference to Tiigrihüpe.. Office of the President of Estonia. 25 June 2007 The CCDCOE started its operations in November 2008.
Economy
thumb|left|State budget revenues per capita for 2016 in Estonia, Latvia and Lithuania.
thumb|alt=map of European Union eurozone|Estonia is part of a monetary union, the eurozone (dark blue), and of the EU single market.
thumb|left|alt=aerial view of high rises at sunset|The central business district of Tallinn
Estonia is economically deeply integrated with the economies of its northern neighbours, Sweden and Finland. As a member of the European Union, Estonia is considered a high-income economy by the World Bank. The GDP (PPP) per capita of the country, a good indicator of wealth, was $28,781 in 2015 according to the IMF, between that of the Slovak Republic and Lithuania, but below that of other long-time EU members such as Italy or Spain. The country is ranked 8th in the 2015 Index of Economic Freedom, and the 4th freest economy in Europe. Because of its rapid growth, Estonia has often been described as a Baltic Tiger beside Lithuania and Latvia. Beginning 1 January 2011, Estonia adopted the euro and became the 17th eurozone member state.
According to Eurostat, Estonia had the lowest ratio of government debt to GDP among EU countries at 6.7% at the end of 2010.Eurostat news release
thumb|right|alt=logo of skype white letters on blue background|The IT sector's share in GDP has sharply increased since 2004. Skype was created by Estonian developers and is mainly developed in Estonia.
A balanced budget, almost non-existent public debt, flat-rate income tax, free trade regime, competitive commercial banking sector, innovative e-Services and even mobile-based services are all hallmarks of Estonia's market economy.
Estonia produces about 75% of its consumed electricity."Electricity Balance, Yearly" 8 June 2010 (Estonian) In 2011 about 85% of it was generated with locally mined oil shale. Alternative energy sources such as wood, peat, and biomass make up approximately 9% of primary energy production. Renewable wind energy was about 6% of total consumption in 2009."Energy Effectiveness, Yearly" 22 September 2010 (Estonian) Estonia imports petroleum products from western Europe and Russia. Oil shale energy, telecommunications, textiles, chemical products, banking, services, food and fishing, timber, shipbuilding, electronics, and transportation are key sectors of the economy. The ice-free port of Muuga, near Tallinn, is a modern facility featuring good transshipment capability, a high-capacity grain elevator, chill/frozen storage, and new oil tanker off-loading capabilities. The railroad serves as a conduit between the West, Russia, and other points to the East.
Because of the global economic recession that began in 2007, the GDP of Estonia decreased by 1.4% in the 2nd quarter of 2008, over 3% in the 3rd quarter of 2008, and over 9% in the 4th quarter of 2008. The Estonian government made a supplementary negative budget, which was passed by the Riigikogu. The revenue of the budget was decreased for 2008 by EEK 6.1 billion and the expenditure by EEK 3.2 billion. In 2010, the economic situation stabilised and growth resumed, based on strong exports. In the fourth quarter of 2010, Estonian industrial output increased by 23% compared to the year before. The country has been experiencing economic growth ever since.
According to Eurostat data, Estonian PPS GDP per capita stood at 67% of the EU average in 2008. In March 2016, the average monthly gross salary in Estonia was €1105.
However, there are vast disparities in GDP between different areas of Estonia; currently, over half of the country's GDP is created in Tallinn. In 2008, the GDP per capita of Tallinn stood at 172% of the Estonian average,Half of the gross domestic product of Estonia is created in Tallinn. Statistics Estonia. Stat.ee. 29 September 2008. Retrieved 23 December 2011. which makes the per capita GDP of Tallinn as high as 115% of the European Union average, exceeding the average levels of other counties.
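The figures quoted in this section are mutually consistent: combining Tallinn's 2008 ratio to the national average with the national ratio to the EU average given above yields approximately the stated share of the EU average,

$1.72 \times 0.67 \approx 1.15$,

that is, about 115% of the European Union average.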
The unemployment rate in March 2016 was 6.4%, which is below the EU average, while real GDP growth in 2011 was 8.0%, five times the euro-zone average. In 2012, Estonia remained the only euro member with a budget surplus, and with a national debt of only 6%, it is one of the least indebted countries in Europe.
Historic development
thumb|right|Estonia's GDP growth from 2000 till 2012
By 1929, a stable currency, the kroon, was established. It was issued by the Bank of Estonia, the country's central bank.
Since re-establishing independence, Estonia has styled itself as the gateway between East and West and aggressively pursued economic reform and integration with the West. Estonia's market reforms put it among the economic leaders in the former COMECON area. In 1994, based on the economic theories of Milton Friedman, Estonia became one of the first countries to adopt a flat tax, with a uniform rate of 26% regardless of personal income. This rate has since been reduced three times, to 24% in January 2005, 23% in January 2006, and finally to 21% by January 2008.Personal Income Tax, Ministry of Finance of the Republic of Estonia The Government of Estonia finalised the design of Estonian euro coins in late 2004, and adopted the euro as the country's currency on 1 January 2011, later than planned due to continued high inflation. A land value tax is levied and used to fund local municipalities. It is a state-level tax, but 100% of the revenue is used to fund local councils. The rate is set by the local council within the limits of 0.1–2.5%. It is one of the most important sources of funding for municipalities. The land value tax is levied on the value of the land only; improvements and buildings are not considered. Very few exemptions are allowed, and even public institutions are subject to the tax. The tax has contributed to a high rate (~90%) of owner-occupied residences within Estonia, compared to a rate of 67.4% in the United States.
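As a rough illustration of the computation described above, the following sketch assumes only what the paragraph states: the tax base is the assessed land value alone (buildings and improvements excluded) and the rate is chosen by the local council within the 0.1–2.5% band. The function name, parameters and example figures are hypothetical, not taken from Estonian law or any official system.

```python
def land_value_tax(assessed_land_value: float, council_rate_percent: float) -> float:
    """Illustrative land value tax: the rate applies to the land value only,
    and must fall within the 0.1-2.5% band set by the local council."""
    if not (0.1 <= council_rate_percent <= 2.5):
        raise ValueError("Council rate must be between 0.1% and 2.5%")
    return assessed_land_value * council_rate_percent / 100.0

# Example: a plot assessed at 20,000 euros taxed at a 1.5% council rate
# yields 300 euros per year; buildings on the plot do not affect the base.
print(land_value_tax(20_000, 1.5))  # 300.0
```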
In 1999, Estonia experienced its worst year economically since it regained independence in 1991, largely because of the impact of the 1998 Russian financial crisis. Estonia joined the WTO in November 1999. With assistance from the European Union, the World Bank and the Nordic Investment Bank, Estonia completed most of its preparations for European Union membership by the end of 2002 and now has one of the strongest economies of the new member states of the European Union. Estonia joined the OECD in 2010.
Resources
thumb|right|The oil shale industry in Estonia is one of the most developed in the world.IEA (2013), p. 20 In 2012, oil shale supplied 70% of Estonia's total primary energy and accounted for 4% of Estonia's gross domestic product.IEA (2013), p. 7
Although Estonia is in general resource-poor, the land still offers a large variety of smaller resources. The country has large oil shale and limestone deposits, along with forests that cover 48% of the land.Forest resources based on national forest inventory Statistics Estonia 2012 In addition to oil shale and limestone, Estonia also has large reserves of phosphorite, pitchblende, and granite that currently are not mined, or not mined extensively.
Significant quantities of rare earth oxides are found in tailings accumulated from 50 years of uranium ore, shale and loparite mining at Sillamäe. Because of the rising prices of rare earths, extraction of these oxides has become economically viable. The country currently exports around 3000 tonnes per annum, representing around 2% of world production.
In recent years, public debate has discussed whether Estonia should build a nuclear power plant to secure energy production after closure of old units in the Narva Power Plants, if they are not reconstructed by the year 2016.Tulevikuraport: Soome-Eesti tuumajaam võiks olla Eestis (Future Report: Finnish and Estonian joint nuclear power station could be located in Estonia), Postimees. 25 June 2008 (in Estonian)
Industry and environment
thumb|left|alt=Rõuste wind turbines next to wetland|Rõuste wind farm in Hanila Parish.
Food, construction, and electronic industries are currently among the most important branches of Estonia's industry. In 2007, the construction industry employed more than 80,000 people, around 12% of the entire country's workforce. Another important industrial sector is the machinery and chemical industry, which is mainly located in Ida-Viru County and around Tallinn.
The oil shale-based mining industry, which is also concentrated in East Estonia, produces around 90% of the entire country's electricity. Although the amount of pollutants emitted to the air has been falling since the 1980s,M. Auer (2004). Estonian Environmental Reforms: A Small Nation's Outsized Accomplishments. In: Restoring Cursed Earth: Appraising Environmental Policy Reforms in Eastern Europe and Russia. Rowman & Littlefield. pp 117–144. the air is still polluted with sulphur dioxide from the mining industry that the Soviet Union rapidly developed in the early 1950s. In some areas the coastal seawater is polluted, mainly around the Sillamäe industrial complex.
In terms of energy and energy production, Estonia is a dependent country. In recent years many local and foreign companies have been investing in renewable energy sources. The importance of wind power has been increasing steadily in Estonia: the total installed wind power capacity is currently nearly 60 MW, while roughly 399 MW worth of projects are being developed and more than 2800 MW worth of projects have been proposed in the Lake Peipus area and the coastal areas of Hiiumaa.Peipsile võib kerkida mitusada tuulikut, Postimees. 21 October 2007 (in Estonian) Henrik Ilves Tuule püüdmine on saanud Eesti kullapalavikuks, Eesti Päevaleht. 13 June 2008 (in Estonian)
Currently, there are plans to renovate some older units of the Narva Power Plants, establish new power stations, and provide higher efficiency in oil shale-based energy production. Estonia liberalised 35% of its electricity market in April 2010. The electricity market as a whole will be liberalised by 2013.
Together with Lithuania, Poland, and Latvia, the country considered participating in constructing the Visaginas nuclear power plant in Lithuania to replace the Ignalina plant. However, due to the slow pace of the project and problems in the sector (such as the Fukushima disaster and the troubled example of the Olkiluoto plant), Eesti Energia has shifted its main focus to shale oil production, which is seen as a much more profitable business.
Estonia has a strong information technology sector, partly owing to the Tiigrihüpe project undertaken in the mid-1990s, and has been mentioned as the most "wired" and advanced country in Europe in terms of e-government.Hackers Take Down the Most Wired Country in Europe, August 2007 A new direction is to offer the e-services available in Estonia to non-residents via the e-residency programme.
Skype was written by Estonia-based developers Ahti Heinla, Priit Kasesalu, and Jaan Tallinn, who had also originally developed Kazaa. Other notable tech startups include GrabCAD, Fortumo and TransferWise. It is even claimed that Estonia has the most startups per person in the world.
The Estonian electricity network forms a part of the Nord Pool Spot network.
Trade
thumb|right|alt=graph of exports in 2010 showing $10,345,000,000 2.8 percent cars, 12 percent lubricating oil, 3.8 percent telephone|Graphical depiction of Estonia's product exports in 28 color-coded categories
Estonia has had a market economy since the end of the 1990s and one of the highest per capita income levels in Eastern Europe. Proximity to the Scandinavian markets, its location between the East and West, competitive cost structure and a highly skilled labour force have been the major Estonian comparative advantages in the beginning of the 2000s (decade). As the largest city, Tallinn has emerged as a financial centre and the Tallinn Stock Exchange joined recently with the OMX system. The current government has pursued tight fiscal policies, resulting in balanced budgets and low public debt.
In 2007, however, a large current account deficit and rising inflation put pressure on Estonia's currency, which was pegged to the Euro, highlighting the need for growth in export-generating industries.
Estonia exports mainly machinery and equipment, wood and paper, textiles, food products, furniture, and metals and chemical products. Estonia also exports 1.562 billion kilowatt hours of electricity annually. At the same time Estonia imports machinery and equipment, chemical products, textiles, food products and transportation equipment. Estonia imports 200 million kilowatt hours of electricity annually.
Between 2007 and 2013, Estonia received 53.3 billion kroons (3.4 billion euros) from various European Union Structural Funds as direct support, constituting the largest foreign investments into Estonia. The majority of the European Union financial aid will be invested in the following fields: energy economy, entrepreneurship, administrative capability, education, information society, environmental protection, regional and local development, research and development activities, healthcare and welfare, transportation and the labour market.Archived copy at (Unknown) (14 November 2010). Riigi Raha Raamat. 21 July 2011 (in Estonian)
Demographics
thumb|left|Oskar Friberg is the last male Estonian Swede on the island of Vormsi who outlived the Soviet Occupation
thumb|270px|alt=The population of Estonia, from 1970 to 2009, with a peak in 1990|Population of Estonia 1970–2009. The changes are largely attributed to Soviet immigration and emigration.
Before World War II, ethnic Estonians constituted 88% of the population, with national minorities constituting the remaining 12%. The largest minority groups in 1934 were Russians, Germans, Swedes, Latvians, Jews, Poles, Finns and Ingrians.
The share of Baltic Germans in Estonia had fallen from 5.3% (~46,700) in 1881 to 1.3% (16,346) by the year 1934,Baltic Germans in Estonia. Estonian Institute http://www.einst.ee which was mainly due to emigration to Germany in the light of general Russification at the end of the 19th century and the independence of Estonia in the 20th century.
Between 1945 and 1989, the share of ethnic Estonians in the population resident within the currently defined boundaries of Estonia dropped to 61%, caused primarily by the Soviet programme promoting mass immigration of urban industrial workers from Russia, Ukraine, and Belarus, as well as by wartime emigration and Joseph Stalin's mass deportations and executions. By 1989, minorities constituted more than one-third of the population, as the number of non-Estonians had grown almost fivefold.
At the end of the 1980s, Estonians perceived their demographic change as a national catastrophe. This was a result of the migration policies essential to the Soviet Nationalisation Programme aiming to russify Estonia – administrative and military immigration of non-Estonians from the USSR coupled with the deportation of Estonians to the USSR. In the decade following the reconstitution of independence, large-scale emigration by ethnic Russians and the removal of the Russian military bases in 1994 caused the proportion of ethnic Estonians in Estonia to increase from 61% to 69% in 2006.
Modern Estonia is a fairly ethnically heterogeneous country, but this heterogeneity is not a feature of much of the country as the non-Estonian population is concentrated in two of Estonia's counties. Thirteen of Estonia's 15 counties are over 80% ethnic Estonian, the most homogeneous being Hiiumaa, where Estonians account for 98.4% of the population. In the counties of Harju (including the capital city, Tallinn) and Ida-Viru, however, ethnic Estonians make up 60% and 20% of the population, respectively. Russians make up 25.6% of the total population but account for 36% of the population in Harju county and 70% of the population in Ida-Viru county.
The Estonian Cultural Autonomy law that was passed in 1925 was unique in Europe at that time. Cultural autonomies could be granted to minorities numbering more than 3,000 people with longstanding ties to the Republic of Estonia. Before the Soviet occupation, the German and Jewish minorities managed to elect a cultural council. The Law on Cultural Autonomy for National Minorities was reinstated in 1993. Historically, large parts of Estonia's northwestern coast and islands have been populated by the indigenous Rannarootslased (Coastal Swedes).
In recent years the number of Coastal Swedes has risen again, reaching almost 500 people in 2008, owing to the property reforms of the early 1990s. In 2005, the Ingrian Finnish minority in Estonia elected a cultural council and was granted cultural autonomy. The Estonian Swedish minority similarly received cultural autonomy in 2007.
Society
thumb|left|A folk dance group performing during the Midsummer festivities in Viljandi.
Estonian society has undergone considerable changes over the last twenty years, one of the most notable being the increasing level of stratification, and the distribution of family income. The Gini coefficient has been steadily higher than the European Union average (31 in 2009),CIA World Factbook. . Retrieved 7 November 2011 although it has clearly dropped. The registered unemployment rate in January 2012 was 7.7%.Registreeritud töötus ja kindlustushüvitised jaanuaris 2012. Estonian unemployment office (in Estonian)
Modern Estonia is a multinational country in which 109 languages are spoken, according to a 2000 census. 67.3% of Estonian citizens speak Estonian as their native language, 29.7% Russian, and 3% speak other languages. As of 2 July 2010, 84.1% of Estonian residents are Estonian citizens, 8.6% are citizens of other countries and 7.3% are "citizens with undetermined citizenship". Since 1992 roughly 140,000 people have acquired Estonian citizenship by passing naturalisation exams.Eesti andis mullu kodakondsuse 2124 inimesele, Postimees. 9 January 2009
The ethnic distribution in Estonia is very homogeneous: in most counties, over 90% of the people are ethnic Estonians. This is in contrast to large urban centres like Tallinn, where Estonians account for 60% of the population, and the remainder is composed mostly of Russian and other Slavic inhabitants, who arrived in Estonia during the Soviet period.
The 2008 United Nations Human Rights Council report called "extremely credible" the description of the citizenship policy of Estonia as "discriminatory".Naturalisation in Estonia Statement by the Legal Information Centre for Human Rights (Tallinn, Estonia) ([...]the Special Rapporteur considers extremely credible the views of the representatives of the Russian-speaking minorities who expressed that the citizenship policy is discriminatory[...]) According to surveys, only 5% of the Russian community have considered returning to Russia in the near future. Estonian Russians have developed their own identity – more than half of the respondents recognised that Estonian Russians differ noticeably from the Russians in Russia. Compared with a survey from 2000, Russians' attitude toward the future is much more positive.Eesti ühiskond Society. (2006, PDF in Estonian/English). Retrieved 23 December 2011.
Estonia was the first post-Soviet republic to legalise civil unions for same-sex couples. The law was approved in October 2014 and came into effect on 1 January 2016.
Urbanization
Tallinn is the capital and the largest city of Estonia. It lies on the northern coast of Estonia, along the Gulf of Finland. There are 33 cities and several town-parish towns in the country. In total, there are 47 linna, with "linn" translating into English as both "city" and "town". More than 70% of the population lives in towns.
Religion
Religion                        2000 Census                2011 Census
                                Number       %             Number       %
Orthodox Christians             143,554      12.80         176,773      16.15
Lutheran Christians             152,237      13.57         108,513      9.91
Baptists                        6,009        0.54          4,507        0.41
Roman Catholics                 5,745        0.51          4,501        0.41
Jehovah's Witnesses             3,823        0.34          3,938        0.36
Old Believers                   2,515        0.22          2,605        0.24
Christian Free Congregations    223          0.02          2,189        0.20
Earth Believers                 1,058        0.09          1,925        0.18
Taara Believers                 –            –             1,047        0.10
Pentecostals                    2,648        0.24          1,855        0.17
Muslims                         1,387        0.12          1,508        0.14
Adventists                      1,561        0.14          1,194        0.11
Buddhists                       622          0.06          1,145        0.10
Methodists                      1,455        0.13          1,098        0.10
Other religion                  4,995        0.45          8,074        0.74
No religion                     450,458      40.16         592,588      54.14
Undeclared                      343,292      30.61         181,104      16.55
Total¹                          1,121,582    100.00        1,094,564    100.00
¹ Population: persons aged 15 and older.
thumb|left|Ruhnu stave church, built in 1644, is the oldest surviving wooden building in Estonia
According to Livonian Chronicle of Henry, Tharapita was the predominant deity for the Oeselians before Christianization.Taarapita – the Great God of the Oeselians. Article by Urmas Sutrop
Estonia was Christianised by the Teutonic Knights in the 13th century. During the Reformation, Protestantism spread, and the Lutheran church was officially established in Estonia in 1686. Before the Second World War, Estonia was approximately 80% Protestant, overwhelmingly Lutheran, with smaller numbers adhering to Calvinism and other Protestant branches. Many Estonians profess not to be particularly religious, because religion through the 19th century was associated with German feudal rule. Historically, another minority religion, that of the Russian Old Believers, has been present in the Lake Peipus area in Tartu County.
Today, Estonia's constitution guarantees freedom of religion, separation of church and state, and individual rights to privacy of belief and religion.Constitution of Estonia#Chapter 2: Fundamental Rights, Liberties, and Duties Article 40.–42. According to the Dentsu Communication Institute Inc, Estonia is one of the least religious countries in the world, with 75.7% of the population claiming to be irreligious. The Eurobarometer Poll 2005 found that only 16% of Estonians profess a belief in a god, the lowest rate of all countries studied. According to the Lutheran World Federation, the historic Lutheran denomination has a large presence with 180,000 registered members.
A 2012 Eurobarometer poll on religiosity in the European Union found that Christianity is the largest religion in Estonia, accounting for 28.06% of Estonians. The question asked was "Do you consider yourself to be...?", with a card showing: Catholic, Orthodox, Protestant, Other Christian, Jewish, Muslim, Sikh, Buddhist, Hindu, Atheist, and Non-believer/Agnostic. Space was also given for Other (spontaneous) and Don't know. Jewish, Sikh, Buddhist and Hindu did not reach the 1% threshold. Eastern Orthodox Christians are the largest Christian group in Estonia, accounting for 17% of Estonian citizens, while Protestants make up 6%, and Other Christians make up 22%. Non-believers/Agnostics account for 22%, Atheists for 15%, and undeclared for 15%.
thumb|right|alt=St. Olaf's church, Tallinn spire look over city and river|St. Olaf's church, Tallinn
The largest religious denomination in the country is Lutheranism, adhered to by 160,000 Estonians (or 13% of the population), principally ethnic Estonians. Other organizations, such as the World Council of Churches, report that there are as many as 265,700 Estonian Lutherans. Additionally, there are between 8,000–9,000 members abroad.
Another major group consists of those who follow Eastern Orthodox Christianity, practised chiefly by the Russian minority; the Russian Orthodox Church is the second largest denomination, with 150,000 members. The Estonian Apostolic Orthodox Church, under the Greek Orthodox Ecumenical Patriarchate, claims another 20,000 members. Thus, the number of adherents of Lutheranism and Orthodoxy, without regard to citizenship or ethnicity, is roughly equal. Catholics have their Latin Apostolic Administration of Estonia.
According to the census of 2000 (data in table to the right), there were about 1,000 adherents of the Taara faith or Maausk in Estonia (see Maavalla Koda). The Jewish community has an estimated population of about 1,900 (see History of the Jews in Estonia). Around 68,000 people consider themselves atheists.
Languages
thumb|left|195px|The Finnic languages
thumb|right|150px|alt=The four distinct characters in the Estonian alphabet. Ö, Ä, Ü, and Õ|The four distinct characters in the Estonian alphabet
The official language, Estonian, belongs to the Finnic branch of the Uralic languages. Estonian is closely related to Finnish, spoken on the other side of the Gulf of Finland, and is one of the few languages of Europe that is not of Indo-European origin. Despite some overlaps in vocabulary due to borrowings, in terms of origin, Estonian and Finnish are not related to their nearest geographical neighbours, Swedish, Latvian, and Russian, which are all Indo-European languages.
Although the Estonian and Germanic languages are of very different origins, one can identify many similar words in Estonian and German, for example. This is primarily because the Estonian language has borrowed nearly one third of its vocabulary from Germanic languages, mainly from Low Saxon (Middle Low German) during the period of German rule, and High German (including standard German). The percentage of Low Saxon and High German loanwords can be estimated at 22–25 percent, with Low Saxon making up about 15 percent.
South Estonian (including Võro and Seto varieties), spoken in South-Eastern Estonia, is genealogically distinct from northern Estonian, but traditionally and officially considered as dialects and "regional forms of the Estonian language", not separate language(s).
Russian is still spoken as a secondary language by forty- to seventy-year-old ethnic Estonians, because Russian was the unofficial language of the Estonian SSR from 1944 to 1991 and taught as a compulsory second language during the Soviet era. In 1998, most first- and second-generation industrial immigrants from the former Soviet Union (mainly the Russian SFSR) did not speak Estonian. However, by 2010, 64.1% of non-ethnic Estonians spoke Estonian.Table ML133, Eesti Statistika. Retrieved 30 April 2011 The latter, mostly Russian-speaking ethnic minorities, reside predominantly in the capital city of Tallinn and the industrial urban areas in Ida-Virumaa.
From the 13th to 20th century, there were Swedish-speaking communities in Estonia, particularly in the coastal areas and on the islands (e.g., Hiiumaa, Vormsi, Ruhnu; in Swedish, known as Dagö, Ormsö, Runö, respectively) along the Baltic sea, communities which today have all but disappeared. The Swedish-speaking minority was represented in parliament, and entitled to use their native language in parliamentary debates.
From 1918–1940, when Estonia was independent, the small Swedish community was well treated. Municipalities with a Swedish majority, mainly found along the coast, used Swedish as the administrative language and Swedish-Estonian culture saw an upswing. However, most Swedish-speaking people fled to Sweden before the end of World War II, that is, before the invasion of Estonia by the Soviet army in 1944. Only a handful of older speakers remain.
Among many other areas, the influence of Swedish is especially distinct in Noarootsi Parish in Läänemaa (known as Nuckö kommun in Swedish and Noarootsi vald in Estonian), where there are many villages with bilingual Estonian and Swedish names and street signs.
The most common foreign languages learned by Estonian students are English, Russian, German and French. Other popular languages include Finnish, Spanish and Swedish.
Education and science
thumb|alt=gray stucco building three story building with gray slate hip roof, central portico and pediment|The University of Tartu is one of the oldest universities in Northern Europe and the highest-ranked university in Estonia.
The history of formal education in Estonia dates back to the 13th and 14th centuries when the first monastic and cathedral schools were founded. The first primer in the Estonian language was published in 1575. The oldest university is the University of Tartu, established by the Swedish king Gustav II Adolf in 1632. In 1919, university courses were first taught in the Estonian language.
Today's education in Estonia is divided into general, vocational, and hobby education. The education system is based on four levels: pre-school, basic, secondary, and higher education. A wide network of schools and supporting educational institutions has been established. The Estonian education system consists of state, municipal, public, and private institutions. There are currently 589 schools in Estonia. Estonian Education Infosystem, (in Estonian)
According to the Programme for International Student Assessment, the performance levels of gymnasium-age pupils in Estonia are among the highest in the world: in 2010, the country was ranked 13th for the quality of its education system, well above the OECD average. Additionally, around 89% of Estonian adults aged 25–64 have earned the equivalent of a high-school degree, one of the highest rates in the industrialised world.
thumb|left|alt=Building of Estonian Students' Society in Tartu. In August 2008 a Georgian flag was hoisted besides Estonian to support Georgia in the South Ossetia war.|Building of the Estonian Students' Society in Tartu. It is considered to be the first example of Estonian national architecture.Eesti Üliõpilaste Seltsi maja Tartus — 100 aastat Estonian World Review, 16 October 2002 Treaty of Tartu between Finland and Soviet Russia was signed in that building in 1920.
Academic higher education in Estonia is divided into three levels: bachelor's, master's, and doctoral studies. In some specialties (basic medical studies, veterinary, pharmacy, dentistry, architect-engineer, and a classroom teacher programme) the bachelor's and master's levels are integrated into one unit. Estonian public universities have significantly more autonomy than applied higher education institutions.
In addition to organising the academic life of the university, universities can create new curricula, establish admission terms and conditions, approve the budget, approve the development plan, elect the rector, and make restricted decisions in matters concerning assets. Estonia has a moderate number of public and private universities. The largest public universities are the University of Tartu, Tallinn University of Technology, Tallinn University, Estonian University of Life Sciences, Estonian Academy of Arts; the largest private university is Estonian Business School.
thumb|alt=ESTCube-1 micro satellite orbiting globe and beaming light to Estonia|ESTCube-1 is the first Estonian satellite.
The Estonian Academy of Sciences is the national academy of science. The strongest public non-profit research institute that carries out fundamental and applied research is the National Institute of Chemical Physics and Biophysics (NICPB; Estonian KBFI). The first computer centres were established in the late 1950s in Tartu and Tallinn. Estonian specialists contributed to the development of software engineering standards for ministries of the Soviet Union during the 1980s. Estonia spends around 2.38% of its GDP on research and development, compared to an EU average of around 2.0%.
Some of the best-known scientists related to Estonia include astronomers Friedrich Georg Wilhelm von Struve, Ernst Öpik and Jaan Einasto, biologists Karl Ernst von Baer and Jakob von Uexküll, chemists Wilhelm Ostwald and Carl Schmidt, economist Ragnar Nurkse, mathematician Edgar Krahn, medical researchers Ludvig Puusepp and Nikolay Pirogov, physicist Thomas Johann Seebeck, political scientist Rein Taagepera, psychologists Endel Tulving and Risto Näätänen, and semiotician Yuri Lotman.
Culture
The culture of Estonia incorporates indigenous heritage, as represented by the Estonian language and the sauna, with mainstream Nordic and European cultural aspects. Because of its history and geography, Estonia's culture has been influenced by the traditions of the adjacent area's various Finnic, Baltic, Slavic and Germanic peoples as well as the cultural developments in the former dominant powers Sweden and Russia.
Today, Estonian society encourages liberty and liberalism, with popular commitment to the ideals of limited government, discouraging centralised power and corruption. The Protestant work ethic remains a significant cultural staple, and free education is a highly prized institution. Like the mainstream culture in the other Nordic countries, Estonian culture can be seen to build upon the ascetic environmental realities and traditional livelihoods, a heritage of comparatively widespread egalitarianism for practical reasons (see: Everyman's right and universal suffrage), and the ideals of closeness to nature and self-sufficiency (see: summer cottage).
The Estonian Academy of Arts (Estonian: Eesti Kunstiakadeemia, EKA) provides higher education in art, design, architecture, media, art history and conservation, while the Viljandi Culture Academy of the University of Tartu aims to popularise native culture through curricula such as native construction, native blacksmithing, native textile design, traditional handicraft and traditional music, but also jazz and church music. In 2010, there were 245 museums in Estonia whose combined collections contain more than 10 million objects.Eesti 245 muuseumis säilitatakse 10 miljonit museaali. Postimees, 30 Oct 2011. (in Estonian)
Music
thumb|left|The 26th Estonian Song Festival (2014) at the Tallinn Song Festival Grounds.
The earliest mention of Estonian singing dates back to Saxo Grammaticus' Gesta Danorum (c. 1179). Saxo speaks of Estonian warriors who sang at night while waiting for a battle. The older folksongs are also referred to as regilaulud, songs in the poetic metre regivärss, a tradition shared by all Baltic Finns. Runic singing was widespread among Estonians until the 18th century, when rhythmic folk songs began to replace it.
Traditional wind instruments derived from those used by shepherds were once widespread, but are now becoming again more commonly played. Other instruments, including the fiddle, zither, concertina, and accordion are used to play polka or other dance music. The kannel is a native instrument that is now again becoming more popular in Estonia. A Native Music Preserving Centre was opened in 2008 in Viljandi.Margus Haav Pärimusmuusika ait lööb uksed valla (Estonian Native Music Preserving Centre is opened). Postimees. 27 March 2008 (in Estonian)
thumb|right|alt=Arvo Pärt bearded balding man facing left|Arvo Pärt has been the world's most performed living composer since 2010.
The tradition of Estonian Song Festivals (Laulupidu) started at the height of the Estonian national awakening in 1869. Today, it is one of the largest amateur choral events in the world. In 2004, about 100,000 people participated in the Song Festival. Since 1928, the Tallinn Song Festival Grounds (Lauluväljak) have hosted the event every five years in July. The last festival took place in July 2014. In addition, Youth Song Festivals are also held every four or five years, the last of them in 2011, and the next is scheduled for 2017.Estonian Song and Dance Celebrations. Estonian Song and Dance Celebration Foundation
Professional Estonian musicians and composers such as Rudolf Tobias, Miina Härma, Mart Saar, Artur Kapp, Juhan Aavik, Artur Lemba and Heino Eller emerged in the late 19th century. Today, the best-known Estonian composers are Arvo Pärt, Eduard Tubin, and Veljo Tormis. In 2014, Arvo Pärt was the world's most performed living composer for the fourth year in a row.
In the 1950s, Estonian baritone Georg Ots rose to worldwide prominence as an opera singer.
In popular music, Estonian artist Kerli Kõiv has become popular in Europe, as well as gaining moderate popularity in North America. She has provided music for the 2010 Disney film Alice in Wonderland and the television series Smallville in the United States of America.
Estonia won the Eurovision Song Contest in 2001 with the song "Everybody" performed by Tanel Padar and Dave Benton. In 2002, Estonia hosted the event. Maarja-Liis Ilus has competed for Estonia on two occasions (1996 and 1997), while Eda-Ines Etti, Koit Toome and Evelin Samuel owe their popularity partly to the Eurovision Song Contest. Lenna Kuurmaa is a very popular singer in Europe, with her band Vanilla Ninja. "Rändajad" by Urban Symphony was the first song in Estonian to chart in the UK, Belgium, and Switzerland.
Literature
right|thumb|alt=pastel drawing of Kalevipoeg carrying boards by Oskar Kallis |Kalevipoeg is the national epic of Estonia, composed by Friedrich Reinhold Kreutzwald.
Estonian literature refers to literature written in the Estonian language (c. 1 million speakers). The domination of Estonia by Germany, Sweden, and Russia after the Northern Crusades, from the 13th century until 1918, resulted in few early written literary works in the Estonian language. The oldest records of written Estonian date from the 13th century. The Origines Livoniae, the Chronicle of Henry of Livonia, contains Estonian place names, words and fragments of sentences. The Liber Census Daniae (1241) contains Estonian place and family names.
The cultural stratum of Estonian was originally characterised by a largely lyrical form of folk poetry based on syllabic quantity. Apart from a few albeit remarkable exceptions, this archaic form has not been much employed in later times. One of the most outstanding achievements in this field is the national epic Kalevipoeg. At a professional level, traditional folk song reached its new heyday during the last quarter of the 20th century, primarily thanks to the work of composer Veljo Tormis.
Oskar Luts was the most prominent prose writer of early Estonian literature, and he is still widely read today, especially for his lyrical school novel Kevade (Spring).Seeking the contours of a 'truly' Estonian literature Estonica.org Anton Hansen Tammsaare's social epic and psychological realist pentalogy Truth and Justice captured the evolution of Estonian society from a peasant community to an independent nation.Literature and an independent Estonia Estonica.org In modern times, Jaan Kross and Jaan Kaplinski are Estonia's best-known and most translated writers.Jaan Kross at google.books Among the most popular writers of the late 20th and early 21st centuries are Tõnu Õnnepalu and Andrus Kivirähk, who uses elements of Estonian folklore and mythology, reshaping them into the absurd and the grotesque.Andrus Kivirähk. The Old Barny (novel) Estonian Literature Centre
Media
The cinema of Estonia started in 1908 with the production of a newsreel about Swedish King Gustav V's visit to Tallinn. The first public TV broadcast in Estonia was in July 1955. Regular, live radio broadcasts began in December 1926. Deregulation in the field of electronic media has brought radical changes compared to the beginning of the 1990s. The first licenses for private TV broadcasters were issued in 1992. The first private radio station went on the air in 1990.
Today the media is a vibrant and competitive sector. There is a plethora of weekly newspapers and magazines, and Estonians have a choice of 9 domestic TV channels and a host of radio stations. The Constitution guarantees freedom of speech, and Estonia has been internationally recognised for its high rate of press freedom, having been ranked 3rd in the 2012 Press Freedom Index by Reporters Without Borders.
Estonia has two news agencies. The Baltic News Service (BNS), founded in 1990, is a private regional news agency covering Estonia, Latvia and Lithuania. ETV24 is an agency owned by Eesti Rahvusringhääling, a publicly funded radio and television organisation created on 30 June 2007 to take over the functions of the formerly separate Eesti Raadio and Eesti Televisioon under the terms of the Estonian National Broadcasting Act.
Architecture
thumb|right|A traditional farmhouse built in the Estonian vernacular style.
The architectural history of Estonia mainly reflects its contemporary development in northern Europe. Especially worth mentioning is the architectural ensemble that makes up the medieval old town of Tallinn, which is on the UNESCO World Heritage List. In addition, the country has several unique, more or less preserved hill forts dating from pre-Christian times and a large number of still intact medieval castles and churches, while the countryside is still shaped by the presence of a vast number of manor houses from earlier centuries.
Holidays
The Estonian National Day is Independence Day, celebrated on 24 February, the day the Estonian Declaration of Independence was issued. There are 12 public holidays (which come with a day off) and 12 national holidays celebrated annually.
Cuisine
thumb|right|alt=Lingonberry jam. left and traditional blood sausage from Tampere, Finland|Similar to Finland, blood sausage with lingonberry sauce is considered a national delicacy
Historically, the cuisine of Estonia has been heavily dependent on seasons and simple peasant food, which today is influenced by many countries. Today, it includes many typical international foods. The most typical foods in Estonia are black bread, pork, potatoes, and dairy products. (in Estonian) Traditionally in summer and spring, Estonians like to eat everything fresh – berries, herbs, vegetables, and everything else that comes straight from the garden. Hunting and fishing have also been very common, although currently hunting and fishing are enjoyed mostly as hobbies. Today, it is also very popular to grill outside in summer.
Traditionally in winter, jams, preserves, and pickles are brought to the table. Gathering and conserving fruits, mushrooms, and vegetables for winter has always been popular, but today gathering and conserving is becoming less common because everything can be bought from stores. However, preparing food for winter is still very popular in the countryside.
Sports
thumb|left|alt=Estonian delegation in 2010 Winter Olympics, opening ceremony white jackets and blue trousers |Estonian delegation in Vancouver, 2010.
Sport plays an important role in Estonian culture. After declaring independence from Russia in 1918, Estonia first competed as a nation at the 1920 Summer Olympics, although the National Olympic Committee was established in 1923. Estonian athletes took part in the Olympic Games until the country was annexed by the Soviet Union in 1940. The 1980 Summer Olympics sailing regatta was held in the capital city, Tallinn. Since regaining independence in 1991, Estonia has participated in all Olympics. Estonia has won most of its medals in athletics, weightlifting, wrestling and cross-country skiing. Estonia has had very good success at the Olympic Games given the country's small population. Estonia's best results were 13th place in the medal table at the 1936 Summer Olympics and 12th at the 2006 Winter Olympics.
Notable Estonian athletes include wrestlers Kristjan Palusalu, Johannes Kotkas, Voldemar Väli, and Georg Lurich; skiers Andrus Veerpalu and Kristina Šmigun-Vähi; fencer Nikolai Novosjolov; decathlete Erki Nool; tennis players Kaia Kanepi and Anett Kontaveit; cyclists Jaan Kirsipuu and Erika Salumäe; and discus throwers Gerd Kanter and Aleksander Tammert.
thumb|right|alt=Torsten Siem Ice yachting at the DN European Championship 2011, in Nasva, Estonia|Ice yachting DN European Championship 2011, Nasva, Estonia. Torsten Siems.
Kiiking, a relatively new sport, was invented in 1996 by Ado Kosk in Estonia. Kiiking involves a modified swing in which the rider of the swing tries to go around 360 degrees.
Paul Keres, Estonian and Soviet chess grandmaster, was among the world's top players from the mid-1930s to the mid-1960s. He narrowly missed a chance at a World Chess Championship match on five occasions.
Basketball is also a notable sport in Estonia. The domestic top-tier basketball championship is called the Korvpalli Meistriliiga. BC Kalev/Cramo are the most recent champions, having won the league in the 2015–16 season. The University of Tartu team has won the league a record 26 times. Estonian clubs also participate in European and regional competitions. The Estonian national basketball team participated in the 1936 Summer Olympics and has appeared in EuroBasket four times, most recently at EuroBasket 2015.
At the 2016 Bandy World Championship the national team will play in Division A for the first time.
Kelly Sildaru, an Estonian freestyle skier, won the gold medal in the slopestyle event in the 2016 Winter X Games. At age 13, she became the youngest gold medalist to date at a Winter X Games event, and the first person to win a Winter X Games medal for Estonia. She has also won the women's slopestyle at 2015 and 2016 Winter Dew Tour.
International rankings
The following are links to international rankings of Estonia.
Index                                                           Rank    Countries reviewed
Freedom House Internet Freedom 2016                             1st     65
Global Gender Gap Report Global Gender Gap Index 2015           21st    136
Index of Economic Freedom 2015                                  8th     178
International Tax Competitiveness Index 2015                    1st     35
Reporters Without Borders Press Freedom Index 2011–2012         11th    187
State of World Liberty Index 2006                               1st     159
Human Development Index 2015                                    30th    169
Corruption Perceptions Index 2015                               23rd    176
Networked Readiness Index 2014                                  21st    133
Ease of Doing Business Index 2016                               16th    185
State of The World's Children's Index 2012                      10th    165
State of The World's Women's Index 2012                         18th    165
World Freedom Index 2014                                        8th     165
Legatum Prosperity Index 2016                                   26th    149
EF English Proficiency Index 2013                               4th     60
Programme for International Student Assessment 2015 (Maths)     9th     72
Programme for International Student Assessment 2015 (Science)   3rd     72
Programme for International Student Assessment 2015 (Reading)   6th     72
See also
Outline of Estonia
Index of Estonia-related articles
References
Bibliography
Jaak Kangilaski et al. (2005) Valge raamat (1940–1991), Justiitsministeerium, ISBN 9985-70-194-1.
Further reading
Giuseppe D'Amato, Travel to the Baltic Hansa. The European Union and its enlargement to the East (in Italian: Viaggio nell'Hansa baltica. L'Unione europea e l'allargamento ad Est). Greco&Greco editori, Milano, 2004. ISBN 88-7980-355-7. http://www.europarussia.com/books/viaggio_nellhansa_baltica/travel-to-the-baltic-hansa
External links
Government
The President of Estonia
The Parliament of Estonia
Estonian Government
Estonian Ministry of Foreign Affairs
Statistical Office of Estonia
Chief of State and Cabinet Members
Travel
Official gateway to Estonia
E-Estonia Portal
VisitEstonia Portal
Maps
google.com map of Estonia
General information
Encyclopedia Estonica
Estonian Institute
BBC News – Estonia country profile
Estonia at UCB Libraries GovPubs''
News
Estonian Public Broadcasting
Postimees
Eesti Päevaleht
Õhtuleht
aripaev.ee
Delfi
Weather and time
Estonian Weather Service
Category:Liberal democracies
Category:Member states of NATO
Category:Member states of the Council of Europe
Category:Member states of the European Union
Category:Member states of the Union for the Mediterranean
Category:Member states of the United Nations
Category:Members of the Unrepresented Nations and Peoples Organization
Category:Republics
Category:States and territories established in 1918
Category:States and territories established in 1991 | 28,222,445 | 2017-01 |
Race (human categorization) | Race is the classification of humans into groups based on physical traits, ancestry, genetics or social relations, or the relations between them. First used to refer to speakers of a common language and then to denote national affiliations, by the 17th century race began to refer to physical (i.e. phenotypical) traits. The term was often used in a general biological taxonomic sense, starting from the 19th century, to denote genetically differentiated human populations defined by phenotype.
Social conceptions and groupings of races vary over time, involving folk taxonomies that define essential types of individuals based on perceived traits. Scientists consider biological essentialism obsolete, and generally discourage racial explanations for collective differentiation in both physical and behavioral traits.
Even though there is a broad scientific agreement that essentialist and typological conceptualizations of race are untenable, scientists around the world continue to conceptualize race in widely differing ways, some of which have essentialist implications. While some researchers sometimes use the concept of race to make distinctions among fuzzy sets of traits or observable differences in behaviour that has not been invalidated as a taxonomic construct,J Philippe Rushton. (December 2001). 'Is race a valid taxonomic construct ?'. Department of Psychology, University of Western Ontario. . others in the scientific community suggest that the idea of race often is used in a naive or simplistic way, and argue that, among humans, race has no taxonomic significance by pointing out that all living humans belong to the same species, Homo sapiens, and subspecies, Homo sapiens sapiens.
Since the second half of the 20th century, the association of race with the ideologies and theories that grew out of the work of 19th-century anthropologists and physiologists has led to the use of the word race itself becoming problematic. Although still used in general contexts, race has often been replaced by less ambiguous and emotionally charged synonyms: populations, people(s), ethnic groups, or communities, depending on context. Provides 8 definitions, from biological to literary; only the most pertinent have been quoted.
Complications and various definitions of the concept
A popular view in American sociology is that the racial categories that are common in everyday usage are socially constructed, and that racial groups cannot be biologically defined.Templeton, A. R. "The genetic and evolutionary significance of human races". In Race and Intelligence: Separating Science from Myth. J. M. Fish (ed.), pp. 31-56. Mahwah, New Jersey: Lawrence Erlbaum Associates, 2002.Steve Olson, Mapping Human History: Discovering the Past Through Our Genes, Boston, 2002 Nonetheless, some biologists argue that racial categories correlate with biological traits (e.g. phenotype), and that certain genetic markers have varying frequencies among human populations, some of which correspond more or less to traditional racial groupings. For this reason, there is no current consensus about whether racial categories can be considered to have significance for understanding human genetic variation.
When people define and talk about a particular conception of race, they create a social reality through which social categorization is achieved. In this sense, races are said to be social constructs. These constructs develop within various legal, economic, and sociopolitical contexts, and may be the effect, rather than the cause, of major social situations. While race is understood to be a social construct by many, most scholars agree that race has real material effects in the lives of people through institutionalized practices of preference and discrimination.
Socioeconomic factors, in combination with early but enduring views of race, have led to considerable suffering within disadvantaged racial groups. Racial discrimination often coincides with racist mindsets, whereby the individuals and ideologies of one group come to perceive the members of an outgroup as both racially defined and morally inferior. As a result, racial groups possessing relatively little power often find themselves excluded or oppressed, while hegemonic individuals and institutions are charged with holding racist attitudes. Racism has led to many instances of tragedy, including slavery and genocide.
In some countries, law enforcement uses race to profile suspects. This use of racial categories is frequently criticized for perpetuating an outmoded understanding of human biological variation, and promoting stereotypes. Because in some societies racial groupings correspond closely with patterns of social stratification, for social scientists studying social inequality, race can be a significant variable. As sociological factors, racial categories may in part reflect subjective attributions, self-identities, and social institutions.
Scholars continue to debate the degrees to which racial categories are biologically warranted and socially constructed, as well as the extent to which the realities of race must be acknowledged in order for society to comprehend and address racism adequately. Accordingly, the racial paradigms employed in different disciplines vary in their emphasis on biological reduction as contrasted with societal construction.
In the social sciences, theoretical frameworks such as racial formation theory and critical race theory investigate implications of race as social construction by exploring how the images, ideas and assumptions of race are expressed in everyday life. A large body of scholarship has traced the relationships between the historical, social production of race in legal and criminal language, and their effects on the policing and disproportionate incarceration of certain groups.
Historical origins of racial classification
thumb|300px|The three great races according to Meyers Konversations-Lexikon of 1885-90. The subtypes of the Mongoloid race are shown in yellow and orange tones, those of the Caucasoid race in light and medium grayish spring green-cyan tones and those of the Negroid race in brown tones. Dravidians and Sinhalese are in olive green and their classification is described as uncertain. The Mongoloid race sees the widest geographic distribution, including all of the Americas, North Asia, East Asia, and Southeast Asia, the entire inhabited Arctic while they form most of Central Asia and the Pacific Islands.
thumb|The racial diversity of Asia's peoples, Nordisk familjebok (1904)
Groups of humans have always identified themselves as distinct from neighboring groups, but such differences have not always been understood to be natural, immutable and global. These features are the distinguishing features of how the concept of race is used today. In this way the idea of race as we understand it today came about during the historical process of exploration and conquest which brought Europeans into contact with groups from different continents, and of the ideology of classification and typology found in the natural sciences.
Race and colonialism
According to Smedley and Marks the European concept of "race", along with many of the ideas now associated with the term, arose at the time of the scientific revolution, which introduced and privileged the study of natural kinds, and the age of European imperialism and colonization which established political relations between Europeans and peoples with distinct cultural and political traditions. As Europeans encountered people from different parts of the world, they speculated about the physical, social, and cultural differences among various human groups. The rise of the Atlantic slave trade, which gradually displaced an earlier trade in slaves from throughout the world, created a further incentive to categorize human groups in order to justify the subordination of African slaves. Drawing on Classical sources and upon their own internal interactions—for example, the hostility between the English and Irish powerfully influenced early European thinking about the differences between people—Europeans began to sort themselves and others into groups based on physical appearance, and to attribute to individuals belonging to these groups behaviors and capacities which were claimed to be deeply ingrained. A set of folk beliefs took hold that linked inherited physical differences between groups to inherited intellectual, behavioral, and moral qualities. Similar ideas can be found in other cultures, for example in China, where a concept often translated as "race" was associated with supposed common descent from the Yellow Emperor, and used to stress the unity of ethnic groups in China. Brutal conflicts between ethnic groups have existed throughout history and across the world.
Early taxonomic models
The first post-Classical published classification of humans into distinct races seems to be François Bernier's Nouvelle division de la terre par les différents espèces ou races qui l'habitent ("New division of Earth by the different species or races which inhabit it"), published in 1684. In the 18th century the differences among human groups became a focus of scientific investigation. But the scientific classification of phenotypic variation was frequently coupled with racist ideas about innate predispositions of different groups, always attributing the most desirable features to the White, European race and arranging the other races along a continuum of progressively undesirable attributes. The 1735 classification of Carl Linnaeus, inventor of zoological taxonomy, divided the human species Homo sapiens into continental varieties of europaeus, asiaticus, americanus, and afer, each associated with a different humour: sanguine, melancholic, choleric, and phlegmatic, respectively.Slotkin (1965), p. 177. Homo sapiens europaeus was described as active, acute, and adventurous, whereas Homo sapiens afer was said to be crafty, lazy, and careless.
The 1775 treatise "The Natural Varieties of Mankind", by Johann Friedrich Blumenbach proposed five major divisions: the Caucasoid race, the Mongoloid race, the Ethiopian race (later termed Negroid), the American Indian race, and the Malayan race, but he did not propose any hierarchy among the races. Blumenbach also noted the graded transition in appearances from one group to adjacent groups and suggested that "one variety of mankind does so sensibly pass into the other, that you cannot mark out the limits between them".
From the 17th through 19th centuries, the merging of folk beliefs about group differences with scientific explanations of those differences produced what Smedley has called an "ideology of race". According to this ideology, races are primordial, natural, enduring and distinct. It was further argued that some groups may be the result of mixture between formerly distinct populations, but that careful study could distinguish the ancestral races that had combined to produce admixed groups. Subsequent influential classifications by Georges Buffon, Petrus Camper and Christoph Meiners all classified "Negros" as inferior to Europeans. In the United States the racial theories of Thomas Jefferson were influential. He saw Africans as inferior to Whites especially in regards to their intellect, and imbued with unnatural sexual appetites, but described Native Americans as equals to whites.
Race and polygenism
In the last two decades of the 18th century, the theory of polygenism, the belief that different races had evolved separately in each continent and shared no common ancestor, was advocated in England by historian Edward Long and anatomist Charles White, in Germany by ethnographers Christoph Meiners and Georg Forster, and in France by Julien-Joseph Virey. In the US, Samuel George Morton, Josiah Nott and Louis Agassiz promoted this theory in the mid-nineteenth century. Polygenism was popular and most widespread in the 19th century, culminating in the founding of the Anthropological Society of London (1863) during the period of the American Civil War, in opposition to the Ethnological Society, which had abolitionist sympathies.
Modern debate
Models of human evolution
Today, all humans are classified as belonging to the species Homo sapiens and sub-species Homo sapiens sapiens. However, this is not the first species of the subfamily Homininae: the first species of genus Homo, Homo habilis, evolved in East Africa at least 2 million years ago, and members of this species populated different parts of Africa in a relatively short time. Homo erectus evolved more than 1.8 million years ago, and by 1.5 million years ago had spread throughout Europe and Asia. Virtually all physical anthropologists agree that Archaic Homo sapiens (a group including the possible species H. heidelbergensis, H. rhodesiensis and H. neanderthalensis) evolved out of African Homo erectus (sensu lato) or Homo ergaster.Camilo J. Cela-Conde and Francisco J. Ayala. 2007. Human Evolution Trails from the Past Oxford University Press p. 195Lewin, Roger. 2005. Human Evolution an illustrated introduction. Fifth edition. p. 159. Blackwell Anthropologists increasingly support the idea that anatomically modern humans (Homo sapiens sapiens) evolved in North or East Africa from H. heidelbergensis and then migrated out of Africa, mixing with and replacing H. heidelbergensis and H. neanderthalensis populations throughout Europe and Asia, and H. rhodesiensis populations in Sub-Saharan Africa (a combination of the Out of Africa and Multiregional models).
Biological classification
In the early 20th century, many anthropologists accepted and taught the belief that biologically distinct races were isomorphic with distinct linguistic, cultural, and social groups, while popularly applying that belief to the field of eugenics, in conjunction with a practice that is now called scientific racism. After the Nazi eugenics program, racial essentialism lost widespread popularity. Race anthropologists were pressured to acknowledge findings coming from studies of culture and population genetics, and to revise their conclusions about the sources of phenotypic variation. A significant number of modern anthropologists and biologists in the West came to view race as an invalid genetic or biological designation.
The first to challenge the concept of race on empirical grounds were the anthropologists Franz Boas, who provided evidence of phenotypic plasticity due to environmental factors, and Ashley Montagu, who relied on evidence from genetics. E. O. Wilson then challenged the concept from the perspective of general animal systematics, and further rejected the claim that "races" were equivalent to "subspecies".
According to Jonathan Marks,
The term race in biology is used with caution because it can be ambiguous. Generally, when it is used it is effectively a synonym of subspecies. (For animals, the only taxonomic unit below the species level is usually the subspecies; there are narrower infraspecific ranks in botany, and race does not correspond directly with any of them.)
Population geneticists have debated whether the concept of population can provide a basis for a new conception of race. To do this, a working definition of population must be found. Surprisingly, there is no generally accepted concept of population that biologists use. Although the concept of population is central to ecology, evolutionary biology and conservation biology, most definitions of population rely on qualitative descriptions such as "a group of organisms of the same species occupying a particular space at a particular time". Waples and Gaggiotti identify two broad types of definitions for populations; those that fall into an ecological paradigm, and those that fall into an evolutionary paradigm. Examples of such definitions are:
Ecological paradigm: A group of individuals of the same species that co-occur in space and time and have an opportunity to interact with each other.
Evolutionary paradigm: A group of individuals of the same species living in close-enough proximity that any member of the group can potentially mate with any other member.
Morphologically differentiated populations
Traditionally, subspecies are seen as geographically isolated and genetically differentiated populations. That is, "the designation 'subspecies' is used to indicate an objective degree of microevolutionary divergence". One objection to this idea is that it does not specify what degree of differentiation is required. Therefore, any population that is somewhat biologically different could be considered a subspecies, even to the level of a local population. As a result, Templeton has argued that it is necessary to impose a threshold on the level of difference that is required for a population to be designated a subspecies.
This effectively means that populations of organisms must have reached a certain measurable level of difference to be recognised as subspecies.
Dean Amadon proposed in 1949 that subspecies would be defined according to the seventy-five percent rule, which means that 75% of a population must lie outside 99% of the range of other populations for a given defining morphological character or a set of characters. The seventy-five percent rule still has defenders, but other scholars argue that it should be replaced with a ninety or ninety-five percent rule.
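As a rough illustration of how such a criterion operates, the sketch below checks whether at least 75% of one sample falls outside the central 99% range of another for a single measured character. All sample values, population sizes and the choice of a normal distribution are invented for this example; it is not drawn from Amadon's data or any cited study.

```python
import numpy as np

def seventy_five_percent_rule(sample_a, sample_b, coverage=0.99, threshold=0.75):
    """Check the 75% rule for one morphological character.

    Returns (passes, fraction_outside): passes is True if at least
    `threshold` (75%) of sample_a lies outside the central `coverage`
    (99%) range of sample_b.
    """
    lo, hi = np.quantile(sample_b, [(1 - coverage) / 2, 1 - (1 - coverage) / 2])
    fraction_outside = np.mean((sample_a < lo) | (sample_a > hi))
    return fraction_outside >= threshold, fraction_outside

rng = np.random.default_rng(0)
# Hypothetical measurements of one character in two populations.
pop_a = rng.normal(loc=55.0, scale=2.0, size=500)   # strongly shifted population
pop_b = rng.normal(loc=48.0, scale=2.0, size=500)

passes, fraction_outside = seventy_five_percent_rule(pop_a, pop_b)
print(f"{fraction_outside:.0%} of population A lies outside the 99% range of B "
      f"-> {'meets' if passes else 'does not meet'} the 75% criterion")
```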
In 1978, Sewall Wright suggested that human populations that have long inhabited separated parts of the world should, in general, be considered different subspecies by the usual criterion that most individuals of such populations can be allocated correctly by inspection. Wright argued that it does not require a trained anthropologist to classify an array of Englishmen, West Africans, and Chinese with 100% accuracy by features, skin color, and type of hair despite so much variability within each of these groups that every individual can easily be distinguished from every other. However, it is customary to use the term race rather than subspecies for the major subdivisions of the human species as well as for minor ones.
On the other hand, in practice subspecies are often defined by easily observable physical appearance, but there is not necessarily any evolutionary significance to these observed differences, so this form of classification has become less acceptable to evolutionary biologists. Likewise this typological approach to race is generally regarded as discredited by biologists and anthropologists.
Because of the difficulty in classifying subspecies morphologically, many biologists have found the concept problematic, citing issues such as:
Visible physical differences do not always correlate with one another, leading to the possibility of different classifications for the same individual organisms.
Parallel evolution can lead to the appearance of similarities between groups of organisms that are not part of the same species.
Isolated populations within previously designated subspecies have been found to exist.
The criteria for classification may be arbitrary if they ignore gradual variation in traits.
Sesardic argues that when several traits are analyzed at the same time, forensic anthropologists can classify a person's race with an accuracy of close to 100% based on only skeletal remains. This is discussed in a later section.
Ancestrally differentiated populations
Cladistics is another method of classification. A clade is a taxonomic group of organisms consisting of a single common ancestor and all the descendants of that ancestor. Every creature produced by sexual reproduction has two immediate lineages, one maternal and one paternal. Whereas Carl Linnaeus established a taxonomy of living organisms based on anatomical similarities and differences, cladistics seeks to establish a taxonomy—the phylogenetic tree—based on genetic similarities and differences and tracing the process of acquisition of multiple characteristics by single organisms. Some researchers have tried to clarify the idea of race by equating it to the biological idea of the clade. Often mitochondrial DNA or Y chromosome sequences are used to study ancient human migration paths. These single-locus sources of DNA do not recombine and are inherited from a single parent. Individuals from the various continental groups tend to be more similar to one another than to people from other continents, and tracing either mitochondrial DNA or non-recombinant Y-chromosome DNA explains how people in one place may be largely derived from people in some remote location.
Often taxonomists prefer to use phylogenetic analysis to determine whether a population can be considered a subspecies. Phylogenetic analysis relies on the concept of derived characteristics that are not shared between groups, usually applying to populations that are allopatric (geographically separated) and therefore discretely bounded. This would make a subspecies, evolutionarily speaking, a clade – a group with a common evolutionary ancestor population. The smooth gradation of human genetic variation in general tends to rule out any idea that human population groups can be considered monophyletic (cleanly divided), as there appears to always have been considerable gene flow between human populations. Rachel Caspari (2003) has argued that clades are by definition monophyletic groups (a taxon that includes all descendants of a given ancestor) and since no groups currently regarded as races are monophyletic, none of those groups can be clades. Robin Andreasen (2000) proposes that cladistics can be used to categorize human races biologically, and that races can be both biologically real and socially constructed.
For the anthropologists Lieberman and Jackson (1995), however, there are more profound methodological and conceptual problems with using cladistics to support concepts of race. They claim that "the molecular and biochemical proponents of this model explicitly use racial categories in their initial grouping of samples". For example, the large and highly diverse macroethnic groups of East Indians, North Africans, and Europeans are presumptively grouped as Caucasians prior to the analysis of their DNA variation. This is claimed to limit and skew interpretations, obscure other lineage relationships, deemphasize the impact of more immediate clinal environmental factors on genomic diversity, and cloud our understanding of the true patterns of affinity. They argue that however significant the empirical research, these studies use the term race in conceptually imprecise and careless ways. They suggest that the authors of these studies find support for racial distinctions only because they began by assuming the validity of race. "For empirical reasons we prefer to place emphasis on clinal variation, which recognizes the existence of adaptive human hereditary variation and simultaneously stresses that such variation is not found in packages that can be labeled races."
These scientists do not dispute the importance of cladistic research, only its retention of the word race, when reference to populations and clinal gradations is more than adequate to describe the results.
Clines
One crucial innovation in reconceptualizing genotypic and phenotypic variation was the anthropologist C. Loring Brace's observation that such variations, insofar as they are affected by natural selection, slow migration, or genetic drift, are distributed along geographic gradations or clines. In part this is due to isolation by distance. This point called attention to a problem common to phenotype-based descriptions of races (for example, those based on hair texture and skin color): they ignore a host of other similarities and differences (for example, blood type) that do not correlate highly with the markers for race. Hence the anthropologist Frank Livingstone's conclusion that, since clines cross racial boundaries, "there are no races, only clines".
In a response to Livingstone, Theodosius Dobzhansky argued that when talking about race one must be attentive to how the term is being used: "I agree with Dr. Livingstone that if races have to be 'discrete units', then there are no races, and if 'race' is used as an 'explanation' of the human variability, rather than vice versa, then the explanation is invalid." He further argued that one could use the term race if one distinguished between "race differences" and "the race concept". The former refers to any distinction in gene frequencies between populations; the latter is "a matter of judgment". He further observed that even when there is clinal variation, "Race differences are objectively ascertainable biological phenomena ... but it does not follow that racially distinct populations must be given racial (or subspecific) labels." In short, Livingstone and Dobzhansky agree that there are genetic differences among human beings; they also agree that the use of the race concept to classify people, and how the race concept is used, is a matter of social convention. They differ on whether the race concept remains a meaningful and useful social convention.
In 1964, the biologists Paul Ehrlich and Holm pointed out cases where two or more clines are distributed discordantly—for example, melanin is distributed in a decreasing pattern from the equator north and south; frequencies for the haplotype for beta-S hemoglobin, on the other hand, radiate out of specific geographical points in Africa. As the anthropologists Leonard Lieberman and Fatimah Linda Jackson observed, "Discordant patterns of heterogeneity falsify any description of a population as if it were genotypically or even phenotypically homogeneous".
Patterns such as those seen in human physical and genetic variation as described above have led to the consequence that the number and geographic location of any described races are highly dependent on the importance attributed to, and the quantity of, the traits considered. Scientists have discovered a skin-lightening mutation that partially accounts for the appearance of light skin in humans who migrated out of Africa northward into what is now Europe, a mutation they estimate occurred 20,000 to 50,000 years ago. East Asians owe their relatively light skin to different mutations. On the other hand, the greater the number of traits (or alleles) considered, the more subdivisions of humanity are detected, since traits and gene frequencies do not always correspond to the same geographical location.
More recent genetic studies indicate that skin color may change radically over as few as 100 generations, or about 2,500 years, given the influence of the environment.
One study argued for smooth, clinal genetic variation in ancestral populations even in regions previously considered racially homogeneous, with the apparent gaps turning out to be artifacts of sampling techniques. A later study disputed this and argued that using more data showed that there were small discontinuities in the smooth genetic variation for ancestral populations at the location of geographic barriers such as the Sahara, the oceans, and the Himalayas.
Genetically differentiated populations
Another way to look at differences between populations is to measure genetic differences rather than physical differences between groups. The mid-20th-century anthropologist William C. Boyd defined race as: "A population which differs significantly from other populations in regard to the frequency of one or more of the genes it possesses. It is an arbitrary matter which, and how many, gene loci we choose to consider as a significant 'constellation'". Leonard Lieberman and Rodney Kirk have pointed out that "the paramount weakness of this statement is that if one gene can distinguish races then the number of races is as numerous as the number of human couples reproducing." Moreover, the anthropologist Stephen Molnar has suggested that the discordance of clines inevitably results in a multiplication of races that renders the concept itself useless. The Human Genome Project states "People who have lived in the same geographic region for many generations may have some alleles in common, but no allele will be found in all members of one population and in no members of any other."
Fixation index
The population geneticist Sewall Wright developed one way of measuring genetic differences between populations, known as the fixation index, often abbreviated FST. This statistic is often used in taxonomy to compare differentiation between any two given populations by measuring the genetic differences among and between populations for individual genes, or for many genes simultaneously. It is often stated that the fixation index for humans is about 0.15, meaning that an estimated 85% of the variation measured in the overall human population is found within populations, and about 15% of the variation occurs between populations. These estimates imply that any two individuals from different populations are almost as likely to be genetically similar to each other as two individuals from the same population. Richard Lewontin, who affirmed these ratios, thus concluded that neither "race" nor "subspecies" was an appropriate or useful way to describe human populations. However, others have noted that group variation in humans is relatively similar to the variation observed in other mammalian species.
Wright himself believed that values >0.25 represent very great genetic variation and that an FST of 0.15–0.25 represents great variation. However, about 5% of human variation occurs between populations within continents; FST values between continental groups of humans (or races) as low as 0.1 (or possibly lower) have therefore been found in some studies, suggesting more moderate levels of genetic variation. Graves (1996) has countered that FST should not be used as a marker of subspecies status, as the statistic measures the degree of differentiation between populations, although see also Wright (1978).
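For readers unfamiliar with how such figures are obtained, the following minimal sketch computes a heterozygosity-based FST across several populations, assuming biallelic loci and purely invented allele frequencies; it only illustrates the statistic discussed above and does not reproduce any cited estimate.

```python
import numpy as np

def fst_biallelic(pop_freqs):
    """FST across populations for biallelic loci.

    pop_freqs: array of shape (n_populations, n_loci) giving the frequency of
    one allele in each population at each locus. Uses the heterozygosity-based
    definition FST = (H_T - H_S) / H_T, averaged over loci.
    """
    pop_freqs = np.asarray(pop_freqs, dtype=float)
    h_s = np.mean(2 * pop_freqs * (1 - pop_freqs), axis=0)   # mean within-population heterozygosity
    p_bar = pop_freqs.mean(axis=0)                            # pooled allele frequency
    h_t = 2 * p_bar * (1 - p_bar)                             # total heterozygosity
    return np.mean((h_t - h_s) / h_t)

# Invented allele frequencies at five loci in three populations.
freqs = np.array([
    [0.10, 0.40, 0.55, 0.80, 0.30],
    [0.20, 0.50, 0.45, 0.70, 0.35],
    [0.35, 0.60, 0.30, 0.55, 0.50],
])

fst = fst_biallelic(freqs)
print(f"FST ≈ {fst:.3f}; roughly {fst:.0%} of the measured variation lies between "
      f"populations and {1 - fst:.0%} within them")
```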
In an ongoing debate, some geneticists argue that race is neither a meaningful concept nor a useful heuristic device, and even that genetic differences among groups are biologically meaningless, because more genetic variation exists within such races than among them, and that racial traits overlap without discrete boundaries.
Jeffrey Long and Rick Kittles give a long critique of the application of FST to human populations in their 2003 paper "Human Genetic Diversity and the Nonexistence of Biological Races". They find that the figure of 85% is misleading because it implies that all human populations contain on average 85% of all genetic diversity. They claim that this does not correctly reflect human population history, because it treats all human groups as independent. A more realistic portrayal of the way human groups are related is to understand that some human groups are parental to other groups and that these groups represent paraphyletic groups to their descent groups. For example, under the recent African origin theory the human population in Africa is paraphyletic to all other human groups because it represents the ancestral group from which all non-African populations derive, but more than that, non-African groups only derive from a small non-representative sample of this African population. This means that all non-African groups are more closely related to each other and to some African groups (probably east Africans) than they are to others, and further that the migration out of Africa represented a genetic bottleneck, with much of the diversity that existed in Africa not being carried out of Africa by the emigrating groups. This view produces a version of human population movements that does not result in all human populations being independent, but rather produces a series of dilutions of diversity the further from Africa any population lives, each founding event representing a genetic subset of its parental population. Long and Kittles find that rather than 85% of human genetic diversity existing in all human populations, about 100% of human diversity exists in a single African population, whereas only about 70% of human genetic diversity exists in a population derived from New Guinea. Long and Kittles argued that this still produces a global human population that is genetically homogeneous compared to other mammalian populations.
Cluster analysis
thumb|300px|right|World map based on a genetic principal component analysis of human populations from Luigi Luca Cavalli-Sforza's book The History and Geography of Human Genes (1994).
In his 2003 paper, "Human Genetic Diversity: Lewontin's Fallacy", A. W. F. Edwards argued that rather than using a locus-by-locus analysis of variation to derive taxonomy, it is possible to construct a human classification system based on characteristic genetic patterns, or clusters inferred from multilocus genetic data. Geographically based human studies since have shown that such genetic clusters can be derived by analyzing a large number of loci, which can assort sampled individuals into groups analogous to traditional continental racial groups. Joanna Mountain and Neil Risch cautioned that while genetic clusters may one day be shown to correspond to phenotypic variations between groups, such assumptions were premature as the relationship between genes and complex traits remains poorly understood. However, Risch denied that such limitations render the analysis useless: "Perhaps just using someone's actual birth year is not a very good way of measuring age. Does that mean we should throw it out? ... Any category you come up with is going to be imperfect, but that doesn't preclude you from using it or the fact that it has utility."
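The logic of multilocus clustering can be illustrated with a small simulation. The sketch below is not a reproduction of Edwards' analysis or of any published dataset: the sample sizes, allele-frequency differences, and the use of a single principal component with a simple threshold are all assumptions of this example. It shows that two simulated populations whose allele frequencies differ only slightly at each locus become separable once many loci are combined, even though no single locus distinguishes them.

```python
import numpy as np

rng = np.random.default_rng(1)
n_per_pop, n_loci = 100, 500

# Invented allele frequencies: the two populations differ only slightly at each locus.
base = rng.uniform(0.2, 0.8, size=n_loci)
freq_a = np.clip(base + 0.05, 0, 1)
freq_b = np.clip(base - 0.05, 0, 1)

# Diploid genotypes coded 0/1/2 (copies of the reference allele).
geno_a = rng.binomial(2, freq_a, size=(n_per_pop, n_loci))
geno_b = rng.binomial(2, freq_b, size=(n_per_pop, n_loci))
genotypes = np.vstack([geno_a, geno_b]).astype(float)

# Principal component analysis via SVD of the centred genotype matrix.
centred = genotypes - genotypes.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
pc1 = centred @ vt[0]

# A simple threshold on PC1 recovers the two simulated groups almost perfectly.
labels = np.array([0] * n_per_pop + [1] * n_per_pop)
predicted = (pc1 > np.median(pc1)).astype(int)
accuracy = max(np.mean(predicted == labels), np.mean(predicted != labels))
print(f"PC1 assigns individuals to their simulated population with ~{accuracy:.0%} accuracy")
```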
Early human genetic cluster analysis studies were conducted with samples taken from ancestral population groups living at extreme geographic distances from each other. It was thought that such large geographic distances would maximize the genetic variation between the groups sampled in the analysis and thus maximize the probability of finding cluster patterns unique to each group. In light of the historically recent acceleration of human migration (and correspondingly, human gene flow) on a global scale, further studies were conducted to judge the degree to which genetic cluster analysis can pattern ancestrally identified groups as well as geographically separated groups. One such study looked at a large multiethnic population in the United States, and "detected only modest genetic differentiation between different current geographic locales within each race/ethnicity group. Thus, ancient geographic ancestry, which is highly correlated with self-identified race/ethnicity—as opposed to current residence—is the major determinant of genetic structure in the U.S. population."
Witherspoon et al. have argued that even when individuals can be reliably assigned to specific population groups, it may still be possible for two randomly chosen individuals from different populations/clusters to be more similar to each other than to a randomly chosen member of their own cluster. They found that many thousands of genetic markers had to be used in order for the answer to the question "How often is a pair of individuals from one population genetically more dissimilar than two individuals chosen from two different populations?" to be "never". This assumed three population groups separated by large geographic ranges (European, African and East Asian). The entire world population is much more complex, and studying an increasing number of groups would require an increasing number of markers for the same answer. The authors conclude that "caution should be used when using geographic or genetic ancestry to make inferences about individual phenotypes." They further concluded that "The fact that, given enough genetic data, individuals can be correctly assigned to their populations of origin is compatible with the observation that most human genetic variation is found within populations, not between them. It is also compatible with our finding that, even when the most distinct populations are considered and hundreds of loci are used, individuals are frequently more similar to members of other populations than to members of their own population."
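The quantity at stake in that finding can be estimated directly, as in the hedged sketch below, which reuses the same style of simulated genotypes as the previous example. The populations, allele-frequency differences, sample sizes and distance measure are invented for illustration, so the exact percentages printed carry no empirical weight; the sketch only shows the qualitative trend that the fraction of "closer" cross-population pairs shrinks as more loci are used.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(n_per_pop, n_loci, delta=0.05):
    """Simulate diploid genotypes for two populations with slightly different allele frequencies."""
    base = rng.uniform(0.2, 0.8, size=n_loci)
    geno_a = rng.binomial(2, np.clip(base + delta, 0, 1), size=(n_per_pop, n_loci))
    geno_b = rng.binomial(2, np.clip(base - delta, 0, 1), size=(n_per_pop, n_loci))
    return geno_a, geno_b

def fraction_cross_pairs_closer(geno_a, geno_b, n_draws=2000):
    """Estimate how often a between-population pair is more similar
    (smaller squared genetic distance) than a within-population pair."""
    count = 0
    for _ in range(n_draws):
        i, j = rng.choice(len(geno_a), size=2, replace=False)
        k = rng.integers(len(geno_b))
        within = np.sum((geno_a[i] - geno_a[j]) ** 2)
        between = np.sum((geno_a[i] - geno_b[k]) ** 2)
        count += between < within
    return count / n_draws

for n_loci in (10, 100, 1000, 10000):
    a, b = simulate(n_per_pop=50, n_loci=n_loci)
    frac = fraction_cross_pairs_closer(a, b)
    print(f"{n_loci:>6} loci: cross-population pair closer in ~{frac:.0%} of comparisons")
```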
Anthropologists such as C. Loring Brace, the philosophers Jonathan Kaplan and Rasmus Winther,Kaplan, Jonathan Michael Winther, Rasmus Grønfeldt (2014). 'Realism, Antirealism, and Conventionalism About Race' Philosophy of Science http://philpapers.org/rec/KAPRAAKaplan, Jonathan Michael, Winther, Rasmus Grønfeldt (2012). Prisoners of Abstraction? The Theory and Measure of Genetic Variation, and the Very Concept of 'Race' Biological Theory http://philpapers.org/archive/KAPPOA.14.pdf and the geneticist Joseph Graves, have argued that while it is certainly possible to find biological and genetic variation that corresponds roughly to the groupings normally defined as "continental races", this is true for almost all geographically distinct populations. The cluster structure of the genetic data is therefore dependent on the initial hypotheses of the researcher and the populations sampled. When one samples continental groups, the clusters become continental; if one had chosen other sampling patterns, the clustering would be different. Weiss and Fullerton have noted that if one sampled only Icelanders, Mayans and Maoris, three distinct clusters would form and all other populations could be described as being clinally composed of admixtures of Maori, Icelandic and Mayan genetic materials. Kaplan and Winther therefore argue that, seen in this way, both Lewontin and Edwards are right in their arguments. They conclude that while racial groups are characterized by different allele frequencies, this does not mean that racial classification is a natural taxonomy of the human species, because multiple other genetic patterns can be found in human populations that crosscut racial distinctions. Moreover, the genomic data underdetermines whether one wishes to see subdivisions (i.e., splitters) or a continuum (i.e., lumpers). Under Kaplan and Winther's view, racial groupings are objective social constructions (see Mills 1998Mills CW (1998) "But What Are You Really? The Metaphysics of Race" in Blackness visible: essays on philosophy and race, pp. 41-66. Cornell University Press, Ithaca, NY) that have conventional biological reality only insofar as the categories are chosen and constructed for pragmatic scientific reasons. In earlier work, Winther had identified "diversity partitioning" and "clustering analysis" as two separate methodologies, with distinct questions, assumptions, and protocols. Each is also associated with opposing ontological consequences vis-a-vis the metaphysics of race.Winther, Rasmus Grønfeldt (2014/2011). The Genetic Reification of Race? A Story of Two Mathematical Methods. Critical Philosophy of Race http://philpapers.org/archive/WINTGR.pdf
Guido Barbujani has written that human genetic variation is generally distributed continuously in gradients across much of Earth, and that there is no evidence that genetic boundaries between human populations exist as would be necessary for human races to exist.
Social constructions
As anthropologists and other evolutionary scientists have shifted away from the language of race to the term population to talk about genetic differences, historians, cultural anthropologists and other social scientists re-conceptualized the term "race" as a cultural category or social construct— i.e. a way among many possible ways in which a society chooses to divide its members into categories.
Many social scientists have replaced the word race with the word "ethnicity" to refer to self-identifying groups based on beliefs concerning shared culture, ancestry and history. Alongside empirical and conceptual problems with "race", following the Second World War, evolutionary and social scientists were acutely aware of how beliefs about race had been used to justify discrimination, apartheid, slavery, and genocide. This questioning gained momentum in the 1960s during the U.S. civil rights movement and the emergence of numerous anti-colonial movements worldwide. They thus came to believe that race itself is a social construct: a concept that was once thought to correspond to an objective reality but that persisted because of its social functions.
Craig Venter of Celera Genomics and Francis Collins of the National Institutes of Health jointly announced the mapping of the human genome in 2000. Upon examining the data from the genome mapping, Venter realized that although the genetic variation within the human species is on the order of 1–3% (instead of the previously assumed 1%), the types of variations do not support the notion of genetically defined races. Venter said, "Race is a social concept. It's not a scientific one. There are no bright lines (that would stand out), if we could compare all the sequenced genomes of everyone on the planet." "When we try to apply science to try to sort out these social differences, it all falls apart."
Stephan Palmié asserted that race "is not a thing but a social relation"; or, in the words of Katya Gibel Mevorach, "a metonym", "a human invention whose criteria for differentiation are neither universal nor fixed but have always been used to manage difference." As such, the use of the term "race" itself must be analyzed. Moreover, they argue that biology will not explain why or how people use the idea of race: History and social relationships will.
Imani Perry has argued that race "is produced by social arrangements and political decision making."Imani Perry, More Beautiful and More Terrible: The Embrace and Transcendence of Racial Inequality in the United States (New York, NY: New York University Press, 2011), 23. Perry explains race more in stating, "race is something that happens, rather than something that is. It is dynamic, but it holds no objective truth."Imani Perry, More Beautiful and More Terrible: The Embrace and Transcendence of Racial Inequality in the United States (New York, NY: New York University Press, 2011), 24.
Some scholars have challenged the notion that race is primarily a social construction by arguing that race has a biological basis. One of these researchers, Neil Risch, noted: "we looked at the correlation between genetic structure [based on microsatellite markers] versus self-description, we found 99.9% concordance between the two. We actually had a higher discordance rate between self-reported sex and markers on the X chromosome! So you could argue that sex is also a problematic category. And there are differences between sex and gender; self-identification may not be correlated with biology perfectly. And there is sexism."
Brazil
thumb|190px|Portrait "Redenção de Cam" (1895), showing a Brazilian family with each generation becoming "whiter".
Compared to 19th-century United States, 20th-century Brazil was characterized by a perceived relative absence of sharply defined racial groups. According to anthropologist Marvin Harris, this pattern reflects a different history and different social relations.
Basically, race in Brazil was "biologized", but in a way that recognized the difference between ancestry (which determines genotype) and phenotypic differences. There, racial identity was not governed by a rigid descent rule, such as the one-drop rule, as it was in the United States. A Brazilian child was never automatically identified with the racial type of one or both parents, nor were there only a very limited number of categories to choose from, to the extent that full siblings could belong to different racial groups.
Over a dozen racial categories would be recognized in conformity with all the possible combinations of hair color, hair texture, eye color, and skin color. These types grade into each other like the colors of the spectrum, and not one category stands significantly isolated from the rest. That is, race referred preferentially to appearance, not heredity, and appearance is a poor indication of ancestry, because only a few genes are responsible for someone's skin color and traits: a person who is considered white may have more African ancestry than a person who is considered black, and the reverse can also be true about European ancestry.BBC delves into Brazilians' roots accessed July 13, 2009 The complexity of racial classifications in Brazil reflects the extent of miscegenation in Brazilian society, a society that remains highly, but not strictly, stratified along color lines. These socioeconomic factors are also significant to the limits of racial lines, because a minority of pardos, or brown people, are likely to start declaring themselves white or black if they move up socially,RIBEIRO, Darcy. O Povo Brasileiro, Companhia de Bolso, fourth reprint, 2008 (2008). and to be seen as relatively "whiter" as their perceived social status increases (much as in other regions of Latin America).Levine-Rasky, Cynthia. 2002. "Working through whiteness: international perspectives. SUNY Press (p. 73) "Money whitens" If any phrase encapsulates the association of whiteness and the modern in Latin America, this is it. It is a cliché formulated and reformulated throughout the region, a truism dependant upon the social experience that wealth is associated with whiteness, and that in obtaining the former one may become aligned with the latter (and vice versa)".
Self-reported ancestry of people from Rio de Janeiro, by race or skin color (2000 survey)
Ancestry | brancos | pardos | pretos
European only | 48% | 6% | –
African only | – | 12% | 25%
Amerindian only | – | 2% | –
African and European | 23% | 34% | 31%
Amerindian and European | 14% | 6% | –
African and Amerindian | – | 4% | 9%
African, Amerindian and European | 15% | 36% | 35%
Total | 100% | 100% | 100%
Any African | 38% | 86% | 100%
Fluidity of racial categories aside, the "biologification" of race in Brazil referred to above would match contemporary concepts of race in the United States quite closely if Brazilians were expected to choose their race as one among three IBGE census categories (Asian and Indigenous apart). While assimilated Amerindians and people with very high proportions of Amerindian ancestry are usually grouped as caboclos, a subgroup of pardos which roughly translates as both mestizo and hillbilly, people with a smaller proportion of Amerindian descent are expected to have a higher European genetic contribution in order to be grouped as pardo. In several genetic tests, people with less than 60–65% European descent and 5–10% Amerindian descent usually cluster with Afro-Brazilians (as reported by the individuals), or 6.9% of the population, and those with about 45% or more Sub-Saharan contribution most often do so (on average, Afro-Brazilian DNA was reported to be about 50% Sub-Saharan African, 37% European and 13% Amerindian).Negros de origem européia. afrobras.org.br
If a report more consistent with the genetic gradation of miscegenation were used (e.g. one that would not cluster people with a balanced degree of African and non-African ancestry in the black group instead of the multiracial one, unlike elsewhere in Latin America, where people with a high proportion of African descent tend to classify themselves as mixed), more people would report themselves as white and pardo in Brazil (47.7% and 42.4% of the population as of 2010, respectively), because research indicates its population has, on average, between 65% and 80% autosomal European ancestry (as well as >35% European mt-DNA and >95% European Y-DNA).Brazilian DNA is nearly 80% European, indicates study.NMO Godinho O impacto das migrações na constituição genética de populações latino-americanas. PhD Thesis, Universidade de Brasília (2008).
Ethnic groups in Brazil (census data)
Year | White | Black | Pardo
1872 | 3,787,289 | 1,954,452 | 4,188,737
1940 | 26,171,778 | 6,035,869 | 8,744,365
1991 | 75,704,927 | 7,335,136 | 62,316,064
Ethnic groups in Brazil (1872 and 1890)
Year | Whites | Pardos | Blacks | Indians | Total
1872 | 38.1% | 38.3% | 19.7% | 3.9% | 100%
1890 | 44.0% | 32.4% | 14.6% | 9% | 100%
This is not surprising, though: while the greatest number of slaves imported from Africa were sent to Brazil, totalling roughly 3.5 million people, they lived in such miserable conditions that male African Y-DNA there is comparatively rare, owing to the lack of resources and time involved in raising children, so that most African descent originally came from relations between white masters and female slaves. From the last decades of the Empire until the 1950s, the proportion of the white population increased significantly while Brazil welcomed 5.5 million immigrants between 1821 and 1932, not far behind its neighbor Argentina with 6.4 million,Argentina. by Arthur P. Whitaker. New Jersey: Prentice Hall Inc, 1984. Cited in Yale immigration study and it received more European immigrants in its colonial history than the United States: between 1500 and 1760, 700,000 Europeans settled in Brazil, while 530,000 Europeans settled in the United States over the same period.Renato Pinto Venâncio, "Presença portuguesa: de colonizadores a imigrantes" i.e. Portuguese presence: from colonizers to immigrants, chap. 3 of Brasil: 500 anos de povoamento (IBGE). Relevant extract available here Thus, the historical construction of race in Brazilian society has dealt primarily with gradations between persons of predominantly European ancestry and small minority groups with comparatively less European ancestry in recent times.
European Union
According to the European Council:
The European Union uses the terms racial origin and ethnic origin synonymously in its documents and according to it "the use of the term 'racial origin' in this directive does not imply an acceptance of such [racial] theories". Haney López warns that using "race" as a category within the law tends to legitimize its existence in the popular imagination. In the diverse geographic context of Europe, ethnicity and ethnic origin are arguably more resonant and are less encumbered by the ideological baggage associated with "race". In the European context, the historical resonance of "race" underscores its problematic nature. In some states, it is strongly associated with laws promulgated by the Nazi and Fascist governments in Europe during the 1930s and 1940s. Indeed, in 1996, the European Parliament adopted a resolution stating that "the term should therefore be avoided in all official texts".
The concept of racial origin relies on the notion that human beings can be separated into biologically distinct "races", an idea generally rejected by the scientific community. Since all human beings belong to the same species, the ECRI (European Commission against Racism and Intolerance) rejects theories based on the existence of different "races". However, in its Recommendation, ECRI uses this term in order to ensure that those persons who are generally and erroneously perceived as belonging to "another race" are not excluded from the protection provided for by the legislation. The law claims to reject the existence of "race", yet penalizes situations where someone is treated less favourably on this ground.
France
Since the end of the Second World War, France has become an ethnically diverse country. Today, approximately five percent of the French population is non-European and non-white. This does not approach the number of non-white citizens in the United States (roughly 28–37%, depending on how Latinos are classified; see Demographics of the United States). Nevertheless, it amounts to at least three million people, and has forced the issues of ethnic diversity onto the French policy agenda. France has developed an approach to dealing with ethnic problems that stands in contrast to that of many advanced, industrialized countries. Unlike the United States, Britain, or even the Netherlands, France maintains a "color-blind" model of public policy. This means that it targets virtually no policies directly at racial or ethnic groups. Instead, it uses geographic or class criteria to address issues of social inequalities. It has, however, developed an extensive anti-racist policy repertoire since the early 1970s. Until recently, French policies focused primarily on issues of hate speech—going much further than their American counterparts—and relatively less on issues of discrimination in jobs, housing, and in provision of goods and services.Race Policy in France by Erik Bleich, Middlebury College, 2012-05-01
United States
In the United States, views of race that see racial groups as defined genetically are common in the biological sciences although controversial, whereas the social constructionist view is dominant in the social sciences.
The immigrants to the Americas came from every region of Europe, Africa, and Asia. They mixed among themselves and with the indigenous inhabitants of the continent. In the United States most people who self-identify as African–American have some European ancestors, while many people who identify as European American have some African or Amerindian ancestors.
Since the early history of the United States, Amerindians, African–Americans, and European Americans have been classified as belonging to different races. Efforts to track mixing between groups led to a proliferation of categories, such as mulatto and octoroon. The criteria for membership in these races diverged in the late 19th century. During Reconstruction, increasing numbers of Americans began to consider anyone with "one drop" of known "Black blood" to be Black, regardless of appearance. By the early 20th century, this notion was made statutory in many states. Amerindians continue to be defined by a certain percentage of "Indian blood" (called blood quantum). To be White one had to have perceived "pure" White ancestry. The one-drop rule or hypodescent rule refers to the convention of defining a person as racially black if he or she has any known African ancestry. This rule meant that those who were mixed race but with some discernible African ancestry were defined as black. The one-drop rule is specific not only to those with African ancestry but also to the United States, making it a distinctly African-American experience.
The decennial censuses conducted since 1790 in the United States created an incentive to establish racial categories and fit people into these categories.
The term "Hispanic" as an ethnonym emerged in the 20th century with the rise of migration of laborers from the Spanish-speaking countries of Latin America to the United States. Today, the word "Latino" is often used as a synonym for "Hispanic". The definitions of both terms are non-race specific, and include people who consider themselves to be of distinct races (Black, White, Amerindian, Asian, and mixed groups). However, there is a common misconception in the US that Hispanic/Latino is a raceHorsman, Reginald, Race and Manifest Destiny: The Origins of American Radial Anglo-Saxonism, Harvard University Press, Cambridge, Massachusetts, 1981 p. 210. This reference is speaking in historic terms bt there is not reason to think that this perception has altered much or sometimes even that national origins such as Mexican, Cuban, Colombian, Salvadoran, etc. are races. In contrast to "Latino" or "Hispanic", "Anglo" refers to non-Hispanic White Americans or non-Hispanic European Americans, most of whom speak the English language but are not necessarily of English descent.
Views across disciplines over time
In Poland, the race concept was rejected by 25 percent of anthropologists in 2001, although: "Unlike the U.S. anthropologists, Polish anthropologists tend to regard race as a term without taxonomic value, often as a substitute for population."
Wang, Štrkalj et al. (2003) examined the use of race as a biological concept in research papers published in China's only biological anthropology journal, Acta Anthropologica Sinica. The study showed that the race concept was widely used among Chinese anthropologists. In a 2007 review paper, Štrkalj suggested that the stark contrast of the racial approach between the United States and China was due to the fact that race is a factor for social cohesion among the ethnically diverse people of China, whereas "race" is a very sensitive issue in America and the racial approach is considered to undermine social cohesion - with the result that in the socio-political context of US academia, scientists are encouraged not to use racial categories, whereas in China they are encouraged to use them.
Lieberman et al. in a 2004 study researched the acceptance of race as a concept among anthropologists in the United States, Canada, the Spanish speaking areas, Europe, Russia and China. Rejection of race ranged from high to low, with the highest rejection rate in the United States and Canada, a moderate rejection rate in Europe, and the lowest rejection rate in Russia and China. Methods used in the studies reported included questionnaires and content analysis.
Kaszycka et al. (2009) in 2002–2003 surveyed European anthropologists' opinions toward the biological race concept. Three factors, country of academic education, discipline, and age, were found to be significant in differentiating the replies. Those educated in Western Europe, physical anthropologists, and middle-aged persons rejected race more frequently than those educated in Eastern Europe, people in other branches of science, and those from both younger and older generations. The survey shows that "the views on race are sociopolitically (ideologically) influenced and highly dependent on education."
United States
One result of debates over the meaning and validity of the concept of race is that the current literature across different disciplines regarding human variation lacks consensus, though within some fields, such as some branches of anthropology, there is strong consensus. Some studies use the word race in its early essentialist taxonomic sense. Many others still use the term race, but use it to mean a population, clade, or haplogroup. Others eschew the concept of race altogether, and use the concept of population as a less problematic unit of analysis.
Eduardo Bonilla-Silva, sociology professor at Duke University, remarks,Race, Class, and Gender in the United States (text only) 7th (Seventh) edition by P. S. Rothenberg p131 "I contend that racism is, more than anything else, a matter of group power; it is about a dominant racial group (whites) striving to maintain its systemic advantages and minorities fighting to subvert the racial status quo."Eduardo Bonilla-Silva, Racism Without Racists (Second Edition) (2006), Rowman and Littlefield The types of practices that take place under this new color-blind racism are subtle, institutionalized, and supposedly not racial. Color-blind racism thrives on the idea that race is no longer an issue in the United States. There are contradictions between the alleged color-blindness of most whites and the persistence of a color-coded system of inequality.
U.S. anthropology
The concept of biological race has declined significantly in frequency of use in physical anthropology in the United States during the 20th century. A majority of physical anthropologists in the United States have rejected the concept of biological races.The decline of race in American physical anthropology Leonard Lieberman, Rodney C. Kirk, Michael Corcoran. 2003. Department of Sociology and Anthropology, Central Michigan University, Mt. Pleasant, MI. 48859, USA Since 1932, an increasing number of college textbooks introducing physical anthropology have rejected race as a valid concept: from 1932 to 1976, only seven out of thirty-two rejected race; from 1975 to 1984, thirteen out of thirty-three rejected race; from 1985 to 1993, thirteen out of nineteen rejected race. According to one academic journal entry, where 78 percent of the articles in the 1931 Journal of Physical Anthropology employed these or nearly synonymous terms reflecting a bio-race paradigm, only 36 percent did so in 1965, and just 28 percent did in 1996.
The "Statement on 'Race'" (1998) composed by a select committee of anthropologists and issued by the executive board of the American Anthropological Association as a statement they "believe [...] represents generally the contemporary thinking and scholarly positions of a majority of anthropologists", declares:
A survey taken in 1985 asked 1,200 American scientists whether they disagreed with the following proposition: "There are biological races in the species Homo sapiens." The responses for anthropologists were:
physical anthropologists 41%
cultural anthropologists 53%
The figure for physical anthropologists at PhD granting departments was slightly higher, rising from 41% to 42%, with 50% agreeing. Lieberman's study also showed that more women reject the concept of race than men. This survey, however, did not specify any particular definition of race (although it did clearly specify biological race within the species Homo sapiens); it is difficult to say whether those who supported the statement thought of race in taxonomic or population terms.
The same survey, taken in 1999, showed the following changing results for anthropologists:
physical anthropologists 69%
cultural anthropologists 80%
However, a line of research conducted by Cartmill (1998) seemed to limit the scope of Lieberman's finding that there was "a significant degree of change in the status of the race concept". Goran Štrkalj has argued that this may be because Lieberman and collaborators had looked at all the members of the American Anthropological Association irrespective of their field of research interest, while Cartmill had looked specifically at biological anthropologists interested in human variation.
According to the 2000 edition of a popular physical anthropology textbook, forensic anthropologists are overwhelmingly in support of the idea of the basic biological reality of human races. Forensic physical anthropologist and professor George W. Gill has said that the idea that race is only skin deep "is simply not true, as any experienced forensic anthropologist will affirm" and "Many morphological features tend to follow geographic boundaries coinciding often with climatic zones. This is not surprising since the selective forces of climate are probably the primary forces of nature that have shaped human races with regard not only to skin color and hair form but also the underlying bony structures of the nose, cheekbones, etc. (For example, more prominent noses humidify air better.)" While he can see good arguments for both sides, the complete denial of the opposing evidence "seems to stem largely from socio-political motivation and not science at all". He also states that many biological anthropologists see races as real yet "not one introductory textbook of physical anthropology even presents that perspective as a possibility. In a case as flagrant as this, we are not dealing with science but rather with blatant, politically motivated censorship".
In partial response to Gill's statement, Professor of Biological Anthropology C. Loring Brace argues that the reason laymen and biological anthropologists can determine the geographic ancestry of an individual can be explained by the fact that biological characteristics are clinally distributed across the planet, and that does not translate into the concept of race. He states:
"Race" is still sometimes used within forensic anthropology (when analyzing skeletal remains), biomedical research, and race-based medicine. Brace has criticized this, the practice of forensic anthropologists for using the controversial concept "race" out of convention when they in fact should be talking about regional ancestry. He argues that while forensic anthropologists can determine that a skeletal remain comes from a person with ancestors in a specific region of Africa, categorizing that skeletal as being "black" is a socially constructed category that is only meaningful in the particular context of the United States, and which is not itself scientifically valid.
Other fields
In the same 1985 survey, 16% of the surveyed biologists and 36% of the surveyed developmental psychologists disagreed with the proposition: "There are biological races in the species Homo sapiens."
The authors of the study also examined 77 college textbooks in biology and 69 in physical anthropology published between 1932 and 1989. Physical anthropology texts argued that biological races exist until the 1970s, when they began to argue that races do not exist. In contrast, biology textbooks did not undergo such a reversal but many instead dropped their discussion of race altogether. The authors attributed this to biologists trying to avoid discussing the political implications of racial classifications and to the ongoing discussions in biology about the validity of the concept of "subspecies". The authors also noted that some widely used textbooks in biology, such as Douglas J. Futuyma's 1986 Evolutionary Biology, had abandoned the race concept: "The concept of race, masking the overwhelming genetic similarity of all peoples and the mosaic patterns of variation that do not correspond to racial divisions, is not only socially dysfunctional but is biologically indefensible as well (pp. 518–519)."
A 1994 examination of 32 English sport/exercise science textbooks found that 7 (21.9%) claimed that there are biophysical differences due to race that might explain differences in sports performance, 24 (75%) neither mentioned nor refuted the concept, and 1 (3.1%) expressed caution with the idea.
In February 2001, the editors of Archives of Pediatrics and Adolescent Medicine asked "authors to not use race and ethnicity when there is no biological, scientific, or sociological reason for doing so." The editors also stated that "analysis by race and ethnicity has become an analytical knee-jerk reflex." Nature Genetics now asks authors to "explain why they make use of particular ethnic groups or populations, and how classification was achieved."
Morning (2008) looked at high school biology textbooks over the 1952–2002 period and initially found a similar pattern, with the share of textbooks directly discussing race falling from 92% to only 35% in the 1983–92 period. It has since risen somewhat, to 43%. More indirect and brief discussions of race in the context of medical disorders have increased from none to 93% of textbooks. In general, the material on race has moved from surface traits to genetics and evolutionary history. The study argues that the textbooks' fundamental message about the existence of races has changed little.
Gissis (2008) examined several important American and British journals in genetics, epidemiology and medicine for their content during the 1946–2003 period, writing: "Based upon my findings I argue that the category of race only seemingly disappeared from scientific discourse after World War II and has had a fluctuating yet continuous use during the time span from 1946 to 2003, and has even become more pronounced from the early 1970s on".
In a 2008 study, 33 health services researchers from differing geographic regions were interviewed. The researchers recognized the problems with racial and ethnic variables but the majority still believed these variables were necessary and useful.
A 2010 examination of 18 widely used English anatomy textbooks found that they all represented human biological variation in superficial and outdated ways, many of them making use of the race concept in ways that were current in 1950s anthropology. The authors recommended that anatomical education should describe human anatomical variation in more detail and rely on newer research that demonstrates the inadequacies of simple racial typologies.
Political and practical uses
Biomedicine
In the United States, federal government policy promotes the use of racially categorized data to identify and address health disparities between racial or ethnic groups. In clinical settings, race has sometimes been considered in the diagnosis and treatment of medical conditions. Doctors have noted that some medical conditions are more prevalent in certain racial or ethnic groups than in others, without being sure of the cause of those differences. Recent interest in race-based medicine, or race-targeted pharmacogenomics, has been fueled by the proliferation of human genetic data which followed the decoding of the human genome in the first decade of the twenty-first century. There is an active debate among biomedical researchers about the meaning and importance of race in their research. Proponents of the use of racial categories in biomedicine argue that continued use of racial categorizations in biomedical research and clinical practice makes possible the application of new genetic findings, and provides a clue to diagnosis.
Other researchers point out that finding a difference in disease prevalence between two socially defined groups does not necessarily imply genetic causation of the difference. They suggest that medical practices should maintain their focus on the individual rather than an individual's membership in any group. They argue that overemphasizing genetic contributions to health disparities carries various risks such as reinforcing stereotypes, promoting racism or ignoring the contribution of non-genetic factors to health disparities. International epidemiological data show that living conditions rather than race make the biggest difference in health outcomes even for diseases that have "race-specific" treatments. Some studies have found that patients are reluctant to accept racial categorization in medical practice.
Law enforcement
In an attempt to provide general descriptions that may facilitate the job of law enforcement officers seeking to apprehend suspects, the United States FBI employs the term "race" to summarize the general appearance (skin color, hair texture, eye shape, and other such easily noticed characteristics) of individuals whom they are attempting to apprehend. From the perspective of law enforcement officers, it is generally more important to arrive at a description that will readily suggest the general appearance of an individual than to make a scientifically valid categorization by DNA or other such means. Thus, in addition to assigning a wanted individual to a racial category, such a description will include: height, weight, eye color, scars and other distinguishing characteristics.
Criminal justice agencies in England and Wales use at least two separate racial/ethnic classification systems when reporting crime, as of 2010. One is the system used in the 2001 Census when individuals identify themselves as belonging to a particular ethnic group: W1 (White-British), W2 (White-Irish), W9 (Any other white background); M1 (White and black Caribbean), M2 (White and black African), M3 (White and Asian), M9 (Any other mixed background); A1 (Asian-Indian), A2 (Asian-Pakistani), A3 (Asian-Bangladeshi), A9 (Any other Asian background); B1 (Black Caribbean), B2 (Black African), B3 (Any other black background); O1 (Chinese), O9 (Any other). The other is the set of categories used by the police when they visually identify someone as belonging to an ethnic group, e.g. at the time of a stop and search or an arrest: White – North European (IC1), White – South European (IC2), Black (IC3), Asian (IC4), Chinese, Japanese, or South East Asian (IC5), Middle Eastern (IC6), and Unknown (IC0). "IC" stands for "Identification Code;" these items are also referred to as Phoenix classifications."Statistics on Race and the Criminal Justice System 2010, Appendix C: Classifications of ethnicity". BBC News. 15 June 2007. Retrieved 24 September 2014. Officers are instructed to "record the response that has been given" even if the person gives an answer which may be incorrect; their own perception of the person's ethnic background is recorded separately."Suffolk Constabulary Policies & Procedures: Encounter and Stop and Search" Retrieved 24 September 2014. Comparability of the information being recorded by officers was brought into question by the Office for National Statistics (ONS) in September 2007, as part of its Equality Data Review; one problem cited was the number of reports that contained an ethnicity of "Not Stated."
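For illustration only, the two classification schemes described above can be represented as simple lookup tables. The code labels are transcribed from the listing above, but the Python structure itself, including the idea of keeping both fields on one record, is an assumption of this sketch and not an official data format used by those agencies.

```python
# Self-identified ethnic-group codes (2001 Census classification), as listed above.
census_codes = {
    "W1": "White - British",
    "W2": "White - Irish",
    "W9": "Any other white background",
    "M1": "White and black Caribbean",
    "M2": "White and black African",
    "M3": "White and Asian",
    "M9": "Any other mixed background",
    "A1": "Asian - Indian",
    "A2": "Asian - Pakistani",
    "A3": "Asian - Bangladeshi",
    "A9": "Any other Asian background",
    "B1": "Black Caribbean",
    "B2": "Black African",
    "B3": "Any other black background",
    "O1": "Chinese",
    "O9": "Any other",
}

# Officer-perceived "Identification Code" (Phoenix) classifications, as listed above.
ic_codes = {
    "IC1": "White - North European",
    "IC2": "White - South European",
    "IC3": "Black",
    "IC4": "Asian",
    "IC5": "Chinese, Japanese, or South East Asian",
    "IC6": "Middle Eastern",
    "IC0": "Unknown",
}

# A hypothetical record keeping the self-defined and officer-perceived fields separately.
record = {"self_defined_ethnicity": census_codes["A2"],
          "officer_perceived_ethnicity": ic_codes["IC4"]}
print(record)
```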
In many countries, such as France, the state is legally banned from maintaining data based on race, which often makes the police issue wanted notices to the public that include labels like "dark skin complexion", etc.
In the United States, the practice of racial profiling has been ruled to be both unconstitutional and a violation of civil rights. There is active debate regarding the cause of the marked correlation between recorded crimes, punishments meted out, and the racial makeup of the country's population. Many consider de facto racial profiling an example of institutional racism in law enforcement. The history of misusing racial categories to disadvantage one or more groups, or to protect and advantage another, has a clear impact on the debate over the legitimate use by government of known phenotypical or genotypical characteristics tied to the presumed race of victims and perpetrators.
Mass incarceration in the United States disproportionately impacts African American and Latino communities. Michelle Alexander, author of The New Jim Crow: Mass Incarceration in the Age of Colorblindness (2010), argues that mass incarceration is best understood not only as a system of overcrowded prisons but also as "the larger web of laws, rules, policies, and customs that control those labeled criminals both in and out of prison."Michelle Alexander, The New Jim Crow: Mass Incarceration in the Age of Colorblindness (New York, NY: The New Press, 2010), 13. She defines it further as "a system that locks people not only behind actual bars in actual prisons, but also behind virtual bars and virtual walls", illustrating the second-class citizenship imposed on a disproportionate number of people of color, specifically African-Americans. She compares mass incarceration to Jim Crow laws, stating that both work as racial caste systems.Michelle Alexander, The New Jim Crow: Mass Incarceration in the Age of Colorblindness (New York, NY: The New Press, 2010), 12.
Recent work using DNA cluster analysis to infer racial background has been used by some criminal investigators to narrow their search for the identity of both suspects and victims. Proponents of DNA profiling in criminal investigations cite cases where leads based on DNA analysis proved useful, but the practice remains controversial among medical ethicists, defense lawyers, and some in law enforcement.
Forensic anthropology
Similarly, forensic anthropologists draw on highly heritable morphological features of human remains (e.g. cranial measurements) to aid in the identification of the body, including in terms of race. In a 1992 article, anthropologist Norman Sauer noted that anthropologists had generally abandoned the concept of race as a valid representation of human biological diversity, except for forensic anthropologists. He asked, "If races don't exist, why are forensic anthropologists so good at identifying them?"
Anthropologist C. Loring Brace took a different approach; in association with a NOVA program about race in 2000, he wrote an essay opposing use of the term.
A 2002 study found that about 13% of human craniometric variation existed among regions, about 6% among local populations within the same region, and about 81% within local populations. In contrast, the opposite pattern was observed for skin color (which is often used to define race), with about 88% of the variation occurring among regions. The study concluded that "The apportionment of genetic diversity in skin color is atypical, and cannot be used for purposes of classification."
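The apportionment reported by the study can be summarized as a decomposition of total variation; the expressions below simply restate the percentages given above, with the skin-color remainder lumped into a single within-region term.

\[
\underbrace{81\%}_{\text{within populations}} \;+\; \underbrace{6\%}_{\text{among populations within regions}} \;+\; \underbrace{13\%}_{\text{among regions}} \;=\; 100\% \quad \text{(craniometric variation)}
\]
\[
\underbrace{\approx 12\%}_{\text{within regions, all levels}} \;+\; \underbrace{88\%}_{\text{among regions}} \;=\; 100\% \quad \text{(skin color)}
\]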
Commercial determination of ancestry
New research in molecular genetics, and the marketing to the general public of genetic identities based on analysis of one's Y chromosome, mtDNA, or autosomal DNA in the form of "Personalized Genetic Histories" (PGH), have caused debate.
Typically, a consumer of a commercial PGH service sends in a sample of DNA, which is analyzed by molecular biologists, and receives a report on ancestry. Shriver and Kittles noted that the general public was increasingly interested in such tests despite, in some cases, a lack of understanding of what the results represent.
Through these reports, advances in molecular genetics are used to create or confirm stories individuals have about social identities. Abu el-Haj argued that genetic lineages, like older notions of race, suggest some idea of biological relatedness. But, unlike older notions of race, they are not directly connected to claims about human behaviour or character. She said that "postgenomics does seem to be giving race a new lease on life."
Abu el-Haj argues that genomics and the mapping of lineages and clusters liberates "the new racial science from the older one by disentangling ancestry from culture and capacity." As an example, she refers to recent work by Hammer et al., which aimed to test the claim that present-day Jews are more closely related to one another than to neighbouring non-Jewish populations. Hammer et al. found that the degree of genetic similarity among Jews shifted depending on the locus investigated, and suggested that this was the result of natural selection acting on particular loci. They focused on the non-recombining Y-chromosome to "circumvent some of the complications associated with selection".
As another example, she points to work by Thomas et al., who sought to distinguish between the Y chromosomes of Jewish priests (Kohanim; in Judaism, membership in the priesthood is passed on through the father's line) and the Y chromosomes of non-Jews. Abu el-Haj concluded that this new "race science" calls attention to the importance of "ancestry" (narrowly defined, as it does not include all ancestors) in some religions and in popular culture, and to people's desire to use science to confirm their claims about ancestry; this "race science", she argues, is fundamentally different from older notions of race that were used to explain differences in human behaviour or social status.
Stephan Palmié has responded to Abu el-Haj's claim that genetic lineages make possible a new, politically, economically, and socially benign notion of race and racial difference by suggesting that efforts to link genetic history and personal identity will inevitably "ground present social arrangements in a time-hallowed past", that is, use biology to explain cultural differences and social inequalities.
One problem with these assignments is admixture. Many people have a highly varied ancestry. For example, in the United States, colonial and early federal history were periods of numerous interracial relationships, both outside and inside slavery. As a result, a majority of people who identify as African American have some European ancestors, and many people who identify as white have some African ancestors. In a survey of college students at a northeastern U.S. university who identified as "white", about 30% were estimated to have up to 10% African ancestry.
On the other hand, there are tests that rely on correlations in allele frequencies; when allele frequencies correlate across groups of people, those groups are often called clusters. These tests use informative alleles known as ancestry-informative markers (AIMs) and compare an individual's genotype with contemporary reference samples from certain parts of the world to estimate the likely proportion of ancestry attributable to each reference population.
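As a rough illustration of how such tests work, the sketch below estimates a two-population admixture proportion by maximum likelihood from a handful of markers. The marker names, allele frequencies, and two-population model are invented for illustration; commercial services use hundreds to thousands of AIMs and considerably more sophisticated statistical models.

# Toy maximum-likelihood admixture estimate; all reference data are invented.

# Hypothetical frequencies of the "A" allele at three AIMs in two reference panels.
REF_FREQS = {
    "pop1": {"aim1": 0.90, "aim2": 0.15, "aim3": 0.70},
    "pop2": {"aim1": 0.20, "aim2": 0.80, "aim3": 0.10},
}

def genotype_likelihood(genotype, theta):
    """Likelihood of a genotype (count of 'A' alleles per AIM, 0/1/2) given
    admixture proportion theta from pop1 and 1 - theta from pop2, assuming
    Hardy-Weinberg proportions and independent markers."""
    like = 1.0
    for aim, count in genotype.items():
        p = theta * REF_FREQS["pop1"][aim] + (1 - theta) * REF_FREQS["pop2"][aim]
        hw = {2: p * p, 1: 2 * p * (1 - p), 0: (1 - p) ** 2}
        like *= hw[count]
    return like

def estimate_admixture(genotype, grid=101):
    """Grid-search the admixture proportion that maximizes the likelihood."""
    thetas = [i / (grid - 1) for i in range(grid)]
    return max(thetas, key=lambda t: genotype_likelihood(genotype, t))

if __name__ == "__main__":
    individual = {"aim1": 2, "aim2": 1, "aim3": 1}  # allele counts at each AIM
    print("Estimated proportion of pop1 ancestry:", estimate_admixture(individual))

As the surrounding text notes, the outcome of such a calculation depends heavily on which markers are chosen and which contemporary populations serve as references.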
In a recent Public Broadcasting Service (PBS) programme on the subject of genetic ancestry testing, the academic Henry Louis Gates, who identifies as African American, said that he "wasn't thrilled with the AIM results (it turns out that 50 percent of his ancestors are likely European)." He said there had been family stories of white ancestors, but this was a higher proportion than he expected.
In 2003, Charles Rotimi, of Howard University's National Human Genome Center, argued that "the nature or appearance of genetic clustering (grouping) of people is a function of how populations are sampled, of how criteria for boundaries between clusters are set, and of the level of resolution used." As these decisions may each bias the results, he concluded that people should be very cautious about relating genetic lineages or clusters to their personal sense of identity.
On the other hand, Rosenberg (2005) argued that if enough genetic markers and subjects are analyzed, the clusters found are consistent. The number of genetic markers a commercial service uses likely varies, although new technology has continually allowed increasing numbers to be analyzed. In the end, people usually base their individual identity more on family, community, and personal relationships than on such data.
See also
Breed
Clan
Cultural identity
Epicanthic fold
Ethnic nationalism
Ethnic stereotype
Eugenics
Human skin color
List of contemporary ethnic groups
Melanism
Multiracial
Nationalism
Nomen dubium – a scientific name that is of unknown or doubtful application.
Pre-Adamite
Race and health
Race and ethnicity in censuses (US)
Race of the future
Racialization
Raciolinguistics
Racism
Supremacism
Scientific racism
Races of Mankind, an exhibition by sculptor Malvina Hoffman for the Field Museum of Natural History
The Race Question
References
Further reading
This review of current research includes chapters by Ian Whitmarsh, David S. Jones, Jonathan Kahn, Pamela Sankar, Steven Epstein, Simon M. Outram, George T. H. Ellison, Richard Tutton, Andrew Smart, Richard Ashcroft, Paul Martin, George T. H. Ellison, Amy Hinterberger, Joan H. Fujimura, Ramya Rajagopalan, Pilar N. Ossorio, Kjell A. Doksum, Jay S. Kaufman, Richard S. Cooper, Angela C. Jenks, Nancy Krieger, and Dorothy Roberts.
External links
Race: the Power of an Illusion companion website to California Newsreel feature, 2003, PBS
James, Michael (2008) "Race", Stanford Encyclopedia of Philosophy.
"Understanding Race", American Anthropological Association's educational website, with links for primary school educators and researchers
Official statements and standards
"The Race Question", UNESCO, 1950
US Census Bureau: Definition of Race
"Standards for Maintaining, Collecting, and Presenting Federal Data on Race and Ethnicity", Federal Register, 1997, Department of Interior
RACE: Are we so different?, a public education program by the American Anthropological Association.
Popular press
"Race (human)", Encyclopædia Britannica Online.
"The Myth of Race", Medicine Magazine, 2007.
Is Race "Real"?, a forum by the Social Science Research Council; includes A.M. Leroi's 2005 New York Times op-ed advocating biological conceptions of race, responses from scholars in various fields, and further exchanges between Leroi and respondents
Richard Dawkins: "Race and creation" (extract from The Ancestor's Tale: A Pilgrimage to the Dawn of Life), in Prospect Magazine, October 2004
Category:Kinship and descent
Category:Social constructionism
Category:Social inequality
Category:Anthropology
New Haven, Connecticut
New Haven,Kyff, Rob (2008). "Pronunciation For Novice Nutmeggers." Hartford Courant. Hartford, CT. in the U.S. state of Connecticut, is the principal municipality in Greater New Haven, which had a total population of 862,477 in 2010. It is located on New Haven Harbor on the northern shore of Long Island Sound in New Haven County, Connecticut, and is part of the New York metropolitan area. It is the second-largest city in Connecticut (after Bridgeport), with a population of 129,779 people as of the 2010 United States Census. According to a 1 July 2012 estimate by the Census Bureau, the city had a population of 130,741.
New Haven was founded in 1638 by English Puritans, and a year later eight streets were laid out in a four-by-four grid, creating what is now commonly known as the "Nine Square Plan", now recognized by the American Institute of Certified Planners as a National Planning Landmark. The central common block is New Haven Green, a square, now a National Historic Landmark and the center of Downtown New Haven.
New Haven is the home of Yale University. The university is an integral part of the city's economy, being New Haven's biggest taxpayer and employer. Health care (hospitals and biotechnology), professional services (legal, architectural, marketing, and engineering), financial services, and retail trade also help to form an economic base for the city.
The city served as co-capital of Connecticut from 1701 until 1873, when sole governance was transferred to the more centrally located city of Hartford. New Haven has since billed itself as the "Cultural Capital of Connecticut" for its supply of established theaters, museums, and music venues.
New Haven had the first public tree planting program in America, producing a canopy of mature trees (including some large elms) that gave New Haven the nickname "The Elm City".
History
Pre-colonial and colonial
Before Europeans arrived, the New Haven area was the home of the Quinnipiac tribe of Native Americans, who lived in villages around the harbor and subsisted off local fisheries and the farming of maize. The area was briefly visited by Dutch explorer Adriaen Block in 1614. Dutch traders set up a small trading system of beaver pelts with the local inhabitants, but trade was sporadic and the Dutch did not settle permanently in the area.
In 1637 a small party of Puritans reconnoitered the New Haven harbor area and wintered over. In April 1638, the main party of five hundred Puritans who had left the Massachusetts Bay Colony under the leadership of the Reverend John Davenport and the London merchant Theophilus Eaton sailed into the harbor. These settlers hoped to establish what they considered a better theological community, with the government more closely linked to the church than the one they had left in Massachusetts, and they sought to take advantage of the harbor's excellent port capabilities. The Quinnipiacs, who were under attack by neighboring Pequots, sold their land to the settlers in return for protection.
thumb|left|The 1638 nine-square plan, with the extant New Haven Green at its center, continues to define New Haven's downtown
By 1640, the town's theocratic government and nine-square grid plan were in place, and the town was renamed Newhaven from Quinnipiac. However, the area north of New Haven remained Quinnipiac until 1678, when it was renamed Hamden. The settlement became the headquarters of the New Haven Colony. At the time, the New Haven Colony was separate from the Connecticut Colony, which had been established to the north centering on Hartford. One of the principal differences between the two colonies was that the New Haven colony was an intolerant theocracy that did not permit other churches to be established, while the Connecticut colony permitted the establishment of other churches.
Economic disaster struck the colony in 1646, however, when the town sent its first fully loaded ship of local goods back to England. This ship never reached the Old World, and its disappearance stymied New Haven's development in the face of the rising trade power of Boston and New Amsterdam. In 1660, founder John Davenport's wishes were fulfilled, and Hopkins School was founded in New Haven with money from the estate of Edward Hopkins.
thumb|right|250px|New Haven as it appeared in a 1786 engraving
In 1661, the judges who had signed the death warrant of Charles I of England were pursued by Charles II. Two judges, Colonel Edward Whalley and Colonel William Goffe, fled to New Haven to seek refuge from the king's forces. John Davenport arranged for these "Regicides" to hide in the West Rock hills northwest of the town. A third judge, John Dixwell, joined the other regicides at a later time.
New Haven became part of the Connecticut Colony in 1664, when the two colonies were merged under political pressure from England. According to folklore this was punishment for harboring the three judges; in reality, the merger was intended to strengthen the case for the takeover of nearby New Amsterdam, which was rapidly losing territory to migrants from Connecticut. Some members of the New Haven Colony seeking to establish a new theocracy elsewhere went on to establish Newark, New Jersey.
thumb|left|Connecticut Hall, built 1750–1756, is the oldest extant building at Yale
New Haven was made co-capital of Connecticut in 1701, a status it retained until 1873. In 1716, the Collegiate School relocated from Old Saybrook to New Haven and established New Haven as a center of learning. In 1718, the name of the Collegiate School was changed to Yale College in response to a large donation from British East India Company merchant Elihu Yale, former Governor of Madras.
For over a century, New Haven citizens had fought in the colonial militia alongside regular British forces, as in the French and Indian War. As the American Revolution approached, General David Wooster and other influential residents hoped that the conflict with the government in Britain could be resolved short of rebellion. On 23 April 1775, which is still celebrated in New Haven as Powder House Day, the Second Company, Governor's Foot Guard, of New Haven entered the struggle against the governing British parliament. Under Captain Benedict Arnold, they broke into the powder house to arm themselves and began a three-day march to Cambridge, Massachusetts. Other New Haven militia members were on hand to escort George Washington from his overnight stay in New Haven on his way to Cambridge. Contemporary reports, from both sides, remark on the New Haven volunteers' professional military bearing, including uniforms.
On July 5, 1779, 2,600 loyalists and British regulars under General William Tryon, governor of New York, landed in New Haven Harbor and raided the 3,500-person town. A militia of Yale students had been prepping for battle, and former Yale president and Yale Divinity School professor Naphtali Daggett rode out to confront the Redcoats. Yale president Ezra Stiles recounted in his diary that while he moved furniture in anticipation of battle, he still couldn't quite believe the revolution had begun. New Haven was not torched as the invaders did with Danbury in 1777, or Fairfield and Norwalk a week after the New Haven raid, so many of the town's colonial features were preserved.
Post-colonial
New Haven was incorporated as a city in 1784, and Roger Sherman, one of the signers of the Constitution and author of the "Connecticut Compromise", became the new city's first mayor.
Towns created from the original New Haven Colony (source: Connecticut Register and Manual)
New town | Split from | Incorporated
Wallingford | New Haven | 1670
Cheshire | Wallingford | 1780
Meriden | Wallingford | 1806
Branford | New Haven | 1685
North Branford | Branford | 1831
Woodbridge | New Haven and Milford | 1784
Bethany | Woodbridge | 1832
East Haven | New Haven | 1785
Hamden | New Haven | 1786
North Haven | New Haven | 1786
Orange | New Haven and Milford | 1822
West Haven | Orange | 1921
thumb|left|New Haven's harbor and long wharf as seen from Depot Tower, ca. 1849
The city's fortunes rose in the late 18th century with the inventions and industrial activity of Eli Whitney, a Yale graduate who remained in New Haven to develop the cotton gin and establish a gun-manufacturing factory in the northern part of the city near the Hamden town line. That area is still known as Whitneyville, and the main road through both towns is known as Whitney Avenue. The factory is now the Eli Whitney Museum, which has a particular emphasis on activities for children and exhibits pertaining to the A. C. Gilbert Company. His factory, along with that of Simeon North, and the lively clock-making and brass hardware sectors, contributed to making early Connecticut a powerful manufacturing economy; so many arms manufacturers sprang up that the state became known as "The Arsenal of America". It was in Whitney's gun-manufacturing plant that Samuel Colt developed the revolver in 1836. Many other talented machinists and firearms designers would go on to found successful firearms manufacturing companies in New Haven, including Oliver Winchester and O.F. Mossberg & Sons.
The Farmington Canal, created in the early 19th century, was a short-lived transporter of goods into the interior regions of Connecticut and Massachusetts, and ran from New Haven to Northampton, Massachusetts.
thumb|left|Site of the Winchester Repeating Arms Company, which has since 1981 been converted to Science Park at Yale, a complex for start-ups and technological firms
New Haven was home to one of the important early events in the burgeoning anti-slavery movement when, in 1839, the trial of mutineering Mende tribesmen being transported as slaves on the Spanish slaveship Amistad was held in New Haven's United States District Court. There is a statue of Joseph Cinqué, the informal leader of the slaves, beside City Hall. See "Museums" below for more information. Abraham Lincoln delivered a speech on slavery in New Haven in 1860, shortly before he secured the Republican nomination for President.
The American Civil War boosted the local economy with wartime purchases of industrial goods, including that of the New Haven Arms Company, which would later become the Winchester Repeating Arms Company. (Winchester would continue to produce arms in New Haven until 2006, and many of the buildings that were a part of the Winchester plant are now a part of the Winchester Repeating Arms Company Historic District.) After the war, population grew and doubled by the start of the 20th century, most notably due to the influx of immigrants from southern Europe, particularly Italy. Today, roughly half the populations of East Haven, West Haven, and North Haven are Italian-American. Jewish immigration to New Haven has left an enduring mark on the city. Westville was the center of Jewish life in New Haven, though today many have fanned out to suburban communities such as Woodbridge and Cheshire.
Late 20th century
250px|left|thumb|The historic New Haven Green, ca. 1919
New Haven's expansion continued during the two World Wars, with most new inhabitants being African Americans from the American South and Puerto Ricans. The city reached its peak population after World War II. New Haven's land area is small, which encouraged further development of new housing after 1950 in adjacent suburban towns. Moreover, as in other U.S. cities in the 1950s, New Haven began to suffer from an exodus of middle-class workers.
In 1954, then-mayor Richard C. Lee began some of the earliest major urban renewal projects in the United States. Certain sections of downtown New Haven were redeveloped to include museums, new office towers, a hotel, and large shopping complexes. Other parts of the city were affected by the construction of Interstate 95 along the Long Wharf section, Interstate 91, and the Oak Street Connector. The Oak Street Connector (Route 34), running between Interstate 95, downtown, and The Hill neighborhood, was originally intended as a highway to the city's western suburbs but was only completed as a highway to the downtown area, with the area to the west becoming a boulevard (See "Redevelopment" below).
In 1970, a series of criminal prosecutions against various members of the Black Panther Party took place in New Haven, inciting mass protests on the New Haven Green involving twelve thousand demonstrators and many well-known New Left political activists. (See "Political Culture" below for more information).
From the 1960s through the late 1990s, central areas of New Haven continued to decline both economically and in population, despite attempts to revive certain neighborhoods through renewal projects. In conjunction with its declining population, New Haven experienced a steep rise in its crime rate. In 2010, New Haven ranked as the 18th most dangerous city in America, albeit with a crime rating below the 200.00 threshold regarded as a significant safety benchmark.
Urban redevelopment
thumb|125px|right|The Connecticut Financial Center, completed in 1990, is the tallest building in New Haven
Since approximately 2000, many parts of downtown New Haven have been revitalized with new restaurants, nightlife, and small retail stores. In particular, the area surrounding the New Haven Green has experienced an influx of apartments and condominiums. In recent years, downtown retail options have increased with the opening of new stores such as Urban Outfitters, J.Crew, Origins, American Apparel, Gant Clothing, and an Apple Store, joining older stores such as Barnes & Noble and Raggs Clothing. In addition, two new supermarkets opened to serve downtown's growing residential population: a Stop & Shop just west of downtown, and Elm City Market, located one block from the Green, which opened in 2011. The recent turnaround of downtown New Haven has received positive press from various periodicals.
thumb|left|Whitney Avenue, one of downtown New Haven's principal commercial corridors
Major projects include the construction of a new downtown campus for Gateway Community College and a 32-story, 500-unit apartment/retail building called 360 State Street. The 360 State Street project is now occupied and is the largest residential building in Connecticut. A new boathouse and dock are planned for New Haven Harbor, and the linear Farmington Canal Trail park is set to extend into downtown New Haven within the coming year.http://www.cityofnewhaven.com/uploads/Gateway%20Community%20College.jpg Additionally, foundation and ramp work has begun to widen I-95 and create a new harbor crossing for New Haven, with an extradosed bridge to replace the 1950s-era Q Bridge. The city still hopes to redevelop the site of the New Haven Coliseum, which was demolished in 2007.
thumb|right|Recent decades have brought increased commercial activity to much of New Haven, including this stretch of upper State Street
In April 2009, the United States Supreme Court agreed to hear a suit over reverse discrimination brought by 18 white firefighters against the city. The suit involved the 2003 promotion test for the New Haven Fire Department. After the tests were scored, no black firefighters scored high enough to qualify for consideration for promotion, so the city announced that no one would be promoted. In the subsequent Ricci v. DeStefano decision the court found 5-4 that New Haven's decision to ignore the test results violated Title VII of the Civil Rights Act of 1964. As a result, a district court subsequently ordered the city to promote 14 of the white firefighters."New Haven Firefighter Should Have Intervened In Ricci Suit." Connecticut Law Tribune (2010).
In 2010 and 2011, state and federal funds were awarded to Connecticut (and Massachusetts) to construct the Hartford Line, with a southern terminus at New Haven's Union Station and a northern terminus at Springfield's Union Station. According to the White House, "This corridor [currently] has one train per day connecting communities in Connecticut and Massachusetts to the Northeast Corridor and Vermont. The vision for this corridor is to restore the alignment to its original route via the Knowledge Corridor in western Massachusetts, improving trip time and increasing the population base that can be served." Set for construction in 2013, the "Knowledge Corridor high speed intercity passenger rail" project will cost approximately $1 billion, and the ultimate northern terminus for the project is reported to be Montreal in Canada. Train speeds between New Haven and Springfield are expected to rise substantially, greatly increasing both cities' rail traffic.
Timeline of notable firsts
thumb|The world's first phonebook was made in New Haven in 1878.
1638: New Haven becomes the first planned city in America. (This is disputed.)
1776: Yale student David Bushnell invents the first American submarine.
1787: John Fitch builds the first steamboat.
1793: Eli Whitney invents the cotton gin.
1836: Samuel Colt develops the revolver in Whitney's factory.
1839: Charles Goodyear of New Haven discovers the process of vulcanizing rubber in Woburn, Massachusetts, and later perfects it and patents the process in nearby Springfield, Massachusetts.
1860: Philios P. Blake patents the first corkscrew.
1877: New Haven hosts the first Bell PSTN (telephone) switch office.
1878–1880: The District Telephone Company of New Haven creates the world's first telephone exchange and the first telephone directory and installs the first public phone. The company expanded and became the Connecticut Telephone Company, then the Southern New England Telephone Company (now part of AT&T).
1882: The Knights of Columbus are founded in New Haven. The city still serves as the world headquarters of the organization, which maintains a museum downtown.Pushing Boundaries – A History of the Knights of Columbus
1892: Local confectioner George C. Smith of the Bradley Smith Candy Co. invents the first lollipops.ConnTact.com Connecticut Business News Journal "Dates of Our Lives"
Late 18th century: The first public tree planting program in America takes place in New Haven, at the urging of native James Hillhouse.
1900: Louis Lassen, owner of Louis' Lunch, is credited with inventing the hamburger, as well as the steak sandwich.
1911: The Erector Set, the popular and culturally important construction toy, is invented in New Haven by A.C. Gilbert. It was manufactured by the A. C. Gilbert Company at Erector Square from 1913 until the company's bankruptcy in 1967.
1920: Among competing explanations, the Frisbee is said to have originated on the Yale campus, based on the tin pans of the Frisbie Pie Company which were tossed around by students on the New Haven Green.
1977: The first memorial to victims of the Holocaust on public land in America stands in New Haven's Edgewood Park at the corner of Whalley and West Park avenues. It was built with funds collected from the communityShifre Zamkov on the New Haven Holocaust Memorial and is maintained by Greater New Haven Holocaust Memory, Inc. The ashes of victims killed and cremated at Auschwitz are buried under the memorial.
The Greater New Haven Convention and Visitors Bureau has a more extensive list of New Haven firsts.
Geography
thumb|left|250px|View of the Quinnipiac River from Fair Haven
thumb|right|250px|Map of towns in the New Haven area
According to the United States Census Bureau, 6.67% of the city's total area is water and the remainder is land.
New Haven's best-known geographic features are its large deep harbor, and two reddish basalt trap rock ridges which rise to the northeast and northwest of the city core. These trap rocks are known respectively as East Rock and West Rock, and both serve as extensive parks. West Rock has been tunneled through to make way for the east-west passage of the Wilbur Cross Parkway (the only highway tunnel through a natural obstacle in Connecticut), and once served as the hideout of the "Regicides" (see: Regicides Trail). Most New Haveners refer to these men as "The Three Judges". East Rock features the prominent Soldiers and Sailors war monument on its peak as well as the "Great/Giant Steps" which run up the rock's cliffside.
The city is drained by three rivers; the West, Mill, and Quinnipiac, named in order from west to east. The West River discharges into West Haven Harbor, while the Mill and Quinnipiac rivers discharge into New Haven Harbor. Both harbors are embayments of Long Island Sound. In addition, several smaller streams flow through the city's neighborhoods, including Wintergreen Brook, the Beaver Ponds Outlet, Wilmot Brook, Belden Brook, and Prospect Creek. Not all of these small streams have continuous flow year-round.
Climate
New Haven lies in the transition between a humid continental climate (Köppen climate classification: Dfa) and a humid subtropical climate (Köppen Cfa), typical of much of the New York metropolitan area. Summers are hot and often humid, with hot weather on roughly 70 days per year. In summer, the Bermuda High creates a southerly flow of warm and humid air, with frequent thundershowers. Fall and spring are normally mild and of equal length. Winters are moderately cold, and both rain and snow fall in winter. The weather patterns that affect New Haven arrive from a primarily offshore direction, reducing the marine influence of Long Island Sound, although, as in other marine areas, differences in temperature between areas right along the coastline and areas a mile or two inland can be large at times. During summer heat waves, temperatures occasionally climb well above the seasonal norms, with even higher heat-index values.
Streetscape
thumb|250px|right|Aerial view of New Haven, looking northward with The Hill in the foreground
New Haven has a long tradition of urban planning and a purposeful design for the city's layout; it can claim one of the first planned layouts in the country. Upon founding, New Haven was laid out in a grid plan of nine square blocks; the central square was left open, in the tradition of many New England towns, as the city green (a commons area). The city also instituted the first public tree planting program in America. As in other cities, many of the elms that gave New Haven the nickname "Elm City" perished in the mid-20th century due to Dutch elm disease, although many have since been replanted. The New Haven Green is currently home to three separate historic churches which speak to the original theocratic nature of the city. The Green remains the social center of the city today. It was named a National Historic Landmark in 1970.
Downtown New Haven, occupied by nearly 7,000 residents, has a more residential character than most downtowns.CityOfNewHaven.com Comprehensive Report: New Haven pg3 The downtown area provides about half of the city's jobs and half of its tax base, and in recent years has filled with dozens of new upscale restaurants, several of which have garnered national praise (such as Ibiza, recognized by Esquire and Wine Spectator magazines as well as the New York Times as serving the best Spanish food in the country), in addition to shops and thousands of apartments and condominium units that contribute to the overall growth of the city.ConnTact.com
Neighborhoods
thumb|right|250px|The Quinnipiac River Historic District, located in the Fair Haven neighborhood, is one of dozens of listed historic districts in New Haven
The city has many distinct neighborhoods. In addition to Downtown, centered on the central business district and the Green, are the following neighborhoods: the west central neighborhoods of Dixwell and Dwight; the southern neighborhoods of The Hill, historic water-front City Point (or Oyster Point), and the harborside district of Long Wharf; the western neighborhoods of Edgewood, West River, Westville, Amity, and West Rock-Westhills; East Rock, Cedar Hill, Prospect Hill, and Newhallville in the northern side of town; the east central neighborhoods of Mill River and Wooster Square, an Italian-American neighborhood; Fair Haven, an immigrant community located between the Mill and Quinnipiac rivers; Quinnipiac Meadows and Fair Haven Heights across the Quinnipiac River; and facing the eastern side of the harbor, The Annex and East Shore (or Morris Cove).Harrison's illustrated guide to greater New Haven, (H2 Company, New Haven, 1995).Maps of the New Haven Neighborhoods (PDF) are available from the City of New Haven's City Plan Department. There are also quick traces from the above PDFs in Google Earth/Map Shapes of the New Haven Neighborhoods (KML).
Economy
thumb|280px|left|Data from City-Data.com
thumb|250px|right|Aerial view of the Port of New Haven
New Haven's economy originally was based in manufacturing, but the postwar period brought rapid industrial decline; the entire Northeast was affected, and medium-sized cities with large working-class populations, like New Haven, were hit particularly hard. Simultaneously, the growth and expansion of Yale University accelerated the economic shift. Today, over half (56%) of the city's economy is made up of services, in particular education and health care; Yale is the city's largest employer, followed by Yale – New Haven Hospital. Other large employers include Southern Connecticut State University, Assa Abloy Manufacturing, the Knights of Columbus headquarters, Higher One, Alexion Pharmaceuticals, Covidien and United Illuminating. Yale and Yale-New Haven are also among the largest employers in the state, and provide more positions paying $100,000 or more than any other employer in Connecticut.
Industry sectors: Agriculture (0.6%), Construction and Mining (4.9%), Manufacturing (2.9%), Transportation and Utilities (2.9%), Trade (21.7%), Finance and Real Estate (7.1%), Services (55.9%), Government (4.0%)
Headquarters
The Knights of Columbus, the world's largest Catholic fraternal service organization and a Fortune 1000 company, is headquartered in New Haven. Two more Fortune 1000 companies are based in Greater New Haven: the electrical equipment producers Hubbell, based in Orange, and Amphenol, based in Wallingford. Eight Courant 100 companies are based in Greater New Haven, with four headquartered in New Haven proper. New Haven-based companies traded on stock exchanges include NewAlliance Bank, the second-largest bank in Connecticut and fourth-largest in New England (NYSE: NAL); Higher One Holdings (NYSE: ONE), a financial services firm; United Illuminating, the electricity distributor for southern Connecticut (NYSE: UIL); Achillion Pharmaceuticals; Alexion Pharmaceuticals (NasdaqGS: ALXN); and Transpro Inc. (AMEX: TPR). Vion Pharmaceuticals is traded OTC (OTC BB: VIONQ.OB). Other notable companies based in the city include the Peter Paul Candy Manufacturing Company (the candy-making division of the Hershey Company) and the American division of Assa Abloy (one of the world's leading manufacturers of locks). The Southern New England Telephone Company (SNET) began operations in the city as the District Telephone Company of New Haven in 1878; the company remains headquartered in New Haven as a subsidiary of Frontier Communications and provides telephone service for all but two municipalities in Connecticut.AT&T SNET Fairfield County White Pages, Customer Service Guide page 14, "Local Toll-free Calling Areas", August 2006 edition
Demographics
Census data
right|thumb|Graph of New Haven demographics from the US Census, 1790–2010.
The U.S. Census Bureau reports a 2010 population of 129,779, with 47,094 households and 25,854 families within the city of New Haven. The population density is 6,859.8 people per square mile (2,648.6/km²). There are 52,941 housing units at an average density of 2,808.5 per square mile (1,084.4/km²). The racial makeup of the city is 42.6% White, 35.4% African American, 0.5% Native American, 4.6% Asian, 0.1% Pacific Islander, 12.9% from other races, and 3.9% from two or more races. Hispanic or Latino residents of any race were 27.4% of the population. Non-Hispanic Whites were 31.8% of the population in 2010, down from 69.6% in 1970. The city's demography is shifting rapidly: New Haven has always been a city of immigrants and currently the Latino population is growing rapidly. Previous influxes among ethnic groups have been African-Americans in the postwar era, and Irish, Italian and (to a lesser degree) Slavic peoples in the prewar period.
As of the 2010 census, of the 47,094 households, 29.3% have children under the age of 18 living with them, 27.5% include married couples living together, 22.9% have a female householder with no husband present, and 45.1% are non-families. 36.1% of all households are made up of individuals and 10.5% have someone living alone who is 65 years of age or older. The average household size is 2.40 and the average family size 3.19.
The ages of New Haven's residents are 25.4% under the age of 18, 16.4% from 18 to 24, 31.2% from 25 to 44, 16.7% from 45 to 64, and 10.2% who were 65 years of age or older. The median age is 29 years, which is significantly lower than the national average. There are 91.8 males per 100 females. For every 100 females age 18 and over, there are 87.6 males.
The median income for a household in the city is $29,604, and the median income for a family is $35,950. Median income for males is $33,605, compared with $28,424 for females. The per capita income for the city is $16,393. About 20.5% of families and 24.4% of the population live below the poverty line, including 32.2% of those under age 18 and 17.9% of those age 65 or over.
Other data
It is estimated that 14% of New Haven residents are pedestrian commuters, ranking it number four by highest percentage in the United States.List of U.S. cities with most pedestrian commuters This is primarily due to New Haven's small area and the presence of Yale University.
New Haven is a predominantly Roman Catholic city, as the city's Dominican, Irish, Italian, Mexican, Ecuadorian, and Puerto Rican populations are overwhelmingly Catholic. The city is part of the Archdiocese of Hartford. Jews also make up a considerable portion of the population, as do Black Baptists. There is a growing number of (mostly Puerto Rican) Pentecostals as well. There are churches for all major branches of Christianity within the city, multiple store-front churches, ministries (especially in working-class Latino and Black neighborhoods), a mosque, many synagogues (including two yeshivas), and other places of worship; the level of religious diversity in the city is high.
A study of New Haven's demographics, based on age, educational attainment, and race and ethnicity, found that its demographics were the closest of any American city to the national average.http://fivethirtyeight.com/features/normal-america-is-not-a-small-town-of-white-people/
Government
Political structure
right|thumb|Statue of Roman orator Cicero at the New Haven County Courthouse
New Haven is governed via the mayor-council system. Connecticut municipalities (like those of neighboring Massachusetts and Rhode Island) provide nearly all local services, such as fire and rescue, education, and snow removal, as county government in Connecticut was abolished in 1960.
thumb|left|New Haven City Hall
New Haven County merely refers to a grouping of towns and a judicial district, not a governmental entity. New Haven is a member of the South Central Connecticut Regional Council of Governments (SCRCOG), a regional agency created to facilitate coordination between area municipal governments and state and federal agencies, in the absence of county government.
Toni Harp is the mayor of New Haven. She was sworn in as the 50th mayor of New Haven on January 1, 2014 and is the first woman to hold that office.
The city council, called the Board of Alders, consists of thirty members, each elected from single-member wards. New Haven is served by the New Haven Police Department and the New Haven Fire Department.
New Haven lies within Connecticut's 3rd congressional district and has been represented by Rosa DeLauro since 1991. Martin Looney and Gary Holder-Winfield represent New Haven in the Connecticut State Senate, and the city lies within six districts (numbers 92 through 97) of the Connecticut House of Representatives.A map of Connecticut's House Districts can be found here and a list of members representing New Haven can be found here .
The Greater New Haven area is served by the New Haven Judicial District Court and the New Haven Superior Court, both headquartered at the New Haven County Courthouse. The federal District Court for the District of Connecticut has a New Haven facility, the Richard C. Lee United States Courthouse.
Political history
thumb|A portrait of Roger Sherman, signer of the Declaration of Independence and the U.S. Constitution, author of the Connecticut Compromise, and the first mayor of New Haven
New Haven is the birthplace of former president George W. Bush, who was born when his father, former president George H. W. Bush, was living in New Haven while a student at Yale. In addition to being the site of the college educations of both Presidents Bush, as Yale students, New Haven was also the temporary home of former presidents William Howard Taft, Gerald Ford, and Bill Clinton, as well as Secretary of State John Kerry. President Clinton met his wife, former U.S. Secretary of State Hillary Clinton, while the two were students at Yale Law School. Former vice presidents John C. Calhoun and Dick Cheney also studied in New Haven (although the latter did not graduate from Yale). Before the 2008 election, the last time there was not a person with ties to New Haven and Yale on either major party's ticket was 1968. James Hillhouse, a New Haven native, served as President pro tempore of the United States Senate in 1801.
A predominantly Democratic city, New Haven voters overwhelmingly supported Al Gore in the 2000 election, Yale graduate John Kerry in 2004, and Barack Obama in 2008 and 2012. In the 2008 election, New Haven County was third among all Connecticut counties in campaign contributions, after Fairfield and Hartford counties. (Connecticut, in turn, was ranked 14th among all states in total campaign contributions.)
New Haven was the subject of Who Governs? Democracy and Power in An American City, a very influential book in political science by preeminent Yale professor Robert A. Dahl, which includes an extensive history of the city and thorough description of its politics in the 1950s. New Haven's theocratic history is also mentioned several times by Alexis de Tocqueville in his classic volume on 19th-century American political life, Democracy in America.Tocqueville, Alexis. 2004. Democracy in America. Translated by Arthur Goldhammer. New York: The Library of America, pp. 39n, 41, 43. New Haven was the residence of conservative thinker William F. Buckley, Jr., in 1951, when he wrote his influential God and Man at Yale. William Lee Miller's The Fifteenth Ward and the Great Society (1966) similarly explores the relationship between local politics in New Haven and national political movements, focusing on Lyndon Johnson's Great Society and urban renewal.
George Williamson Crawford, a Yale Law School graduate, served as the city's first black corporation counsel from 1954 to 1962, under Mayor Richard C. Lee.
In 1970, the New Haven Black Panther trials took place, the largest and longest trials in Connecticut history. Black Panther Party co-founder Bobby Seale and ten other Party members were tried for murdering an alleged informant. Beginning on May Day, the city became a center of protest for 12,000 Panther supporters, college students, and New Left activists (including Jean Genet, Benjamin Spock, Abbie Hoffman, Jerry Rubin, and John Froines), who amassed on the New Haven Green, across the street from where the trials were being held. Violent confrontations between the demonstrators and the New Haven police occurred, and several bombs were set off in the area by radicals. The event became a rallying point for the New Left and critics of the Nixon Administration.
During the summer of 2007, New Haven was the center of protests by anti-immigration groups who opposed the city's program of offering municipal ID cards, known as the Elm City Resident Card, to illegal immigrants. In 2008, the country of Ecuador opened a consulate in New Haven to serve the large Ecuadorean immigrant population in the area. It is the first foreign mission to open in New Haven since Italy opened a consulate (now closed) in the city in 1910.
In April 2009, the United States Supreme Court agreed to hear a suit over reverse discrimination brought by 20 white and Hispanic firefighters against the city. The suit involved the 2003 promotion test for the New Haven Fire Department. After the tests were scored, no blacks scored high enough to qualify for consideration for promotion, so the city announced that no one would be promoted. On 29 June 2009, the United States Supreme Court ruled in favor of the firefighters, agreeing that they were improperly denied promotion because of their race.Williams, Joseph (2009-06-30). Supreme Court rules in favor of Conn. firefighters. The Boston Globe. Retrieved on 2009-07-06 from Boston.com, "Supreme court rules in favor of conn firefighters" The case, Ricci v. DeStefano, became highly publicized and brought national attention to New Haven politics due to the involvement of then-Supreme Court nominee (and Yale Law School graduate) Sonia Sotomayor in a lower court decision.
Garry Trudeau, creator of the political comic strip Doonesbury, attended Yale University. There he met fellow student and later Green Party candidate for Congress Charles Pillsbury, a long-time New Haven resident for whom Trudeau's comic strip is named; during his college years, Pillsbury was known by the nickname "The Doones". A theory of international law that argues for a sociological normative approach to jurisprudence is named the New Haven Approach, after the city. Connecticut US Senator Richard Blumenthal is a Yale graduate, as is former Connecticut US Senator Joe Lieberman, who was a New Haven resident for many years before moving back to his hometown of Stamford.Op-Ed: Joe Lieberman Supports McCain, Becomes an Enigma to Democrats. Digitaljournal.com (2008-07-20). Retrieved on 2013-07-15.
Crime
Crime increased in the 1990s, with New Haven having one of the ten highest violent crime rates per capita in the United States. In the late 1990s New Haven's crime began to stabilize. The city, adopting a policy of community policing, saw crime rates drop during the 2000s.
Violent crime levels vary dramatically between New Haven's neighborhoods, with some areas having crime rates in line with the state of Connecticut average, and others having extremely high rates of crime. A 2011 New Haven Health Department report identifies these issues in greater detail.
In 2010, New Haven ranked as the 18th most dangerous city in the United States (albeit below the 200.00 safety benchmark for the second year in a row). However, according to a different analysis conducted by the "24/7 Wall Street Blog", in 2011 New Haven had risen to become the fourth most dangerous city in the United States, and it was widely cited in the press as such.
However, an analysis by the Regional Data Cooperative for Greater New Haven, Inc., has shown that, due to issues of comparative denominators and other factors, such municipality-based rankings can be inaccurate. For example, two cities of identical population can cover widely differing land areas, making such comparisons misleading. The research organization called for comparisons based on neighborhoods, blocks, or standard methodologies (similar to those used by Brookings, DiversityData, and other established institutions), not on municipalities.
Education
Colleges and universities
New Haven is a notable center for higher education. Yale University, at the heart of downtown, is one of the city's best known features and its largest employer. New Haven is also home to Southern Connecticut State University, part of the Connecticut State University System, and Albertus Magnus College, a private institution. Gateway Community College, formerly located in the Long Wharf district, consolidated into a new state-of-the-art campus downtown (on the site of the old Macy's building) that opened for the Fall 2012 semester.Gateway Community College waiting for its new start. WTNH.com (2009-03-31). Retrieved on 2013-07-15. Bellmore, Michael. (2013-05-23) Much to look forward to for Gateway Community College grads- The New Haven Register – Serving New Haven, Connecticut. Nhregister.com. Retrieved on 2013-07-15.
There are several institutions immediately outside of New Haven, as well. Quinnipiac University and the Paier College of Art are located just to the north, in the town of Hamden. The University of New Haven is located not in New Haven but in neighboring West Haven.
thumb|The 1911 student body of the Hopkins School, the fifth-oldest educational institution in the United States
Primary and secondary schools
New Haven Public Schools is the school district serving the city. Wilbur Cross High School and Hillhouse High School are New Haven's two largest public secondary schools.
Hopkins School, a private school, was founded in 1660 and is the fifth-oldest educational institution in the United States. New Haven is home to a number of other private schools as well as public magnet schools, including Metropolitan Business Academy, High School in the Community, Hill Regional Career High School, Co-op High School, New Haven Academy, ACES Educational Center for the Arts, the Foote School and the Sound School, all of which draw students from New Haven and suburban towns. New Haven is also home to two Achievement First charter schools, Amistad Academy and Elm City College Prep, and to Common Ground, an environmental charter school.
New Haven Promise
The city is home to New Haven Promise, a scholarship funded by Yale University for students who meet the requirements. Students must be enrolled in a public high school (charters included) for four years, be a resident of the city during that time, carry a 3.0 cumulative grade-point average, have a 90-percent attendance rate and perform 40 hours of service to the city.http://newhavenpromise.org/college-affordability-resource-center/ The initiative was launched in 2010 and there are currently more than 500 Scholars enrolled in qualifying Connecticut colleges and universities. There are more than 60 cities in the country that have a Promise-type program for their students.http://citiesofpromise.com/promise-programs-listed-by-state/
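For illustration only, the published criteria summarized above can be expressed as a simple eligibility check. The field names below are hypothetical, and this sketch is not an official New Haven Promise tool.

# Illustrative sketch of the four published New Haven Promise requirements.
from dataclasses import dataclass

@dataclass
class Student:
    years_enrolled_and_resident: int  # years in a city public/charter high school while a New Haven resident
    cumulative_gpa: float
    attendance_rate: float            # expressed as a fraction, 0.0 to 1.0
    service_hours: float

def meets_promise_criteria(s: Student) -> bool:
    """Return True only if all four requirements described above are satisfied."""
    return (
        s.years_enrolled_and_resident >= 4
        and s.cumulative_gpa >= 3.0
        and s.attendance_rate >= 0.90
        and s.service_hours >= 40
    )

if __name__ == "__main__":
    print(meets_promise_criteria(Student(4, 3.2, 0.93, 45)))  # True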
Culture
Cuisine
Livability.com named New Haven as the Best Foodie City in the country in 2014. There are 56 Zagat-rated restaurants in New Haven, the most in Connecticut and the third most in New England (after Boston and Cambridge). More than 120 restaurants are located within two blocks of the New Haven Green. The city is home to an eclectic mix of ethnic restaurants and small markets specializing in various foreign foods.Travel News You Can Use – Spotlight on New Haven, CT: A College Town Vacation. Petergreenberg.com (2010-03-02). Retrieved on 2013-07-15. Represented cuisines include Malaysian, Ethiopian, Spanish, Belgian, French, Greek, Latin American, Mexican, Italian, Thai, Chinese, Japanese, Vietnamese, Korean, Indian, Jamaican, Cuban, Peruvian, Syrian/Lebanese, and Turkish.
thumb| White clam pie from Pepe's, the classic New Haven-style pizza
New Haven's greatest culinary claim to fame may be its pizza, which has been claimed to be among the best in the country,25 best pizzas around the country – today > food – TODAY.com. Today.msnbc.msn.com (2009-05-22). Retrieved on 2013-07-15.Some Say New Haven Has America's Best Pizza. The Paupered Chef (2007-09-13). Retrieved on 2013-07-15.New Haven Pizza Wars. Real American Stories. Retrieved on 2013-07-15.15 Best Pizzas in America. Gridskipper (2006-10-27). Retrieved on 2013-08-02. or even in the world.Best Pizzas From Around The World | Luxury Travel Nightlife & Restaurant Reviews. Journeypod.wordpress.com (2009-04-18). Retrieved on 2013-07-15. New Haven-style pizza, called "apizza" (pronounced , in the original Italian dialect), made its debut at the iconic Frank Pepe Pizzeria Napoletana (known as Pepe's) in 1925."American Eats: Pizza", The History Channel, 29 June 2006 Apizza is baked in coal- or wood-fired brick ovens, and is notable for its thin crust. Apizza may be red (with a tomato-based sauce) or white (with a sauce of garlic and olive oil), and pies ordered "plain" are made without the otherwise customary mozzarella cheese (originally smoked mozzarella, known as "scamorza" in Italian). A white clam pie is a well-known specialty of the restaurants on Wooster Street in the Little Italy section of New Haven, including Pepe's and Sally's Apizza (which opened in 1938). Modern Apizza on State Street, which opened in 1934, is also well-known.Taste tests prove that brand does matter. Yale Daily News (2008-11-06). Retrieved on 2013-07-15.
thumb|left| Louis' Lunch, where the hamburger was reputedly invented in 1900
A second New Haven gastronomical claim to fame is Louis' Lunch, which is located in a small brick building on Crown Street and has been serving fast food since 1895.Price & Lee's New Haven (New Haven County, Conn.) City Directory, 1899, page 375 Though fiercely debated, the restaurant's founder Louis Lassen is credited by the Library of Congress with inventing the hamburger and steak sandwich.Library of Congress retrieved on 2010-04-22Local Legacies American Folklife Center retrieved on 2009-05-04 Louis' Lunch broils hamburgers, steak sandwiches and hot dogs vertically in original antique 1898 cast iron stoves using gridirons, patented by local resident Luigi Pieragostini in 1939, that hold the meat in place while it cooks.
New Haven is home to Miya's Sushi, the first sustainable sushi restaurant in the world. Miya's offers the largest vegetarian sushi menu in the world.
During weekday lunchtime, over 150 lunch carts and food trucks from neighborhood restaurants cater to different student populations throughout Yale's campus.Burritos, Bubble Tea and Burgers | YDN Magazine. Yaledailynews.com (2004-11-17). Retrieved on 2013-07-15. The carts cluster at three main points: by Yale – New Haven Hospital in the center of the Hospital Green (Cedar and York streets), by Yale's Trumbull College (Elm and York streets), and on the intersection of Prospect and Sachem streets by the Yale School of Management.Spiegel, Jan Ellen (2010-4-21). "From Common Food Carts, Exotic Tastes" The New York Times. Retrieved 2010-07-18. Popular farmers' markets, managed by the local non-profit CitySeed, set up shop weekly in several neighborhoods, including Westville/Edgewood Park, Fair Haven, Upper State Street, Wooster Square, and Downtown/New Haven Green.
A large grocery store, the Elm City Market, opened at 360 State Street in New Haven in early fall 2011, serving local produce and groceries to the community. Originally, the market was a member-owned co-op,Elm City Market. Elmcitymarket.coop. Retrieved on 2013-07-15. but debt defaults in August 2014 forced a sale of the business. It is now an employee-owned business; the co-op's previous owners received no equity in the new business.Elm City Market Sold. Yale Daily News. Retrieved on 2016-11-06
In the past several years, two separate Downtown food tour companies have started offering popular restaurant tours on weekends.
Taste of New Haven Tours offers several different weekly restaurant/bar tours and a popular pizza, bike, and pints tour. Culinary Walking Tours offers monthly restaurant tours and sponsors an annual Elm City Iron Chef competition.
Theatre and film
The city hosts numerous theatres and production houses, including the Yale Repertory Theatre, the Long Wharf Theatre, and the Shubert Theatre. There is also theatre activity from the Yale School of Drama, which works through the Yale University Theatre and the student-run Yale Cabaret. Southern Connecticut State University hosts the Lyman Center for the Performing Arts. The shuttered Palace Theatre (opposite the Shubert Theatre) is being renovated and will reopen as the College Street Music Hall in May 2015. Smaller theatres include the Little Theater on Lincoln Street. Cooperative Arts and Humanities High School also boasts a state-of-the-art theatre on College Street. The theatre is used for student productions and also serves as the home of weekly services for a local non-denominational church, the City Church New Haven."Weekly Gatherings". City Church New Haven. Retrieved March 14, 2012.
The Shubert Theatre once premiered many major theatrical productions before their Broadway debuts. Productions that premiered at the Shubert include Oklahoma! (which was also written in New HavenThe Taft Apartment Building—New Haven, Connecticut. Morganreed.com. Retrieved on 2013-07-15.), Carousel, South Pacific, My Fair Lady, The King and I, and The Sound of Music, as well as the Tennessee Williams play A Streetcar Named Desire.
Bow Tie Cinemas owns and operates the Criterion Cinemas, the first new movie theater to open in New Haven in over 30 years and the first luxury movie complex in the city's history. The Criterion has seven screens and opened in November 2004, showing a mix of upscale first-run commercial and independent films.Upscale movie theater opens doors downtown. Yale Daily News (2004-11-08). Retrieved on 2013-07-15.
Museums
right|thumb|The historic Peabody Museum of Natural History
New Haven has a variety of museums, many of them associated with Yale. The Beinecke Rare Book and Manuscript Library features an original copy of the Gutenberg Bible. There is also the Connecticut Children's Museum; the Knights of Columbus museum near that organization's world headquarters; the Peabody Museum of Natural History; the Yale University Collection of Musical Instruments; the Eli Whitney Museum (across the town line in Hamden, Connecticut, on Whitney Avenue); the Yale Center for British Art, which houses the largest collection of British art outside the U.K.,YCBA Home Page | britishart.yale.edu. Ycba.yale.edu. Retrieved on 2013-07-15. and the Yale University Art Gallery, the nation's oldest college art museum. New Haven is also home to the New Haven Museum and Historical Society on Whitney Avenue, which has a library of many primary source treasures dating from Colonial times to the present.
Artspace on Orange Street is one of several contemporary art galleries around the city, showcasing the work of local, national, and international artists. Others include City Gallery and A. Leaf Gallery in the downtown area. Westville galleries include Kehler Liddell, Jennifer Jane Gallery, and The Hungry Eye. The Erector Square complex in the Fair Haven neighborhood houses the Parachute Factory gallery along with numerous artist studios, and the complex serves as an active destination during City-Wide Open Studios held yearly in October.
New Haven is the home port of a life-size replica of the historical Freedom Schooner Amistad, which is open for tours at Long Wharf pier at certain times during the summer. Also at Long Wharf pier is the Quinnipiack schooner, offering sailing cruises of the harbor area throughout the summer. The Quinnipiack also functions as a floating classroom for hundreds of local students.
Music
thumb|The band Kings of Leon performs at Toad's Place
The New Haven Green is the site of many free music concerts, especially during the summer months. These have included the New Haven Symphony Orchestra, the Free Concerts on the Green in July, and the New Haven Jazz Festival in August. The Jazz Festival, which began in 1982, was one of the longest-running free outdoor festivals in the U.S. until it was canceled for 2007. Headliners such as The Breakfast, Dave Brubeck, Ray Charles and Celia Cruz have historically drawn 30,000 to 50,000 fans, filling up the New Haven Green to capacity. The New Haven Jazz Festival was revived in 2008 and has been sponsored since by Jazz Haven.Jazz Haven, Inc. Jazzhaven.org. Retrieved on 2013-07-15.
New Haven is home to the concert venue Toad's Place, and a new venue, College Street Music Hall. The city has retained an alternative art and music underground that has helped to influence post-punk era music movements such as indie, college rock and underground hip-hop. Other local venues include Cafe Nine, BAR, Pacific Standard Tavern, Stella Blues, Three Sheets, Firehouse 12, and Rudy's.
The Yale School of Music contributes to the city's music scene by offering hundreds of free concerts throughout the year at venues in and around the Yale campus. Large performances are held in the 2,700-seat Woolsey Hall auditorium, which contains one of the world's largest symphonic organs, while chamber music and recitals are performed in Sprague Hall.
Hardcore band Hatebreed are from Wallingford, but got their start in New Haven under the name Jasta 14. The band Miracle Legion formed in New Haven in 1983.
Festivals
In addition to the Jazz Festival (described above), New Haven serves as the home city of the annual International Festival of Arts and Ideas. New Haven's Saint Patrick's Day parade, which began in 1842, is New England's oldest St. Patrick's Day parade and draws the largest crowds of any one-day spectator event in Connecticut. The St. Andrew the Apostle Italian Festival has taken place in the historic Wooster Square neighborhood every year since 1900. Other parishes in the city celebrate the Feast of Saint Anthony of Padua and a carnival in honor of St. Bernadette Soubirous.Italian Festival in Wooster Square | connecticut style. wtnh.com (2009-06-24). Retrieved on 2013-07-15. New Haven celebrates Powder House Day every April on the New Haven Green to commemorate the city's entrance into the Revolutionary War. The annual Wooster Square Cherry Blossom Festival commemorates the 1973 planting of 72 Yoshino Japanese Cherry Blossom trees by the New Haven Historic Commission in collaboration with the New Haven Parks Department and residents of the neighborhood. The Festival now draws well over 5,000 visitors. The Film Fest New Haven has been held annually since 1995.
Nightlife
In the past decade downtown has seen an influx of new restaurants, bars, and nightclubs. Large crowds are drawn to the Crown Street area downtown on weekends where many of the restaurants and bars are located. Crown Street between State and High streets has dozens of establishments, as do nearby Temple and College streets. Away from downtown, Upper State Street has a number of restaurants and bars popular with local residents and weekend visitors.
Newspapers and media
New Haven is served by the daily New Haven Register, the weekly "alternative" New Haven Advocate (which is run by Tribune, the corporation owning the Hartford Courant), the online daily New Haven Independent, and the monthly Grand News Community Newspaper. Downtown New Haven is covered by an in-depth civic news forum, Design New Haven. The Register also backs PLAY magazine, a weekly entertainment publication. The city is also served by several student-run papers, including the Yale Daily News, the weekly Yale Herald and a humor tabloid, Rumpus Magazine.
WTNH Channel 8, the ABC affiliate for Connecticut, WCTX Channel 59, the MyNetworkTV affiliate for the state, and Connecticut Public Television station WEDY channel 65, a PBS affiliate, broadcast from New Haven. New York City news and sports stations also broadcast to New Haven County.
Sports and athletics
right|thumb|Yale Bowl during "The Game" in 2001
New Haven has a history of professional sports franchises dating back to the 19th centuryThe 1875 New Haven Elm Citys. Retrosheet.org. Retrieved on 2013-08-02. and has been the home to professional baseball, basketball, football, hockey, and soccer teams—including the New York Giants of the National Football League from 1973 to 1974, who played at the Yale Bowl. Throughout the second half of the 20th century, New Haven consistently had minor league hockey and baseball teams, which played at the New Haven Arena (built in 1926, demolished in 1972), New Haven Coliseum (1972–2002), and Yale Field (1928–present).
When John DeStefano, Jr., became mayor of New Haven in 1995, he outlined a plan to transform the city into a major cultural and arts center in the Northeast, which involved investments in programs and projects other than sports franchises. As nearby Bridgeport built new sports facilities, the brutalist New Haven Coliseum rapidly deteriorated. Believing the upkeep on the venue to be a drain of tax dollars, the DeStefano administration closed the Coliseum in 2002; it was demolished in 2007. New Haven's last professional sports team, the New Haven County Cutters, left in 2009. The DeStefano administration did, however, see the construction of the New Haven Athletic Center in 1998, an indoor athletic facility with a seating capacity of over 3,000. The NHAC, built adjacent to Hillhouse High School, is used for New Haven public schools athletics, as well as large-scale area and state sporting events; it is the largest high school indoor sports complex in the state.Orzechowski, Brett. (2006-07-23) Nightmare in the Elm City- The New Haven Register – Serving New Haven, Connecticut. Nhregister.com. Retrieved on 2013-08-02. Hillhouse High School's Indoor Track Facility. Cga.ct.gov (2007-03-28). Retrieved on 2013-08-02.
New Haven was the host of the 1995 Special Olympics World Summer Games; then-President Bill Clinton spoke at the opening ceremonies.Remarks: Opening Ceremonies of the Special Olympics World Games in New Haven, Connecticut. Eunice Kennedy Shriver (1995-07-01). Retrieved on 2013-08-02. The city is home to the Pilot Pen International tennis event, which takes place every August at the Connecticut Tennis Center, one of the largest tennis venues in the world.Yale University Bulldogs, Official Athletic Site Every other year New Haven hosts "The Game" between Yale and Harvard, the country's second-oldest college football rivalry. Numerous road races take place in New Haven, including the USA 20K Championship during the New Haven Road Race.Stratton Faxon. New Haven Roadrace. Retrieved on 2013-08-02.
Greater New Haven is home to a number of college sports teams. The Yale Bulldogs play Division I college sports, as do the Quinnipiac Bobcats in neighboring Hamden. Division II athletics are played by Southern Connecticut State University and the University of New Haven (actually located in neighboring West Haven), while Albertus Magnus College athletes perform at the Division III level.
New Haven is home to many New York Yankees fans due to the proximity of New York City.
Walter Camp, deemed the "father of American football," was a New Havener.
The New Haven Warriors rugby league team play in the AMNRL. They have a large number of Pacific Islanders playing for them. Their field is located at the West Haven High School's Ken Strong Stadium. They won the 2008 AMNRL Grand Final.
Structures
Architecture
thumb|A view of the buildings around Yale University in New Haven, with its distinctive architecture
New Haven has many architectural landmarks dating from every important time period and architectural style in American history. The city has been home to a number of architects and architectural firms that have left their mark on the city including Ithiel Town and Henry Austin in the 19th century and Cesar Pelli, Warren Platner, Kevin Roche, Herbert Newman and Barry Svigals in the 20th. The Yale School of Architecture has fostered this important component of the city's economy. Cass Gilbert, of the Beaux-Arts school, designed New Haven's Union Station and the New Haven Free Public Library and was also commissioned for a City Beautiful plan in 1919. Frank Lloyd Wright, Marcel Breuer, Alexander Jackson Davis, Philip C. Johnson, Gordon Bunshaft, Louis Kahn, James Gamble Rogers, Frank Gehry, Charles Willard Moore, Stefan Behnisch, James Polshek, Paul Rudolph, Eero Saarinen and Robert Venturi all have designed buildings in New Haven. Yale's 1950s-era Ingalls Rink, designed by Eero Saarinen, was included on the America's Favorite Architecture list created in 2007. The American Institute of Architects. Favoritearchitecture.org. Retrieved on 2013-07-15.
Many of the city's neighborhoods are well-preserved as walkable "museums" of 19th- and 20th-century American architecture, particularly by the New Haven Green, Hillhouse Avenue and other residential sections close to Downtown New Haven. Overall, a large proportion of the city's land area falls within National Register of Historic Places (NRHP) historic districts. One of the best sources on local architecture is New Haven: Architecture and Urban Design, by Elizabeth Mills Brown.
The five tallest buildings in New Haven are:
Connecticut Financial Center: 383 ft (117 m), 26 floors
360 State Street: 338 ft (103 m), 32 floors
Knights of Columbus Building: 321 ft (98 m), 23 floors
Kline Biology Tower: 250 ft (76 m), 16 floors
Crown Towers: 233 ft (71 m), 22 floors
Historic points of interest
thumb|The Graves-Dwight mansion on Hillhouse Avenue
Many historical sites exist throughout the city, including 59 properties listed on the National Register of Historic Places. Of these, nine are among the 60 U.S. National Historic Landmarks in Connecticut. The New Haven Green, one of the National Historic Landmarks, was formed in 1638, and is home to three 19th-century churches. Below one of the churches (referred to as the Center Church on-the-Green) lies a 17th-century crypt, which is open to visitors. Some of the more famous burials include the first wife of Benedict Arnold and the aunt and grandmother of President Rutherford B. Hayes; Hayes visited the crypt while President in 1880.Center Church on-the-Green – The Crypt. Newhavencenterchurch.org. Retrieved on 2013-08-02. The Old Campus of Yale University is located next to the Green, and includes Connecticut Hall, Yale's oldest building and a National Historic Landmark. The Hillhouse Avenue area, which is listed on the National Register of Historic Places and is also a part of Yale's campus, has been called a walkable museum, due to its 19th-century mansions and street scape; Charles Dickens is said to have called Hillhouse Avenue "the most beautiful street in America" when visiting the city in 1868.
thumb|left| The restored Black Rock Fort
In 1660, Edward Whalley (a cousin and friend of Oliver Cromwell) and William Goffe, two English Civil War generals who signed the death warrant of King Charles I, hid in a rock formation in New Haven after having fled England upon the restoration of Charles II to the English throne.http://www.jstor.org/pss/20084256 They were later joined by a third regicide, John Dixwell. The rock formation, which is now a part of West Rock Park, is known as Judges' Cave, and the path leading to the cave is called the Regicides Trail.
After the American Revolutionary War broke out in 1775, the Connecticut colonial government ordered the construction of Black Rock Fort (to be built on top of an older 17th-century fort) to protect the port of New Haven. In 1779, during the Battle of New Haven, British soldiers captured Black Rock Fort and burned the barracks to the ground. The fort was reconstructed in 1807 by the federal government (on orders from the Thomas Jefferson administration), and rechristened Fort Nathan Hale, after the Revolutionary War hero who had lived in New Haven. The cannons of Fort Nathan Hale were successful in defying British warships during the War of 1812. In 1863, during the Civil War, a second Fort Hale was built next to the original, complete with bomb-resistant bunkers and a moat, to defend the city should a Southern raid against New Haven be launched. The United States Congress deeded the site to the state in 1921, and all three versions of the fort have been restored. The site is now listed on the National Register of Historic Places and receives thousands of visitors each year. Connecticut Forts. Northamericanforts.com (2013-04-01). Retrieved on 2013-08-02.
thumb|right|The 19th-century Five Mile Point Lighthouse at Lighthouse Point Park
Grove Street Cemetery, a National Historic Landmark which lies adjacent to Yale's campus, contains the graves of Roger Sherman, Eli Whitney, Noah Webster, Josiah Willard Gibbs, Charles Goodyear and Walter Camp, among other notable burials.Grove Street Cemetery, New Haven, Connecticut, USA. Grovestreetcemetery.org. Retrieved on 2013-08-02. The cemetery is known for its grand Egyptian Revival gateway. The Union League Club of New Haven building, located on Chapel Street, is notable not only as a historic Beaux-Arts building but also for standing on the site where Roger Sherman's home once stood; George Washington is known to have stayed at the Sherman residence while President in 1789 (one of three times Washington visited New Haven during his lifetime).Historic Buildings of Connecticut » Blog Archive » Union League Club of New Haven (1902). Historicbuildingsct.com (2010-01-26). Retrieved on 2013-08-02.
Two sites pay homage to the time President and Chief Justice William Howard Taft lived in the city, as both a student and later Professor at Yale: a plaque on Prospect Street marks the site where Taft's home formerly stood,Stannard, Ed. (2009-02-08) Photography exhibit reveals 'lost New Haven'- The New Haven Register – Serving New Haven, Connecticut. The New Haven Register. Retrieved on 2013-08-02. and downtown's Taft Apartment Building (formerly the Taft Hotel) bears the name of the former President who resided in the building for eight years before becoming Chief Justice of the United States.
Lighthouse Point Park, a public beach run by the city, was a popular tourist destination during the Roaring Twenties, attracting luminaries of the period such as Babe Ruth and Ty Cobb.Welcome to Department of Parks, Recreation and Trees. Cityofnewhaven.com. Retrieved on 2013-08-02. The park remains popular among New Haveners, and is home to the Five Mile Point Lighthouse, constructed in 1847, and the Lighthouse Point Carousel, constructed in 1916.Welcome to Department of Parks, Recreation and Trees. Cityofnewhaven.com. Retrieved on 2013-08-02. Five Mile Point Light was decommissioned in 1877 following the construction of Southwest Ledge Light at the entrance of the harbor, which remains in service to this day. Both of the lighthouses and the carousel are listed on the National Register of Historic Places.
Other historic sites in the city include the Soldiers and Sailors Monument, which stands at the summit of East Rock, the Marsh Botanical Garden, Wooster Square, Dwight Street, Louis' Lunch, and the Farmington Canal, all of which date back to the 19th century. Other historic parks besides the Green include Edgerton Park, Edgewood Park, and East Rock Park, each of which is included on the National Register of Historic Places.
Transportation
Rail
New Haven is connected to New York City by commuter, regional, and intercity rail, provided by Metro-North Railroad (commuter rail), Shore Line East (commuter rail), and Amtrak (regional and intercity rail), allowing New Haven residents to commute to work in New York City (just under two hours by train).
The city's main railroad station is the historic Beaux-Arts Union Station, which serves Metro-North trains to New York and Shore Line East commuter trains to New London. An additional station, State Street Station, was opened in 2002 to provide Shore Line East and a few peak-hour Metro-North passengers easier access to and from Downtown.
left|thumb|Amtrak railroad service at New Haven
Union Station is further served by four Amtrak lines: the Northeast Regional and the high-speed Acela Express provide service to New York, Washington, D.C. and Boston, and rank as the first and second busiest routes in the country; the New Haven–Springfield Line provides service to Hartford and Springfield, Massachusetts; and the Vermonter provides service to both Washington, D.C., and Vermont, from the Canada–US border. Amtrak also codeshares with United Airlines, via Newark Airport (EWR), for travel to any airport served by United Airlines on trips originating from or terminating at Union Station.
Metro-North has the third highest daily ridership among commuter rails in the country, with an average weekday ridership of 276,000 in 2009. Of the 276,000 Metro-North riders, 112,000 rode the New Haven Line each day, which would rank the New Haven Line seventh in the country in daily ridership if it were a stand-alone commuter rail system. Shore Line East ranked nineteenth in the country, with an average daily ridership of 2,000.http://www.apta.com/resources/statistics/Documents/Ridership/2009_q3_ridership_APTA.pdf
Additionally, the Connecticut Department of Transportation plans to add a new commuter service called the Hartford Line, in collaboration with Amtrak and the federal government, that will run between New Haven and Springfield, Massachusetts, with a terminus at Union Station in Downtown New Haven. As of late 2015, funding had been secured and the service was scheduled to begin operation in early 2018.
Bus
thumb|A New Haven Division bus in Downtown New Haven, near the Green
The New Haven Division of Connecticut Transit (CT Transit), the state's bus system, is the second largest division in the state with 24 routes. All routes originate from the New Haven Green, making it the central transfer hub of the city. Service is provided to 19 different municipalities throughout Greater New Haven.
CT Transit's Union Station Shuttle provides free service from Union Station to the New Haven Green and several New Haven parking garages. Peter Pan and Greyhound bus lines have scheduled stops at Union Station, and connections downtown can be made via the Union Station Shuttle. A private company operates the New Haven/Hartford Express which provides commuter bus service to Hartford. The Yale University Shuttle provides free transportation around New Haven for Yale students, faculty, and staff.
The New Haven Division buses follow routes that had originally been covered by trolley service. Horse-drawn streetcars began operating in New Haven in the 1860s, and by the mid-1890s all the lines had become electric. In the 1920s and 1930s, some of the trolley lines began to be replaced by bus lines, with the last trolley route converted to bus in 1948. The City of New Haven is in the very early stages of considering the restoration of streetcar (light-rail) service, which has been absent since the postwar period.TranSystems/Stone Consulting & Design, "New Haven Streetcar Assessment", April 2008.
Bicycle
The Farmington Canal Trail is a rail trail that will eventually run continuously from downtown New Haven to Northampton, Massachusetts. The scenic trail follows the path of the historic New Haven and Northampton Company and the Farmington Canal. Currently, there is a continuous stretch of the trail from downtown, through Hamden and into Cheshire, making bicycle commuting between New Haven and those suburbs possible. The trail is part of the East Coast Greenway, a proposed bike path that would link every major city on the East Coast from Florida to Maine.
In 2004, the first bike lane in the city was added to Orange Street, connecting East Rock Park and the East Rock neighborhood to downtown. Since then, bike lanes have also been added to sections of Howard Avenue, Elm Street, Dixwell Avenue, Water Street, Clinton Avenue and State Street. The city has created recommended bike routes for getting around New Haven, including use of the Canal Trail and the Orange Street lane. As of the end of 2012, bicycle lanes have also been added in both directions on Dixwell Avenue along most of the street from downtown to the Hamden town line, as well as along Howard Avenue from Yale New Haven Hospital to City Point.
The city has plans to create two additional bike lanes connecting Union Station with downtown, and the Westville neighborhood with downtown. The city has added dozens of covered bike parking spots at Union Station, in order to facilitate more bike commuting to the station.
Roads
thumb|The Wilbur Cross Parkway passes through West Rock via Heroes Tunnel, the only highway tunnel in Connecticut.
New Haven lies at the intersection of Interstate 95 on the coast—which provides access southward and westward to the western coast of Connecticut and to New York City, and eastward to the eastern Connecticut shoreline, Rhode Island, and eastern Massachusetts—and Interstate 91, which leads northward to the interior of Massachusetts and Vermont and the Canada–US border. I-95 is infamous for traffic jams that worsen with proximity to New York City; on the east side of New Haven it passes over the Quinnipiac River via the Pearl Harbor Memorial Bridge, known locally as the "Q Bridge", which often presents a major bottleneck to traffic. I-91, however, is relatively less congested, except at the intersection with I-95 during peak travel times.
The Oak Street Connector (Connecticut Route 34) intersects I-91 at exit 1, just south of the I-95/I-91 interchange, and runs northwest for a few blocks as an expressway spur into downtown before emptying onto surface roads. The Wilbur Cross Parkway (Connecticut Route 15) runs parallel to I-95 west of New Haven, turning northwards as it nears the city and then running northwards parallel to I-91 through the outer rim of New Haven and Hamden, offering an alternative to the I-95/I-91 journey (restricted to non-commercial vehicles). Route 15 in New Haven is the site of the only highway tunnel in the state (officially designated as Heroes Tunnel), running through West Rock, home to West Rock Park and the Three Judges Cave.
The city also has several major surface arteries. U.S. Route 1 (Columbus Avenue, Union Avenue, Water Street, Forbes Avenue) runs in an east-west direction south of downtown serving Union Station and leading out of the city to Milford, West Haven, East Haven and Branford. The main road from downtown heading northwest is Whalley Avenue (partly signed as Route 10 and Route 63) leading to Westville and Woodbridge. Heading north towards Hamden, there are two major thoroughfares, Dixwell Avenue and Whitney Avenue. To the northeast are Middletown Avenue (Route 17), which leads to the Montowese section of North Haven, and Foxon Boulevard (Route 80), which leads to the Foxon section of East Haven and to the town of North Branford. To the west is Route 34, which leads to the city of Derby. Other major intracity arteries are Ella Grasso Boulevard (Route 10) west of downtown, and College Street, Temple Street, Church Street, Elm Street, and Grove Street in the downtown area.
Traffic safety is a major concern for drivers, pedestrians and cyclists in New Haven. In addition to many traffic-related fatalities in the city each year, since 2005, over a dozen Yale students, staff and faculty have been killed or injured in traffic collisions on or near the campus.
Airport
Tweed New Haven Regional Airport is located within the city limits east of the business district, and provides daily service to Philadelphia through American Eagle. Bus service between Downtown New Haven and Tweed is available via the CT Transit New Haven Division Bus "G". Taxi service and rental cars (including service by Hertz, Avis, Enterprise and Budget) are available at the airport. The drive from Tweed to downtown takes less than 15 minutes.
Seaport
thumb|250px|Port of New Haven
New Haven Harbor is home to the Port of New Haven, a deep-water seaport with three berths capable of hosting vessels and barges as well as the facilities required to handle break bulk cargo. The port has the capacity to load 200 trucks a day from the ground or via loading docks. Rail transportation access is available, with a private switch engine for yard movements and private siding for loading and unloading. Both inside and outside storage are available at the site. Five shore cranes with a 250-ton capacity and 26 forklifts, each with a 26-ton capacity, are also available.
On June 17, 2013, the city commissioned the Nathan Hale, a port security vessel capable of serving in search and rescue, firefighting, and constabulary roles.
Infrastructure
thumb|Yale's Sterling Memorial Library, Interior.
Hospitals and medicine
The New Haven area supports several medical facilities that are considered some of the best hospitals in the country. There are two major medical centers downtown: Yale – New Haven Hospital has four pavilions, including the Yale – New Haven Children's Hospital and the Smilow Cancer Hospital; the Hospital of Saint Raphael is several blocks north, and touts its excellent cardiac emergency care program. Smaller downtown health facilities include the Temple Medical Center on Temple Street, the Connecticut Mental Health Center across Park Street from Y-NHH, and the Hill Health Center, which serves the working-class Hill neighborhood. A large Veterans Affairs hospital is located in neighboring West Haven. To the west in Milford is Milford Hospital, and to the north in Meriden is the MidState Medical Center.
Yale and New Haven are working to build a medical and biotechnology research hub in the city and Greater New Haven region, and are succeeding to some extent. The city, state and Yale together run Science Park, a large site three blocks northwest of Yale's Science Hill campus. This multi-block site, approximately bordered by Mansfield Street, Division Street, and Shelton Avenue, is the former home of Winchester's and Olin Corporation's 45 large-scale factory buildings. Currently, sections of the site are large-scale parking lots or abandoned structures, but there is also a large remodeled and functioning area of buildings (leased primarily by a private developer) with numerous Yale employees, financial service and biotech companies.
thumb|Mansfield St. New Haven, home to the Marsh Botanical Garden and this abandoned building.
thumb|Marsh Botanical Garden XIV
A second biotechnology district is being planned for the median strip on Frontage Road, on land cleared for the never-built Route 34 extension. As of late 2009, a Pfizer drug-testing clinic, a medical laboratory building serving Yale – New Haven Hospital, and a mixed-use structure containing parking, housing and office space, have been constructed on this corridor. A former SNET telephone building at 300 George Street is being converted into lab space, and has been so far quite successful in attracting biotechnology and medical firms.
Power supply facilities
Electricity for New Haven is generated by a 448 MW oil and gas-fired generating station located on the shore at New Haven Harbor.The New Haven Harbor Generating Station In addition, Pennsylvania Power and Light (PPL) Inc. operates a 220 MW peaking natural gas turbine plant in nearby Wallingford.
Near New Haven there is the static inverter plant of the HVDC Cross Sound Cable. There are three PureCell Model 400 fuel cells placed in the city of New Haven—one at the New Haven Public Schools and newly constructed Roberto Clemente School, one at the mixed-use 360 State Street building, and one at City Hall. According to Giovanni Zinn of the city's Office of Sustainability, each fuel cell may save the city up to $1 million in energy costs over a decade. The fuel cells were provided by ClearEdge Power, formerly UTC Power.
In popular culture
thumb|left|Harrison Ford and Shia LaBeouf in 2007 filming Indiana Jones and the Kingdom of the Crystal Skull
Several recent movies have been filmed in New Haven, including Mona Lisa Smile (2003), with Julia Roberts, The Life Before Her Eyes (2007), with Uma Thurman, and Indiana Jones and the Kingdom of the Crystal Skull (2008) directed by Steven Spielberg and starring Harrison Ford, Cate Blanchett and Shia LaBeouf.NHregister.com The filming of Crystal Skull involved an extensive chase sequence through the streets of New Haven. Several downtown streets were closed to traffic and received a "makeover" to look like streets of 1957, when the film is set. 500 locals were cast as extras for the film. In Everybody's Fine (2009), Robert De Niro has a close encounter in what is supposed to be the Denver train station; the scene was filmed in New Haven's Union Station.
thumb|right|Union Station tunnel as seen in Everybody's Fine (2009)
Notable people
Sister cities
thumb|right|Five Mile Point Lighthouse in 1991
Taichung, Taiwan
Afula-Gilboa, Israel
Amalfi, Italy
Avignon, France
Freetown, Sierra Leone
Huế, Vietnam
León, Nicaragua
Some of these were selected because of historical connection—Freetown because of the Amistad trial. Others, such as Amalfi and Afula-Gilboa, reflect ethnic groups in New Haven.
In 1990, the United Nations named New Haven a "Peace Messenger City".
See also
Other articles about people and places in New Haven, CT
National Register of Historic Places listings in New Haven, Connecticut
New Haven Fire Department
New Haven Police Department
Coast Guard Station New Haven
References
Further reading
Leonard Bacon, Thirteen Historical Discourses (New Haven, 1839)
C. H. Hoadley (editor), Records of the Colony of New Haven, 1638–1665 (two volumes, Hartford, 1857–58)
J. W. Barber, History and Antiquities of New Haven (third edition, New Haven, 1870)
C. H. Levermore, Town and City Government of New Haven (Baltimore, 1886)
C. H. Levermore, Republic of New Haven: A History of Municipal Evolution (Baltimore, 1886)
E. S. Bartlett, Historical Sketches of New Haven (New Haven, 1897)
F. H. Cogswell, "New Haven" in L. P. Powell (editor), Historic Towns of New England (New York, 1898)
H. T. Blake, Chronicles of New Haven Green (New Haven, 1898)
E. E. Atwater, History of the Colony of New Haven (New edition, New Haven, 1902)
Robert A. Dahl, Who Governs? Democracy and Power in An American City (Yale University Press, New Haven, 1961)
William Lee Miller, The Fifteenth Ward and the Great Society (Houghton Mifflin/Riverside, 1966)
Douglas W. Rae, City: Urbanism and Its End (New Haven, 2003)
New Haven City Yearbooks
Michael Sletcher, New Haven: From Puritanism to the Age of Terrorism (Charleston, 2004)
Preston C. Maynard and Marjorie B. Noyes (editors), "Carriages and Clocks, Corsets and Locks: the Rise and Fall of an Industrial City—New Haven, Connecticut" (University Press of New England, 2005)
Mandi Isaacs Jackson, Model City Blues: Urban Space and Organized Resistance in New Haven (Temple University Press, 2008)
James Cersonsky, "Whose New Haven? Reversing the Slant of the Knowledge Economy" (Dissent, February 15, 2011)
Paul Bass, "New Hope for New Haven, Connecticut" (Nation, January 25, 2012)
External links
City of New Haven official website
City of New Haven Economic Development
New Haven Free Public Library
DataHaven, regional data cooperative for Greater New Haven
New Haven CT Guide
Historical New Haven Digital Collection
Tweed New Haven Regional Airport
Design New Haven
Category:Cities in Connecticut
Category:Cities in New Haven County, Connecticut
Category:Cities in the New York metropolitan area
Connecticut
Category:New England Puritanism
Category:Populated coastal places in Connecticut
Category:Populated places established in 1638
Category:Port cities and towns of the United States Atlantic coast
Category:University towns in the United States
Category:1638 establishments in Connecticut | 53,825 | 2017-01 |
Symbiosis | thumb|250px|right|In a symbiotic mutualistic relationship, the clownfish feeds on small invertebrates that could otherwise harm the sea anemone, and the fecal matter from the clownfish provides nutrients to the sea anemone. The clownfish is additionally protected from predators by the anemone's stinging cells, to which the clownfish is immune. The clownfish also emits a high-pitched sound that deters butterfly fish, which would otherwise eat the anemone.
Symbiosis (from Greek συμβίωσις "living together", from σύν "together" and βίωσις "living") is a close and often long-term interaction between two different biological species. In 1877, Albert Bernhard Frank used the word symbiosis (which previously had been used to describe people living together in a community) to describe the mutualistic relationship in lichens. In 1879, the German mycologist Heinrich Anton de Bary defined it as "the living together of unlike organisms."
The definition of symbiosis has varied among scientists. Some advocated that the term "symbiosis" should only refer to persistent mutualisms, while others thought it should apply to any type of persistent biological interaction (in other words mutualistic, commensalistic, or parasitic). After 130 years of debate, current biology and ecology textbooks now use the latter "de Bary" definition or an even broader definition (where symbiosis means all species interactions), and the restrictive definition (where symbiosis means mutualism only) is no longer used.
Some symbiotic relationships are obligatory, which means that one or both of the symbionts entirely depend on each other for survival. For example, in lichens, which consist of fungal and photosynthetic symbionts, the fungal partners cannot live on their own. The algal or cyanobacterial symbionts in lichens, such as Trentepohlia, can generally live independently, and their symbiosis is, therefore, facultative (optional).
Symbiotic relationships include those associations in which one organism lives on another (ectosymbiosis, such as mistletoe), or where one partner lives inside the other (endosymbiosis, such as lactobacilli and other bacteria in humans or Symbiodinium in corals). Symbiosis is also classified by physical attachment of the organisms; symbiosis in which the organisms have bodily union is called conjunctive symbiosis, and symbiosis in which they are not in union is called disjunctive symbiosis."symbiosis." Dorland's Illustrated Medical Dictionary. Philadelphia: Elsevier Health Sciences, 2007. Credo Reference. Web. 17 September 2012
Symbiosis is very often equated with mutualism.
Physical interaction
left|250px|thumb|Alder tree root nodule
Endosymbiosis is any symbiotic relationship in which one symbiont lives within the tissues of the other, either within the cells or extracellularly. Examples include diverse microbiomes, rhizobia, nitrogen-fixing bacteria that live in root nodules on legume roots; actinomycete nitrogen-fixing bacteria called Frankia, which live in alder root nodules; single-celled algae inside reef-building corals; and bacterial endosymbionts that provide essential nutrients to about 10%–15% of insects.
Ectosymbiosis, also referred to as exosymbiosis, is any symbiotic relationship in which the symbiont lives on the body surface of the host, including the inner surface of the digestive tract or the ducts of exocrine glands. Examples of this include ectoparasites such as lice, commensal ectosymbionts such as the barnacles that attach themselves to the jaw of baleen whales, and mutualist ectosymbionts such as cleaner fish.
Mutualism
Hermit crab, Calcinus laevimanus, with sea anemone.|thumb|300px
Mutualism or interspecies reciprocal altruism is a relationship between individuals of different species where both individuals benefit. In general, only lifelong interactions involving close physical and biochemical contact can properly be considered symbiotic. Mutualistic relationships may be either obligate for both species, obligate for one but facultative for the other, or facultative for both. Many biologists restrict the definition of symbiosis to close mutualist relationships.
thumb|Bryoliths document a mutualistic symbiosis between a hermit crab and encrusting bryozoans; Banc d'Arguin, Mauritania
A large percentage of herbivores have mutualistic gut flora that help them digest plant matter, which is more difficult to digest than animal prey. This gut flora is made up of cellulose-digesting protozoans or bacteria living in the herbivores' intestines."symbiosis." The Columbia Encyclopedia. New York: Columbia University Press, 2008. Credo Reference. Web. 17 September 2012. Coral reefs are the result of mutualisms between coral organisms and various types of algae that live inside them. Most land plants and land ecosystems rely on mutualisms between the plants, which fix carbon from the air, and mycorrhizal fungi, which help in extracting water and minerals from the ground.
An example of mutual symbiosis is the relationship between the ocellaris clownfish that dwell among the tentacles of Ritteri sea anemones. The territorial fish protects the anemone from anemone-eating fish, and in turn the stinging tentacles of the anemone protect the clownfish from its predators. A special mucus on the clownfish protects it from the stinging tentacles.
A further example is the goby fish, which sometimes lives together with a shrimp. The shrimp digs and cleans up a burrow in the sand in which both the shrimp and the goby fish live. The shrimp is almost blind, leaving it vulnerable to predators when outside its burrow. In case of danger the goby fish touches the shrimp with its tail to warn it. When that happens both the shrimp and goby fish quickly retreat into the burrow. Different species of gobies (Elacatinus spp.) also exhibit mutualistic behavior through cleaning up ectoparasites in other fish.
Another non-obligate symbiosis is known from encrusting bryozoans and hermit crabs that live in a close relationship. The bryozoan colony (Acanthodesia commensale) develops a circumrotatory growth and offers the crab (Pseudopagurus granulimanus) a helicospiral-tubular extension of its living chamber that initially was situated within a gastropod shell.
One of the most spectacular examples of obligate mutualism is between the siboglinid tube worms and symbiotic bacteria that live at hydrothermal vents and cold seeps. The worm has no digestive tract and is wholly reliant on its internal symbionts for nutrition. The bacteria oxidize either hydrogen sulfide or methane, which the host supplies to them. These worms were discovered in the late 1980s at the hydrothermal vents near the Galapagos Islands and have since been found at deep-sea hydrothermal vents and cold seeps in all of the world's oceans.
There are also many types of tropical and sub-tropical ants that have evolved very complex relationships with certain tree species.Piper, Ross (2007), Extraordinary Animals: An Encyclopedia of Curious and Unusual Animals, Greenwood Press.
Mutualism and endosymbiosis
In mutualistic symbioses, the host cell lacks some of the nutrients that the endosymbiont provides. As a result, the host favors the endosymbiont's growth within itself by producing specialized cells. These cells affect the genetic composition of the host in order to regulate the increasing population of endosymbionts and to ensure that these genetic changes are passed on to the offspring via vertical transmission (heredity).
Adaptation of the endosymbiont to the host's lifestyle leads to many changes in the endosymbiont, the foremost being a drastic reduction in its genome size. Many genes involved in metabolism, DNA repair and recombination are lost, while important genes participating in DNA-to-RNA transcription, protein translation and DNA/RNA replication are retained. That is, the decrease in genome size is due to the loss of protein-coding genes and not to a shortening of inter-genic regions or open reading frames (ORFs). Species that evolve in this way with reduced genomes therefore show an increased number of noticeable differences between them, reflecting changes in their evolutionary rates. Because the endosymbiotic bacteria of their insect hosts are passed on to the offspring strictly via vertical genetic transmission, the intracellular bacteria face many hurdles during the process, resulting in a decrease in effective population size compared with free-living bacteria. The inability of the endosymbiotic bacteria to restore their wild-type phenotype via recombination is called Muller's ratchet. Muller's ratchet, together with the smaller effective population sizes, leads to an accumulation of deleterious mutations in the non-essential genes of the intracellular bacteria. This may be due to a lack of selection pressure in the nutrient-rich environment of the host.
Commensalism
right|140px|thumb|Phoretic mites on a fly (Pseudolynchia canariensis).
Commensalism describes a relationship between two living organisms where one benefits and the other is not significantly harmed or helped. It is derived from the English word commensal, which is used of human social interaction. The word derives from the medieval Latin word, formed from com- and mensa, meaning "sharing a table."
Commensal relationships may involve one organism using another for transportation (phoresy) or for housing (inquilinism), or it may also involve one organism using something another created, after its death (metabiosis). Examples of metabiosis are hermit crabs using gastropod shells to protect their bodies and spiders building their webs on plants.
Parasitism
right|thumb|Flea bites on a human is an example of parasitism.
A parasitic relationship is one in which one member of the association benefits while the other is harmed. This is also known as antagonistic or antipathetic symbiosis. Parasitic symbioses take many forms, from endoparasites that live within the host's body to ectoparasites that live on its surface. In addition, parasites may be necrotrophic, which is to say they kill their host, or biotrophic, meaning they rely on their host's surviving. Biotrophic parasitism is an extremely successful mode of life. Depending on the definition used, as many as half of all animals have at least one parasitic phase in their life cycles, and it is also frequent in plants and fungi. Moreover, almost all free-living animals are host to one or more parasite taxa. An example of a biotrophic relationship would be a tick feeding on the blood of its host.
Amensalism
Amensalism is the type of relationship that exists where one species is inhibited or completely obliterated while the other is unaffected. There are two types of amensalism: competition and antibiosis. Competition occurs when a larger or stronger organism deprives a smaller or weaker one of a resource. Antibiosis occurs when one organism is damaged or killed by another through a chemical secretion. An example of competition is a sapling growing under the shadow of a mature tree. The mature tree can rob the sapling of necessary sunlight and, if the mature tree is very large, it can take up rainwater and deplete soil nutrients. Throughout the process, the mature tree is unaffected by the sapling. Indeed, if the sapling dies, the mature tree gains nutrients from the decaying sapling. Note that these nutrients become available because of the sapling's decomposition, rather than from the living sapling, which would be a case of parasitism. An example of antibiosis is Juglans nigra (black walnut), secreting juglone, a substance which destroys many herbaceous plants within its root zone.The Editors of Encyclopædia Britannica. (n.d.). Amensalism (biology). Retrieved September 30, 2014, from http://www.britannica.com/EBchecked/topic/19211/amensalism
Amensalism is an interaction where an organism inflicts harm to another organism without any costs or benefits received by the other. A clear case of amensalism is where sheep or cattle trample grass. Whilst the presence of the grass causes negligible detrimental effects to the animal's hoof, the grass suffers from being crushed. Amensalism is often used to describe strongly asymmetrical competitive interactions, such as has been observed between the Spanish ibex and weevils of the genus Timarcha which feed upon the same type of shrub. Whilst the presence of the weevil has almost no influence on food availability, the presence of ibex has an enormous detrimental effect on weevil numbers, as they consume significant quantities of plant matter and incidentally ingest the weevils upon it.
Synnecrosis
Synnecrosis is a rare type of symbiosis in which the interaction between species is detrimental to both organisms involved. It is a short-lived condition, as the interaction eventually causes death. Because of this, evolution selects against synnecrosis and it is uncommon in nature. An example of this is the relationship between some species of bees and victims of the bee sting. Species of bees that die after stinging inflict pain on themselves (albeit to protect the hive) as well as on the victim. The term is rarely used.
Symbiosis and evolution
thumb|180px|Leafhoppers protected by meat ants
While historically, symbiosis has received less attention than other interactions such as predation or competition, it is increasingly recognized as an important selective force behind evolution,
with many species having a long history of interdependent co-evolution. In fact, the evolution of all eukaryotes (plants, animals, fungi, and protists) is believed under the endosymbiotic theory to have resulted from a symbiosis between various sorts of bacteria. This theory is supported by certain organelles dividing independently of the cell, and the observation that some organelles seem to have their own nucleic acid."Symbiosis." Bloomsbury Guide to Human Thought. London: Bloomsbury Publishing Ltd, 1993. Credo Reference. Web. 17 September 2012.
Vascular plants
About 80% of vascular plants worldwide form symbiotic relationships with fungi, for example, in arbuscular mycorrhizas.
Symbiogenesis
The biologist Lynn Margulis, famous for her work on endosymbiosis, contends that symbiosis is a major driving force behind evolution. She considers Darwin's notion of evolution, driven by competition, to be incomplete and claims that evolution is strongly based on co-operation, interaction, and mutual dependence among organisms. According to Margulis and Dorion Sagan, "Life did not take over the globe by combat, but by networking."
Co-evolution
Symbiosis played a major role in the co-evolution of flowering plants and the animals that pollinate them. Many plants that are pollinated by insects, bats, or birds have highly specialized flowers modified to promote pollination by a specific pollinator that is also correspondingly adapted. The first flowering plants in the fossil record had relatively simple flowers. Adaptive speciation quickly gave rise to many diverse groups of plants, and, at the same time, corresponding speciation occurred in certain insect groups. Some groups of plants developed nectar and large sticky pollen, while insects evolved more specialized morphologies to access and collect these rich food sources. In some taxa of plants and insects the relationship has become dependent, where the plant species can only be pollinated by one species of insect.
List of symbioses
Some of the following symbioses have been discussed on this page, details on the others may be found on the linked pages.
Thalassiosira pseudonana (diatom) and Ruegeria pomeroyi (alphaproteobacterium)
Ocellaris clownfish and Ritteri sea anemones
Goby fish and shrimp
Bryozoans and hermit crabs
See also
Anagenesis
Aposymbiotic
Aquaponics
Cheating (biology)
Cleaning symbiosis
Human Microbiome Project
Interspecies friendship
List of symbiotic organisms
List of symbiotic relationships
Microbial consortium
Multigenomic organism
Symbiosis (chemical)
References
Bibliography
External links
TED-Education video - Symbiosis: a surprising tale of species cooperation.
Category:Symbiosis | 39,626 | 2017-01 |
Military history of the United States | thumb|500px|right|U.S. military personnel and expenditures, 1790–2006. Personnel is shown in orange (left axis); expenditures are in teal (right axis). The two axes are scaled to visually align for World War II, thus showing the difference between the cost per soldier before and after President Dwight D. Eisenhower's "New Look" policy of the mid-1950s.
The military history of the United States spans a period of over two centuries. During those years, the United States evolved from a new nation fighting Great Britain for independence (1775–83), through the monumental American Civil War (1861–65) and, after collaborating in triumph during World War II (1941–45), to its position as the world's sole remaining superpower from the late 20th century to the present.John Whiteclay Chambers, ed., The Oxford Guide to American Military History (1999)
The Continental Congress in 1775 established the Continental Army and named General George Washington its commander. This newly formed army, along with state militia forces, the French Army and Navy, and the Spanish Navy, defeated the British in 1781. The new Constitution in 1789 made the president the commander in chief, with authority for the Congress to levy taxes, make the laws, and declare war.Jeremy Black, America as a Military Power: From the American Revolution to the Civil War (2002)
As of 2015, the U.S. military consists of the Army, Marine Corps, Navy and Air Force, all under the command of the United States Department of Defense. There also is the United States Coast Guard, which is controlled by the Department of Homeland Security.
The President of the United States is the commander in chief, and exercises this authority through the Secretary of Defense and the Chairman of the Joint Chiefs of Staff, who supervise combat operations. Governors have control of each state's Army and Air National Guard units for limited purposes. The president has the ability to federalize National Guard units, bringing them under the sole control of the Department of Defense.Fred Anderson, ed. The Oxford Companion to American Military History (2000)
Colonial wars (1620–1774)
The beginning of the United States military lies in civilian frontier settlers, armed for hunting and basic survival in the wilderness. These were organized into local militias for small military operations, mostly against Native American tribes but also to resist possible raids by the small military forces of neighboring European colonies. They relied on the British regular Army and Navy for any serious military operation.Spencer C. Tucker, James Arnold, and Roberta Wiener eds. The Encyclopedia of North American Colonial Conflicts to 1775: A Political, Social, and Military History (2008) excerpt and text search
In major operations outside the locality involved, the militia was not employed as a fighting force. Instead the colony asked for (and paid) volunteers, many of whom were also militia members.James Titus, The Old Dominion at War: Society, Politics and Warfare in Late Colonial Virginia (1991)
200px|thumb|right|Siege of Louisbourg (1758)
In the early years of the British colonization of North America, military action in the thirteen colonies that would become the United States was the result of conflicts with Native Americans, such as the Pequot War of 1637, King Philip's War in 1675, the Yamasee War in 1715, and Father Rale's War in 1722.
Beginning in 1689, the colonies became involved in a series of wars between Great Britain and France for control of North America, the most important of which were Queen Anne's War, in which the British conquered French colony Acadia, and the final French and Indian War (1754–63) when Britain was victorious over all the French colonies in North America. This final war was to give thousands of colonists, including Virginia colonel George Washington, military experience which they put to use during the American Revolutionary War.Fred Anderson, The War That Made America: A Short History of the French and Indian War (2006)
War of Independence (1775–83)
thumb|200px|right|Detail from Washington and his Generals at Yorktown (c. 1781) by Charles Willson Peale. Lafayette (far left) is at Washington's right, the Comte de Rochambeau to his immediate left.
Ongoing political tensions between Great Britain and the thirteen colonies reached a crisis in 1774 when the British placed the province of Massachusetts under martial law after the Patriots protested taxes they regarded as a violation of their constitutional rights as Englishmen. When shooting began at Lexington and Concord in April 1775, militia units from across New England rushed to Boston and bottled up the British in the city. The Continental Congress appointed George Washington as commander-in-chief of the newly created Continental Army, which was augmented throughout the war by colonial militia. He drove the British out of Boston but in late summer 1776 they returned to New York and nearly captured Washington's army. Meanwhile, the revolutionaries expelled British officials from the 13 states, and declared themselves an independent nation on July 4, 1776.Don Higginbotham, The war of American independence: military attitudes, policies, and practice, 1763–1789 (1983)
thumb|right|Washington's surprise crossing of the Delaware River in December 1776 was a major comeback after the loss of New York City; his army defeated the British in two battles and recaptured New Jersey.
The British, for their part, lacked both a unified command and a clear strategy for winning. With the use of the Royal Navy, the British were able to capture coastal cities, but control of the countryside eluded them. A British sortie from Canada in 1777 ended with the disastrous surrender of a British army at Saratoga. With the coming in 1777 of General von Steuben, the training and discipline along Prussian lines began, and the Continental Army began to evolve into a modern force. France and Spain then entered the war against Great Britain as Allies of the US, ending its naval advantage and escalating the conflict into a world war. The Netherlands later joined France, and the British were outnumbered on land and sea in a world war, as they had no major allies apart from Indian tribes.
A shift in focus to the southern American states in 1779 resulted in a string of victories for the British, but General Nathanael Greene engaged in guerrilla warfare and prevented them from making strategic headway. The main British army was surrounded by Washington's American and French forces at Yorktown in 1781, as the French fleet blocked a rescue by the Royal Navy. The British then sued for peace.
George Washington
General George Washington (1732–99) proved an excellent organizer and administrator, who worked successfully with Congress and the state governors, selecting and mentoring his senior officers, supporting and training his troops, and maintaining an idealistic Republican Army. His biggest challenge was logistics, since neither Congress nor the states had the funding to provide adequately for the equipment, munitions, clothing, paychecks, or even the food supply of the soldiers. As a battlefield tactician Washington was often outmaneuvered by his British counterparts. As a strategist, however, he had a better idea of how to win the war than they did. The British sent four invasion armies. Washington's strategy forced the first army out of Boston in 1776, and was responsible for the surrender of the second and third armies at Saratoga (1777) and Yorktown (1781). He limited the British control to New York and a few places while keeping Patriot control of the great majority of the population. The Loyalists, on whom the British had relied too heavily, comprised about 20% of the population but were never well organized. As the war ended, Washington watched proudly as the final British army quietly sailed out of New York City in November 1783, taking the Loyalist leadership with them. Washington astonished the world when, instead of seizing power, he retired quietly to his farm in Virginia.Lesson Plan on "What Made George Washington a Good Military Leader?" NEH EDSITEMENTEdward G. Lengel, General George Washington: A Military Life (2007)
Patriots had a strong distrust of a permanent "standing army", so the Continental Army was quickly demobilized, with land grants to veterans. General Washington, who throughout the war deferred to elected officials, averted a potential coup d'état and resigned as commander-in-chief after the war, establishing a tradition of civil control of the U.S. military.
Early national period (1783–1812)
Following the American Revolutionary War, the United States faced potential military conflict on the high seas as well as on the western frontier. The United States was a minor military power during this time, having only a modest army, Marine corps, and navy. A traditional distrust of standing armies, combined with faith in the abilities of local militia, precluded the development of well-trained units and a professional officer corps. Jeffersonian leaders preferred a small army and navy, fearing that a large military establishment would involve the United States in excessive foreign wars, and potentially allow a domestic tyrant to seize power.Richard H. Kohn, Eagle and Sword: The Federalists and the Creation of the Military Establishment in America, 1783–1802 (1975)
thumb|210px|Stephen Decatur boarding the Tripolitan gunboat, 3 August 1804, the First Barbary War
In the Treaty of Paris after the Revolution, the British had ceded the lands between the Appalachian Mountains and the Mississippi River to the United States, without consulting the Shawnee, Cherokee, Choctaw and other smaller tribes who lived there. Because many of the tribes had fought as allies of the British, the United States compelled tribal leaders to sign away lands in postwar treaties, and began dividing these lands for settlement. This provoked a war in the Northwest Territory in which the U.S. forces performed poorly; the Battle of the Wabash in 1791 was the most severe defeat ever suffered by the United States at the hands of American Indians. President Washington dispatched a newly trained army to the region led by General Anthony Wayne, which decisively defeated the Indian confederacy at the Battle of Fallen Timbers in 1794.William B. Kessel and Robert Wooster, eds. Encyclopedia of Native American wars and warfare (2005) pp 50, 123, 186, 280
When revolutionary France declared war on Great Britain in 1793, the United States sought to remain neutral, but the Jay Treaty, which was favorable to Great Britain, angered the French government, which viewed it as a violation of the 1778 Treaty of Alliance. French privateers began to seize U.S. vessels, which led to an undeclared "Quasi-War" between the two nations. Fought at sea from 1798 to 1800, the United States won a string of victories in the Caribbean. George Washington was called out of retirement to head a "provisional army" in case of invasion by France, but President John Adams managed to negotiate a truce, in which France agreed to terminate the prior alliance and cease its attacks.Michael A. Palmer, Stoddert's war: naval operations during the quasi-war with France (1999)
Barbary Wars
The Berbers along the Barbary Coast (modern-day Libya) sent pirates to capture merchant ships and hold the crews for ransom. The U.S. paid protection money until 1801, when President Thomas Jefferson refused to pay and sent in the Navy to challenge the Barbary States; the First Barbary War followed. After the USS Philadelphia was captured in 1803, Lieutenant Stephen Decatur led a raid which successfully burned the captured ship, preventing Tripoli from using or selling it. In 1805, after William Eaton captured the city of Derna, Tripoli agreed to a peace treaty. The other Barbary states continued to raid U.S. shipping, until the Second Barbary War in 1815 ended the practice.Frank Lambert, The Barbary Wars: American Independence in the Atlantic World (2007)
War of 1812
right|200px|thumb|"We have met the enemy and they are ours." Commodore Oliver Hazard Perry's victory on Lake Erie in 1813 was an important turning point in the War of 1812. (Painting by William H. Powell, 1865)
By far the largest military action in which the United States engaged during this era was the War of 1812.J. C. A. Stagg, The War of 1812: Conflict for a Continent (2012) With Britain locked in a major war with Napoleon's France, its policy was to block American shipments to France. The United States sought to remain neutral while pursuing overseas trade. Britain cut the trade and impressed seamen on American ships into the Royal Navy, despite intense protests. Britain supported an Indian insurrection in the American Midwest, with the goal of creating an Indian state there that would block American expansion. The United States finally declared war on the United Kingdom in 1812, the first time the U.S. had officially declared war. Not hopeful of defeating the Royal Navy, the U.S. attacked the British Empire by invading British Canada, hoping to use captured territory as a bargaining chip. The invasion of Canada was a debacle, though concurrent wars with Native Americans on the western front (Tecumseh's War and the Creek War) were more successful. After defeating Napoleon in 1814, Britain sent large veteran armies to invade New York, raid Washington, and seize control of the Mississippi River at New Orleans. The New York invasion ended in a fiasco when the much larger British army retreated to Canada. The raiders succeeded in the burning of Washington on 25 August 1814, but were repulsed in their Chesapeake Bay Campaign at the Battle of Baltimore, where the British commander was killed. The major invasion in Louisiana was stopped by a one-sided military battle that killed the top three British generals and thousands of soldiers. The winners were the commanding general at the Battle of New Orleans, Major General Andrew Jackson, who later became president, and the Americans, who basked in a victory over a much more powerful nation. The peace treaty proved successful, and the U.S. and Britain never again went to war. The losers were the Indians, who never gained the independent territory in the Midwest promised by Britain.Walter R. Borneman, 1812: The War That Forged a Nation (2005) is an American perspective; Mark Zuehlke, For Honour's Sake: The War of 1812 and the Brokering of an Uneasy Peace (2006) provides a Canadian perspective.
War with Mexico (1846–48)
thumb|200px|American forces storming the "Halls of Montezuma"
With the rapid expansion of the farming population, Democrats looked to the west for new lands, an idea which became known as "Manifest Destiny." In the Texas Revolution (1835–36), the settlers declared independence and defeated the Mexican army, but Mexico was determined to reconquer the lost province and threatened war with the U.S. if it annexed Texas. The U.S., much larger and more powerful, did annex Texas in 1845 and war broke out in 1846 over boundary issues.Robert W. Merry, A Country of Vast Designs: James K. Polk, the Mexican War and the Conquest of the American Continent (2009) excerpt and text searchJustin Harvey Smith, The War with Mexico, Vol 1. (2 vol 1919), full text online; Smith, The War with Mexico, Vol 2. (1919). full text online of Pulitzer prize winning history.
In the Mexican–American War of 1846–48, the U.S. Army, under Generals Zachary Taylor and Winfield Scott and others, invaded Mexico and, after a series of victorious battles (and no major defeats), seized New Mexico and California; it also blockaded the coast, invaded northern Mexico, and invaded central Mexico, capturing the national capital. The peace terms involved American purchase of the area from California to New Mexico for $15 million.K. Jack Bauer, The Mexican War, 1846–1848 (1974); David S. Heidler, and Jeanne T. Heidler, The Mexican War. (2005)
American Civil War (1861–65)
right|thumb|200px|Dead soldiers lie where they fell at Antietam, the bloodiest day in American history. Abraham Lincoln issued the Emancipation Proclamation after this battle.
Sectional tensions had long existed between the states located north of the Mason–Dixon line and those south of it, primarily centered on the "peculiar institution" of slavery and the ability of states to overrule the decisions of the national government. During the 1840s and 1850s, conflicts between the two sides became progressively more violent. After the election of Abraham Lincoln in 1860 (who southerners thought would work to end slavery) states in the South seceded from the United States, beginning with South Carolina in late 1860. On April 12, 1861, forces of the South (known as the Confederate States of America or simply the Confederacy) opened fire on Fort Sumter, whose garrison was loyal to the Union.Louis P. Masur, The Civil War: A Concise History (2011)
The American Civil War caught both sides unprepared. The Confederacy hoped to win by getting Britain and France to intervene, or else by wearing down the North's willingness to fight. The U.S. sought a quick victory focused on capturing the Confederate capital at Richmond, Virginia. The Confederates under Robert E. Lee tenaciously defended their capital until the very end. The war spilled across the continent, and even to the high seas. Most of the material and personnel of the South were used up, while the North prospered.
The American Civil War is sometimes called the "first modern war" due to the mobilization (and destruction) of the civilian base. It also is characterized by many technical innovations involving railroads, telegraphs, rifles, trench warfare, and ironclad warships with turret guns.Benjamin Bacon, Sinews of War: How Technology, Industry, and Transportation Won the Civil War (1997)
Post-Civil War era (1865–1917)
Indian Wars (1865–91)
After the Civil War, population expansion, railroad construction, and the disappearance of the buffalo herds heightened military tensions on the Great Plains. Several tribes, especially the Sioux and Comanche, fiercely resisted confinement to reservations. The main role of the Army was to keep indigenous peoples on reservations and to end their wars against settlers and each other; William Tecumseh Sherman and Philip Sheridan were in charge. A famous victory for the Plains Nations was the Battle of the Little Big Horn in 1876, when Col. George Armstrong Custer and more than two hundred members of the 7th Cavalry were killed by a force consisting of Native Americans from the Lakota, Northern Cheyenne, and Arapaho nations. The last significant conflict came in 1891.Utley, (1984)
Spanish–American War (1898)
thumb|left|200px|Charge by the Rough Riders
The Spanish–American War was a short decisive war marked by quick, overwhelming American victories at sea and on land against Spain. The Navy was well-prepared and won laurels, even as politicians tried (and failed) to have it redeployed to defend East Coast cities against potential threats from the feeble Spanish fleet.Jim Leeke, Manila And Santiago: The New Steel Navy in the Spanish–American War (2009) The Army performed well in combat in Cuba. However, it was too oriented to small posts in the West and not as well-prepared for an overseas conflict.Graham A. Cosmas, An Army for Empire: The United States Army in the Spanish–American War (1998) It relied on volunteers and state militia units, which faced logistical, training and food problems in the staging areas in Florida.Richard W. Stewart, "Emergence to World Power 1898–1902" Ch. 15, in "American Military History, Volume I: The United States Army and the Forging of a Nation, 1775–1917", (2004) The United States freed Cuba (after an occupation by the U.S. Army). By the peace treaty Spain ceded to the United States its colonies of Puerto Rico, Guam, and the Philippines. The Navy set up coaling stations there and in Hawaii (which voluntarily joined the U.S. in 1898). The U.S. Navy now had a major forward presence across the Pacific and (with the lease of Guantánamo Bay Naval Base in Cuba) a major base in the Caribbean guarding the approaches to the Gulf Coast and the Panama Canal.William Braisted, United States Navy in the Pacific, 1897–1909 (2008)
Philippine–American War (1899–1902)
The Philippine–American War (1899–1902) was an armed conflict between a group of Filipino revolutionaries and the American forces following the ceding of the Philippines to the United States after the defeat of Spanish forces in the Battle of Manila. The Army sent in 100,000 soldiers (mostly from the National Guard) under General Elwell Otis. Defeated in the field and losing its capital in March 1899, the poorly armed and poorly led rebels broke into armed bands. The insurgency collapsed in March 1901 when the leader Emilio Aguinaldo was captured by General Frederick Funston and his Macabebe allies. Casualties included 1,037 Americans killed in action and 3,340 who died from disease; 20,000 rebels were killed.Brian McAllister Linn, The Philippine War 1899–1902 (University Press of Kansas, 2000). ISBN 0-7006-0990-3
Modernization
The Navy was modernized in the 1880s, and by the 1890s had adopted the naval power strategy of Captain Alfred Thayer Mahan—as indeed did every major navy. The old sailing ships were replaced by modern steel battleships, bringing them in line with the navies of Britain and Germany. In 1907, most of the Navy's battleships, with several support vessels, dubbed the Great White Fleet, were featured in a 14-month circumnavigation of the world. Ordered by President Theodore Roosevelt, it was a mission designed to demonstrate the Navy's capability to extend to the global theater.Henry J. Hendrix, Theodore Roosevelt's Naval Diplomacy: The U.S. Navy and the Birth of the American Century (2009)
Secretary of War Elihu Root (1899–1904) led the modernization of the Army. His goal of a uniformed chief of staff as general manager and a European-type general staff for planning was stymied by General Nelson A. Miles but did succeed in enlarging West Point and establishing the U.S. Army War College as well as the General Staff. Root changed the procedures for promotions and organized schools for the special branches of the service. He also devised the principle of rotating officers from staff to line. Root was concerned about the Army's role in governing the new territories acquired in 1898 and worked out the procedures for turning Cuba over to the Cubans, and wrote the charter of government for the Philippines.James E. Hewes, Jr. From Root to McNamara: Army Organization and Administration, 1900–1963 (1975)
Rear Admiral Bradley A. Fiske was at the vanguard of new technology in naval guns and gunnery, thanks to his innovations in fire control 1890–1910. He immediately grasped the potential for air power, and called for the development of a torpedo plane. Fiske, as aide for operations in 1913–15 to Assistant Secretary Franklin D. Roosevelt, proposed a radical reorganization of the Navy to make it a war-fighting instrument. Fiske wanted to centralize authority in a chief of naval operations and an expert staff that would develop new strategies, oversee the construction of a larger fleet, coordinate war planning including force structure, mobilization plans, and industrial base, and ensure that the US Navy possessed the best possible war machines. Eventually, the Navy adopted his reforms and by 1915 started to reorganize for possible involvement in the World War then underway.Paolo Coletta, Admiral Bradley A. Fiske and the American Navy (1979)
Banana Wars (1898–1935)
"Banana Wars" is an informal term for the minor intervention in Latin America from 1898 until 1934. These include military presence in Cuba, Panama with the Panama Canal Zone, Haiti (1915–1935), Dominican Republic (1916–1924) and Nicaragua (1912–1925) & (1926–1933). The U.S. Marine Corps began to specialize in long-term military occupation of these countries, primarily to safeguard customs revenues which were the cause of local civil wars.Lester D. Langley, The Banana Wars: United States Intervention in the Caribbean, 1898–1934 (2001)
Moro Rebellion (1899–1913)
The Moro Rebellion was an armed insurgency between Muslim Filipino tribes in the southern Philippines between 1899 and 1913. Pacification was never complete as sporadic antigovernment insurgency continues into the 21st century, with American advisors helping the Philippine government forces.Charles Byler, "Pacifying the Moros: American Military Government in the Southern Philippines, 1899–1913" Military Review (May–June 2005) pp 41–45. online
Mexico (1910–19)
thumb|U.S. enters Mexico in 1916 to punish Pancho Villa
The Mexican Revolution involved a civil war with hundreds of thousands of deaths and large numbers fleeing combat zones. Tens of thousands fled to the United States. President Wilson sent U.S. forces to occupy the Mexican city of Veracruz for six months in 1914. The occupation was designed to show that the U.S. was keenly interested in the civil war and would not tolerate attacks on Americans, especially the April 9, 1914, "Tampico Affair", which involved the arrest of American sailors by soldiers of the regime of Mexican President Victoriano Huerta.John S. D. Eisenhower, Intervention!: The United States and the Mexican Revolution, 1913–1917 (1995) In early 1916, Pancho Villa, a Mexican general, ordered 500 soldiers on a murderous raid on the American town of Columbus, New Mexico, with the goal of robbing banks to fund his army.E. Bruce White and Francisco Villa, "The Muddied Waters of Columbus, New Mexico," The Americas 32#1 (July 1975), pp. 72–98 in JSTOR The German Secret Service encouraged Pancho Villa in his attacks, hoping to involve the United States in an intervention in Mexico that would distract it from its growing involvement in the European war and divert aid from Europe to support the intervention.Friedrich Katz, The Secret War in Mexico: Europe, the United States, and the Mexican Revolution (1984) Wilson called up the state militias (National Guard) and sent them and the U.S. Army under General John J. Pershing to punish Villa in the Pancho Villa Expedition. Villa fled, with the Americans in pursuit deep into Mexico, thereby arousing Mexican nationalism. By early 1917 President Venustiano Carranza had contained Villa and secured the border, so Wilson ordered Pershing to withdraw.James W. Hurst, Pancho Villa and Black Jack Pershing: The Punitive Expedition in Mexico (2007)Friedrich Katz, "Pancho Villa and the Attack on Columbus, New Mexico," American Historical Review 83#1 (1978), pp. 101–130 in JSTOR
World War I (1917–18)
thumb|200px|Men of the 69th Infantry Regiment parading upon returning to New York City.
The United States originally wished to remain neutral when World War I broke out in August 1914. However, it insisted on its right as a neutral party to immunity from German submarine attack, even though its ships carried food and raw materials to Britain. In 1917 the Germans resumed submarine attacks, knowing that it would lead to American entry. When the United States declared war in early April 1917, the United States Army was still small by European standards (most European armies relied on conscription), and mobilization would take at least a year. Meanwhile, the United States continued to provide supplies and money to Britain and France, and initiated its first draft since the Civil War.Kendrick A. Clements, "Woodrow Wilson and World War I," Presidential Studies Quarterly 34:1 (2004). pp 62+. online edition Industrial mobilization took longer than expected, so divisions were sent to Europe without equipment, relying instead on the British and French to supply them.Anne Venzon, ed., The United States in the First World War: An Encyclopedia (1995)
By summer 1918, a million American soldiers, or "doughboys" as they were often called, of the American Expeditionary Force (AEF) were in Europe, serving on the Western Front under the command of General John Pershing, with 25,000 more arriving every week. The failure of the German Army's Spring Offensive exhausted its manpower reserves, and it was unable to launch new offensives. The Imperial German Navy and home front then revolted, and a new German government signed a conditional surrender, the Armistice, ending the war on the Western Front on November 11, 1918.Edward M. Coffman, The War to End All Wars: The American Military Experience in World War I (1998)
Russian Revolution (1918–19)
The so-called Polar Bear Expedition was the involvement of 5,000 U.S. troops, during the Russian Revolution, in blocking the Bolsheviks in Arkhangelsk, Russia as part of the greater Allied military expedition in the Russian Civil War.Robert L. Willett, "Russian Sideshow" (Washington, D.C., Brassey's Inc., 2003), page 267
1920s: Naval disarmament
The U.S. sponsored a major world conference to limit the naval armaments of world powers, including the U.S., Britain, Japan, and France, plus smaller nations.Germany and the Soviet Union were not invited. Secretary of State Charles Evans Hughes made the key proposal that each country reduce its number of warships according to a formula, and it was accepted. The conference enabled the great powers to reduce their navies and avoid conflict in the Pacific. The treaties remained in effect for ten years, but were not renewed as tensions escalated.Emily O. Goldman, Sunken Treaties (1994)
1930s: Neutrality Acts
After the costly U.S. involvement in World War I, isolationism grew within the nation. Congress refused membership in the League of Nations, and in response to the growing turmoil in Europe and Asia, the gradually more restrictive Neutrality Acts were passed, which were intended to prevent the U.S. from supporting either side in a war. President Franklin D. Roosevelt sought to support Britain, however, and in March 1941 signed the Lend-Lease Act, which went beyond the earlier "cash and carry" arms trade by allowing the transfer of war materiel to Britain, which controlled the Atlantic sea lanes.
Roosevelt favored the Navy (he was in effective charge in World War I), and used relief programs such as the PWA to support Navy yards and build warships. For example, in 1933 he authorized $238 million in PWA funds for thirty-two new ships. The Army Air Corps received only $11 million, which barely covered replacements and allowed no expansion.Jeffery S. Underwood, The Wings of Democracy: The Influence of Air Power on the Roosevelt Administration, 1933–1941 (1991) pp 34–35
World War II (1941–45)
Starting in 1940 (18 months before Pearl Harbor), the nation mobilized, giving high priority to air power. American involvement in World War II in 1940–41 was limited to providing war material and financial support to Britain, the Soviet Union, and the Republic of China. The U.S. entered officially on 8 December 1941 following the Japanese attack on Pearl Harbor, Hawaii. Japanese forces soon seized American, Dutch, and British possessions across the Pacific and Southeast Asia, except for Australia, which became a main American forward base along with Hawaii.
left|thumb|200px|The explosion aboard the USS Arizona during the attack on Pearl Harbor
The loss of eight battleships and 2,403 Americans at Pearl Harbor forced the U.S. to rely on its remaining aircraft carriers, which won a major victory over Japan at Midway just six months into the war, and on its growing submarine fleet. The Navy and Marine Corps followed this up with an island hopping campaign across the central and south Pacific in 1943–45, reaching the outskirts of Japan in the Battle of Okinawa. During 1942 and 1943, the U.S. deployed millions of men and thousands of planes and tanks to the UK, beginning with the strategic bombing of Nazi Germany and occupied Europe and leading up to the Allied invasions of occupied North Africa in November 1942, Sicily and Italy in 1943, France in 1944, and the invasion of Germany in 1945, parallel with the Soviet invasion from the east. That led to the surrender of Nazi Germany in May 1945. In the Pacific, the U.S. experienced much success in naval campaigns during 1944, but bloody battles at Iwo Jima and Okinawa in 1945 led the U.S. to look for a way to end the war with minimal loss of American lives. The U.S. used atomic bombs on Hiroshima and Nagasaki to destroy the Japanese war effort and to shock the Japanese leadership, which quickly caused the surrender of Japan.
The United States was able to mobilize quickly, eventually becoming the dominant military power in most theaters of the war (excepting only eastern Europe), and the industrial might of the U.S. economy became a major factor in the Allies' mobilization of resources. Strategic and tactical lessons learned by the U.S., such as the importance of air superiority and the dominance of the aircraft carrier in naval actions, continue to guide U.S. military doctrine into the 21st century.
right|thumb|200px|General of the Army MacArthur signs on behalf of the Allies
World War II holds a special place in the American psyche as the country's greatest triumph, and the U.S. military personnel of World War II are frequently referred to as "the Greatest Generation." Over 16 million served (about 11% of the population), and over 400,000 died during the war. The U.S. emerged as one of the two undisputed superpowers along with the Soviet Union, and unlike the Soviet Union, the U.S. homeland was virtually untouched by the ravages of war. During and following World War II, the United States and Britain developed an increasingly strong defense and intelligence relationship. Manifestations of this include extensive basing of U.S. forces in the UK, shared intelligence, shared military technology (e.g. nuclear technology), and shared procurement.
Cold War era (1945–91)
Following World War II, the United States emerged as a global superpower vis-a-vis the Soviet Union in the Cold War. In this period of some forty years, the United States provided foreign military aid and became directly involved in proxy wars against the Soviet Union. It was the principal foreign actor in the Korean War and the Vietnam War during this era. Nuclear weapons were kept at the ready by the United States under the concept of mutually assured destruction with the Soviet Union.
Postwar military reorganization (1947)
The National Security Act of 1947, meeting the need for a military reorganization to complement the U.S. superpower role, combined and replaced the former Department of the Navy and War Department with a single cabinet-level Department of Defense. The act also created the National Security Council, the Central Intelligence Agency, and the Air Force.
Korean War (1950–53)
thumb|200 px|Beachhead at Inchon
The Korean War was a conflict between the United States and its United Nations allies and the communist powers under the influence of the Soviet Union (also a UN member nation) and the People's Republic of China (which later also gained UN membership). The principal combatants were North and South Korea. Principal allies of South Korea included the United States, Canada, Australia, and the United Kingdom, although many other nations sent troops under the aegis of the United Nations. Allies of North Korea included the People's Republic of China, which supplied military forces, and the Soviet Union, which supplied combat advisors and aircraft pilots, as well as arms, for the Chinese and North Korean troops.Allan R. Millett, "A Reader's Guide To The Korean War," Journal of Military History (1997) Vol. 61 No. 3; p. 583+ online version
The war started badly for the US and UN. North Korean forces struck massively in the summer of 1950 and nearly drove the outnumbered US and ROK defenders into the sea. However the United Nations intervened, naming Douglas MacArthur commander of its forces, and UN-US-ROK forces held a perimeter around Pusan, gaining time for reinforcement. MacArthur, in a bold but risky move, ordered an amphibious invasion well behind the front lines at Inchon, cutting off and routing the North Koreans and quickly crossing the 38th Parallel into North Korea. As UN forces continued to advance toward the Yalu River on the border with Communist China, the Chinese crossed the Yalu River in October and launched a series of surprise attacks that sent the UN forces reeling back across the 38th Parallel. Truman originally wanted a Rollback strategy to unify Korea; after the Chinese successes he settled for a Containment policy to split the country.James I. Matray, "Truman's Plan for Victory: National Self-Determination and the Thirty-Eighth Parallel Decision in Korea," Journal of American History, Sept. 1979, Vol. 66 Issue 2, pp 314–333, in JSTOR MacArthur argued for rollback but was fired by President Harry Truman after disputes over the conduct of the war. Peace negotiations dragged on for two years until President Dwight D. Eisenhower threatened China with nuclear weapons; an armistice was quickly reached with the two Koreas remaining divided at the 38th parallel. North and South Korea are still today in a state of war, having never signed a peace treaty, and American forces remain stationed in South Korea as part of American foreign policy.Stanley Sandler, ed., The Korean War: An Encyclopedia (Garland, 1995)
Lebanon crisis of 1958
In the Lebanon crisis of 1958 that threatened civil war, Operation Blue Bat deployed several hundred Marines to bolster the pro-Western Lebanese government from July 15 to October 25, 1958.
Dominican Intervention
On April 28, 1965, 400 Marines were landed in Santo Domingo to evacuate the American Embassy and foreign nationals after dissident Dominican armed forces attempted to overthrow the ruling civilian junta. By mid-May, a peak strength of 23,850 U.S. soldiers, Marines, and airmen was deployed in the Dominican Republic, and some 38 naval ships were positioned offshore. They evacuated nearly 6,500 men, women, and children of 46 nations, and distributed more than 8 million tons of food.
Vietnam War (1964–75)
200px|thumb|left|Formation of Iroquois ca. 1966
The Vietnam War was a war fought between 1959 and 1975 on the ground in South Vietnam and bordering areas of Cambodia and Laos (see Secret War) and in the strategic bombing (see Operation Rolling Thunder) of North Vietnam. American advisors came in the late 1950s to help the RVN (Republic of Vietnam) combat Communist insurgents known as "Viet Cong." Major American military involvement began in 1964, after Congress provided President Lyndon B. Johnson with blanket approval for presidential use of force in the Gulf of Tonkin Resolution.John Prados, Vietnam: The History of an Unwinnable War, 1945–1975 (2009)
Fighting on one side was a coalition of forces including the Republic of Vietnam (South Vietnam, or the "RVN") and the United States, supplemented by South Korea, Thailand, Australia, New Zealand, and the Philippines. The allies fought against the North Vietnamese Army (NVA) as well as the National Liberation Front (NLF), better known as the Viet Cong or "VC", a guerrilla force within South Vietnam. The NVA received substantial military and economic aid from the Soviet Union and China, turning Vietnam into a proxy war.Mark Atwood Lawrence, The Vietnam War: A Concise International History (2010)
The military history of the American side of the war involved different strategies over the years.Spencer Tucker, Vietnam (2000); for coverage of each major operation see Stanley I. Kutler, ed., Encyclopedia of the Vietnam War (1996) and Spencer C. Tucker, ed. Encyclopedia of the Vietnam War: A Political, Social, and Military History (2001) The bombing campaigns of the Air Force were tightly controlled by the White House for political reasons, and until 1972 avoided the main Northern cities of Hanoi and Haiphong and concentrated on bombing jungle supply trails, especially the Ho Chi Minh Trail.Mark Clodfelter, The Limits of Air Power: The American Bombing of North Vietnam (2006) The most controversial Army commander was William Westmoreland, whose strategy involved systematic defeat of all enemy forces in the field, despite heavy American casualties that alienated public opinion back home.Lewis Sorley, Westmoreland: The General Who Lost Vietnam (2011)
thumb|200px|United States Embassy following the Tet Offensive
The U.S. framed the war as part of its policy of containment of Communism in south Asia, but American forces were frustrated by an inability to engage the enemy in decisive battles, corruption and incompetence in the Army of the Republic of Vietnam, and ever increasing protests at home. The Tet Offensive in 1968, although a major military defeat for the NLF with half their forces eliminated, marked the psychological turning point in the war. With President Richard M. Nixon opposed to containment and more interested in achieving détente with both the Soviet Union and China, American policy shifted to "Vietnamization," – providing very large supplies of arms and letting the Vietnamese fight it out themselves. After more than 57,000 dead and many more wounded, American forces withdrew in 1973 with no clear victory, and in 1975 South Vietnam was finally conquered by communist North Vietnam and unified.Robert D. Schulzinger, Time for War: The United States and Vietnam, 1941–1975. (1997) online edition
Memories and lessons from the war are still a major factor in American politics. One side views the war as a necessary part of the Containment policy, which allowed the enemy to choose the time and place of warfare. Others note the U.S. made major strategic gains as the Communists were defeated in Indonesia, and by 1972 both Moscow and Beijing were competing for American support, at the expense of their allies in Hanoi. Critics see the conflict as a "quagmire"—an endless waste of American blood and treasure in a conflict that did not concern US interests. Fears of another quagmire have been major factors in foreign policy debates ever since.Patrick Hagopian, The Vietnam War in American Memory: Veterans, Memorials, and the Politics of Healing (2009) excerpt and text search The draft became extremely unpopular, and President Nixon ended it in 1973,George Q. Flynn, The draft, 1940–1973 (1993) forcing the military (the Army especially) to rely entirely upon volunteers. That raised the issue of how well the professional military reflected overall American society and values; the soldiers typically took the position that their service represented the highest and best American values.Bernard Rostker, I want you!: the evolution of the All-Volunteer Force (2006)
Grenada
In October 1983, a violent power struggle threatened American lives in Grenada. Neighboring nations asked the U.S. to intervene. The invasion force was a hurriedly assembled grouping of paratroopers, Marines, Rangers, and special operations forces in Operation Urgent Fury. Over a thousand American troops quickly seized the entire island, taking hundreds of military and civilian prisoners, especially Cubans.Vijay Tiwathia, The Grenada war: anatomy of a low-intensity conflict (1987)Mark Adkin, Urgent Fury: The Battle for Grenada: The Truth Behind the Largest U.S. Military Operation Since Vietnam (1989)
Beirut
In 1982, fighting between Palestinian refugees and Lebanese factions reignited that nation's long-running civil war. An internationally brokered agreement brought a multinational force of peacekeepers to occupy Beirut and guarantee security. US Marines landed in August 1982 along with Italian and French forces. On October 23, 1983, a suicide bomber driving a truck filled with 6 tons of TNT crashed through a fence and destroyed the Marine barracks, killing 241 American servicemen; seconds later, a second bomber leveled a French barracks, killing 58. Subsequently, the US Navy engaged in bombing of militia positions inside Lebanon. While US President Ronald Reagan was initially defiant, political pressure at home eventually forced the withdrawal of the Marines in February 1984.
Libya
Code-named "Operation El Dorado Canyon", comprised the joint United States Marine Corps, Navy, and Air Force air-strikes against Libya on April 15, 1986. The attack was carried out in response to the 1986 Berlin discotheque bombing, and resulted in the killing of 45 officers and 15 civilians.
Panama
On December 20, 1989, the United States invaded Panama, mainly from U.S. bases within the former Canal Zone, to oust dictator and international drug trafficker Manuel Noriega. American forces quickly overwhelmed the Panamanian Defense Forces; Noriega was captured on January 3, 1990, and imprisoned in the U.S., and a new government was installed.Thomas Donnelly, Margaret Roth and Caleb Baker, Operation Just Cause: The Storming of Panama (1991)
Post–Cold War era (1991–2001)
thumb|500px|right|US military engagements 1990–2002
Persian Gulf War (1990–91)
The Persian Gulf War was a conflict between Iraq and a coalition force of 34 nations led by the United States. The lead-up to the war began with the Iraqi invasion of Kuwait in August 1990, which was met with immediate economic sanctions by the United Nations against Iraq. The coalition commenced hostilities in January 1991, resulting in a decisive victory for the U.S.-led coalition forces, which drove Iraqi forces out of Kuwait with minimal coalition deaths. Despite the low death toll, over 180,000 US veterans would later be classified as "permanently disabled" according to the US Department of Veterans Affairs (see Gulf War Syndrome). The main battles were aerial and ground combat within Iraq, Kuwait and bordering areas of Saudi Arabia. Land combat did not expand outside of the immediate Iraq/Kuwait/Saudi border region, although the coalition bombed cities and strategic targets across Iraq, and Iraq fired missiles on Israeli and Saudi cities.Rick Atkinson, Crusade: The Untold Story of the Persian Gulf War (1994)
left|thumb|200px|USS Wisconsin fires on Iraqi positions in Kuwait
Before the war, many observers believed the US and its allies could win but might suffer substantial casualties (certainly more than any conflict since Vietnam), and that the tank battles across the harsh desert might rival those of North Africa during World War II. After nearly 50 years of proxy wars, and constant fears of another war in Europe between NATO and the Warsaw Pact, some thought the Persian Gulf War might finally answer the question of which military philosophy would have reigned supreme. Iraqi forces were battle-hardened after 8 years of war with Iran, and they were well equipped with late model Soviet tanks and jet fighters, but the antiaircraft weapons were crippled; in comparison, the US had no large-scale combat experience since its withdrawal from Vietnam nearly 20 years earlier, and major changes in US doctrine, equipment and technology since then had never been tested under fire.
However, the battle was one-sided almost from the beginning. The reasons for this are the subject of continuing study by military strategists and academics. There is general agreement that US technological superiority was a crucial factor but the speed and scale of the Iraqi collapse has also been attributed to poor strategic and tactical leadership and low morale among Iraqi troops, which resulted from a history of incompetent leadership. After devastating initial strikes against Iraqi air defenses and command and control facilities on 17 January 1991, coalition forces achieved total air superiority almost immediately. The Iraqi air force was destroyed within a few days, with some planes fleeing to Iran, where they were interned for the duration of the conflict. The overwhelming technological advantages of the US, such as stealth aircraft and infrared sights, quickly turned the air war into a "turkey shoot". The heat signature of any tank which started its engine made an easy target. Air defense radars were quickly destroyed by radar-seeking missiles fired from wild weasel aircraft. Grainy video clips, shot from the nose cameras of missiles as they aimed at impossibly small targets, were a staple of US news coverage and revealed to the world a new kind of war, compared by some to a video game. Over 6 weeks of relentless pounding by planes and helicopters, the Iraqi army was almost completely beaten but did not retreat, under orders from Iraqi President Saddam Hussein, and by the time the ground forces invaded on 24 February, many Iraqi troops quickly surrendered to forces much smaller than their own; in one instance, Iraqi forces attempted to surrender to a television camera crew that was advancing with coalition forces.
After just 100 hours of ground combat, and with all of Kuwait and much of southern Iraq under coalition control, US President George H. W. Bush ordered a cease-fire and negotiations began resulting in an agreement for cessation of hostilities. Some US politicians were disappointed by this move, believing Bush should have pressed on to Baghdad and removed Hussein from power; there is little doubt that coalition forces could have accomplished this if they had desired. Still, the political ramifications of removing Hussein would have broadened the scope of the conflict greatly, and many coalition nations refused to participate in such an action, believing it would create a power vacuum and destabilize the region.Marc J. O'Reilly, Unexceptional: America's Empire in the Persian Gulf, 1941–2007 (2008) p 173
Following the Persian Gulf War, to protect minority populations, the US, Britain, and France declared and maintained no-fly zones in northern and southern Iraq, which the Iraqi military frequently tested. The no-fly zones persisted until the 2003 invasion of Iraq, although France withdrew from participation in patrolling the no-fly zones in 1996, citing a lack of humanitarian purpose for the operation.
Somalia
US troops participated in a UN peacekeeping mission in Somalia beginning in 1992. By 1993, the US troops were augmented with Rangers and special forces with the aim of capturing warlord Mohamed Farrah Aidid, whose forces had massacred peacekeepers from Pakistan. During a raid in downtown Mogadishu, US troops became trapped overnight by a general uprising in the Battle of Mogadishu. Eighteen American soldiers were killed, and a US television crew filmed graphic images of the body of one soldier being dragged through the streets by an angry mob. Somali guerrillas suffered a staggering toll, with an estimated 1,000–5,000 total casualties during the conflict. After much public disapproval, American forces were quickly withdrawn by President Bill Clinton. The incident profoundly affected US thinking about peacekeeping and intervention. The book Black Hawk Down was written about the battle, and was the basis for the later movie of the same name.John L. Hirsch and Robert B. Oakley, Somalia and Operation Restore Hope: Reflections on Peacemaking and Peacekeeping (1995)
Haiti
Operation Uphold Democracy (September 19, 1994 – March 31, 1995) was an intervention designed to restore the elected President Jean-Bertrand Aristide, who had been ousted in a 1991 military coup. The operation was effectively authorized by the 31 July 1994 United Nations Security Council Resolution 940.John R. Ballard, Upholding democracy: the United States military campaign in Haiti, 1994–1997 (1998)
Yugoslavia
During the war in Yugoslavia in the early 1990s, the US operated in Bosnia and Herzegovina as part of the NATO-led multinational implementation force (IFOR) in Operation Joint Endeavour. The USA was one of the NATO member countries that bombed Yugoslavia between March 24 and June 9, 1999, during the Kosovo War, and it later contributed to the multinational force KFOR.Richard C. Holbrooke, To End a War (1999) excerpt and text search
War on Terrorism (2001–present)
The War on Terrorism is a global effort by the governments of several countries (primarily the United States and its principal allies) to neutralize international terrorist groups (primarily Islamic Extremist terrorist groups, including al-Qaeda) and ensure that countries considered by the US and some of its allies to be Rogue Nations no longer support terrorist activities. It has been adopted primarily as a response to the September 11, 2001 attacks on the United States. Since 2001, terrorist motivated attacks upon service members have occurred in Arkansas and Texas.
Afghanistan
thumb|left|200px|alt=Ten horses with riders on the side of a hill|U.S. Army Special Forces and U.S. Air Force Combat Controllers on horseback in November 2001
The intervention in Afghanistan (Operation Enduring Freedom – Afghanistan) to depose that country's Taliban government and destroy training camps associated with al-Qaeda is understood to have been the opening, and in many ways defining, campaign of the broader War on Terrorism. The emphasis on Special Operations Forces (SOF), political negotiation with autonomous military units, and the use of proxy militaries marked a significant change from prior U.S. military approaches.Christopher N. Koontz, Enduring Voices: Oral Histories of the U.S. Army Experience in Afghanistan, 2003–2005 (2008) online
Philippines
In January 2002, the U.S. sent more than 1,200 troops (later raised to 2,000) to assist the Armed Forces of the Philippines in combating terrorist groups linked to al-Qaida, such as Abu Sayyaf, under Operation Enduring Freedom – Philippines. Operations have taken place mostly in the Sulu Archipelago, where terrorists and other groups are active. The majority of troops provide logistics. However, there are special forces troops that are training and assisting in combat operations against the terrorist groups.
Syrian and Iraqi intervention
With the emergence of ISIL and its capture of large areas of Iraq and Syria, a number of crises resulted that sparked international attention. ISIL had perpetrated sectarian killings and war crimes in both Iraq and Syria. Gains made in the Iraq war were rolled back as Iraqi army units abandoned their posts. Cities were taken over by the terrorist group, which enforced its brand of Sharia law. The kidnapping and decapitation of numerous Western journalists and aid workers also garnered interest and outrage among Western powers. The US intervened with airstrikes against ISIL-held territories and assets in Iraq in August 2014, and in September 2014 a coalition of US and Middle Eastern powers initiated a bombing campaign in Syria aimed at degrading and destroying ISIL- and Al-Nusra-held territory.
Iraq
thumb|200px|A Marine Corps M1 Abrams tank patrols a Baghdad street in April 2003
After the lengthy Iraq disarmament crisis culminated with an American demand that Iraqi President Saddam Hussein leave Iraq, which was refused, a coalition led by the United States and the United Kingdom fought the Iraqi army in the 2003 invasion of Iraq. Approximately 250,000 United States troops, with support from 45,000 British, 2,000 Australian and 200 Polish combat forces, entered Iraq primarily through their staging area in Kuwait. (Turkey had refused to permit its territory to be used for an invasion from the north.) Coalition forces also supported Iraqi Kurdish militia, estimated to number upwards of 50,000. After approximately three weeks of fighting, Hussein and the Ba'ath Party were forcibly removed, followed by 9 years of military presence by the United States and the coalition fighting alongside the newly elected Iraqi government against various insurgent groups.
Libyan intervention
As a result of the Libyan Civil War, the United Nations enacted United Nations Security Council Resolution 1973, which imposed a no-fly zone over Libya and mandated the protection of civilians from the forces of Muammar Gaddafi. The United States, along with Britain, France and several other nations, committed a coalition force against Gaddafi's forces. On 19 March 2011, the first U.S. action was taken when 114 Tomahawk missiles launched by US and UK warships destroyed shoreline air defenses of the Gaddafi regime. The U.S. continued to play a major role in Operation Unified Protector, the NATO-directed mission that eventually incorporated all of the military coalition's actions in the theater. Throughout the conflict, however, the U.S. maintained it was playing a supporting role only and was following the UN mandate to protect civilians, while the real conflict was between Gaddafi's loyalists and Libyan rebels fighting to depose him. During the conflict, American drones were also deployed.
See also
Military budget of the United States
United States Armed Forces
History of the United States Army
National Museum of the United States Army
History of the United States Marine Corps
National Museum of the Marine Corps
History of the United States Navy
U.S. Navy Museum
History of the United States Air Force
National Museum of the United States Air Force
History of the United States Coast Guard
History of civil affairs in the United States armed forces
United States and weapons of mass destruction
United States Department of Defense
Awards and decorations of the United States military
United States casualties of war
History of minorities in the United States Armed Forces
United States Armed Forces racial desegregation
Military history of African Americans
Military history of Asian Americans
Military history of Hispanic and Latino Americans
Military history of Jewish Americans
Military history of Sikh Americans
Native Americans in the American Civil War
Native Americans and World War II
Related lists
Timeline of United States military operations
United States military deployments
List of conflicts in the United States
List of military operations
List of United States military leaders by rank
List of wars involving the United States
Military history of Canada
Military history of Mexico
Military history of the Philippines, American period
References
Further reading
Allison, William T., Jeffrey G. Grey, Janet G. Valentine. American Military History: A Survey from Colonial Times to the Present (2nd ed. 2012) 416pp
Boyne, Walter J. Beyond the Wild Blue: A History of the U.S. Air Force, 1947–2007 (2nd ed. 2007) 576 pp excerpt
Chambers, John Whiteclay and G. Kurt Piehler, eds. Major Problems in American Military History: Documents and Essays (1988) 408pp excerpts from primary and secondary sources table of contents
Hagan, Kenneth J. and Michael T. McMaster, eds. In Peace and War: Interpretations of American Naval History (2008), essays by scholars
Hearn, Chester G. Air Force: An Illustrated History: The U.S. Air Force from 1910 to the 21st Century (2008) excerpt and text search
Isenberg, Michael T. Shield of the Republic: The United States Navy in an Era of Cold War and Violent Peace 1945–1962 (1993)
Love, Robert W., Jr. (1992). History of the U.S. Navy 2 vol.
Millett, Allan R. Semper Fidelis: History of the United States Marine Corps (1980) excerpt and text search
Millett, Allan R., Peter Maslowski and William B. Feis. For the Common Defense: A Military History of the United States from 1607 to 2012 (3rd ed. 2013) excerpt and text search
Morris, James M., ed. Readings in American Military History (2003) 401pp articles by experts
Muehlbauer, Matthew S., and David J. Ulbrich. Ways of War: American Military History from the Colonial Era to the Twenty-First Century (Routledge, 2013), 536pp; university textbook; online review
Stewart, Richard W. American military history (2 vol 2010); The current ROTC textbook
Utley, Robert M. (1984) Frontier Regulars: The United States Army and the Indian, 1866–1891
Utley, Robert M. (2002) Indian Wars
Woodward, David R. The American Army and the First World War (Cambridge University Press, 2014). 484 pp. online review
Historiography
Grimsley, Mark. "The American military history master narrative: Three textbooks on the American military experience," Journal of Military History (2015) 79#3 pp 782–802; review of Allison, Millett, and Muehlbauer textbooks
External links
Website for Ways of War: American Military History from the Colonial Era to the Twenty-First Century By Muehlbauer and Ulbrich, with additional text, bibliographies and student aids
United States Military Campaigns, Conflicts, Expeditions and Wars Compiled by Larry Van Horn, U.S. Navy Retired
Military History wiki
A Continent Divided: The U.S. – Mexico War, Center for Greater Southwestern Studies, the University of Texas at Arlington
National Indian Wars Association
Instances of Use of United States Forces Abroad, 1798–1993 by U.S. Navy
Dog
thumb|300px|Montage showing the morphological variation of the dog.
The domestic dog (Canis lupus familiaris or Canis familiaris) is a member of genus Canis (canines) that forms part of the wolf-like canids, and is the most widely abundant carnivore. The dog and the extant gray wolf are sister taxa, with modern wolves not closely related to the wolves that were first domesticated. The dog was the first domesticated species and has been selectively bred over millennia for various behaviors, sensory capabilities, and physical attributes.
Their long association with humans has led dogs to be uniquely attuned to human behavior and they are able to thrive on a starch-rich diet that would be inadequate for other canid species. Dogs vary widely in shape, size and colours.Why are different breeds of dogs all considered the same species? - Scientific American . Nikhil Swaminathan. Accessed on August 28, 2016. Dogs perform many roles for people, such as hunting, herding, pulling loads, protection, assisting police and military, companionship and, more recently, aiding handicapped individuals. This influence on human society has given them the sobriquet "man's best friend".
Etymology
The term "domestic dog" is generally used for both domesticated and feral varieties. The English word dog comes from Middle English dogge, from Old English docga, a "powerful dog breed". The term may possibly derive from Proto-Germanic *dukkōn, represented in Old English finger-docce ("finger-muscle"). The word also shows the familiar petname diminutive -ga also seen in frogga "frog", picga "pig", stagga "stag", wicga "beetle, worm", among others."Dictionary of Etymology", Dictionary.com, s.v. dog, encyclopedia.com retrieved on 27 May 2009. The term dog may ultimately derive from the earliest layer of Proto-Indo-European vocabulary.
In 14th-century England, hound (from ) was the general word for all domestic canines, and dog referred to a subtype of hound, a group including the mastiff. It is believed this "dog" type was so common that it eventually became the prototype of the category "hound". By the 16th century, dog had become the general word, and hound had begun to refer only to types used for hunting. The word "hound" is ultimately derived from the Proto-Indo-European word *kwon-, "dog". This semantic shift may be compared to that in German, where the corresponding words Dogge and Hund kept their original meanings.
A male canine is referred to as a dog, while a female is called a bitch (Middle English bicche, from Old English bicce, ultimately from Old Norse bikkja). The father of a litter is called the sire, and the mother is called the dam. The process of birth is whelping, from the Old English word hwelp; the modern English word "whelp" is an alternate term for puppy. A litter refers to the multiple offspring at one birth, which are called puppies or pups, from the French poupée ("doll"), a term that has mostly replaced the older "whelp".
Taxonomy
The dog is classified as Canis lupus familiaris under the Biological Species Concept and Canis familiaris under the Evolutionary Species Concept.Wang, Xiaoming; Tedford, Richard H.; Dogs: Their Fossil Relatives and Evolutionary History. New York: Columbia University Press, 2008
In 1758, the taxonomist Linnaeus published in Systema Naturae a categorization of species which included the Canis species. Canis is a Latin word meaning dog, and the list included the dog-like carnivores: the domestic dog, wolves, foxes and jackals. The dog was classified as Canis familiaris, which means "Dog-family" or the family dog. On the next page he recorded the wolf as Canis lupus, which means "Dog-wolf". In 1978, a review aimed at reducing the number of recognized Canis species proposed that "Canis dingo is now generally regarded as a distinctive feral domestic dog. Canis familiaris is used for domestic dogs, although taxonomically it should probably be synonymous with Canis lupus." In 1982, the first edition of Mammal Species of the World listed Canis familiaris under Canis lupus with the comment: "Probably ancestor of and conspecific with the domestic dog, familiaris. Canis familiaris has page priority over Canis lupus, but both were published simultaneously in Linnaeus (1758), and Canis lupus has been universally used for this species",Page 245- "COMMENTS: "Probably ancestor of and conspecific with the domestic dog, familiaris. Canis familiaris has page priority over Canis lupus, but both were published simultaneously in Linnaeus (1758), and Canis lupus has been universally used for this species." which avoided classifying the wolf as the family dog. The dog is now listed among the many other Latin-named subspecies of Canis lupus as Canis lupus familiaris.
In 2003, the ICZN ruled in its Opinion 2027 that if wild animals and their domesticated derivatives are regarded as one species, then the scientific name of that species is the scientific name of the wild animal. In 2005, the third edition of Mammal Species of the World upheld Opinion 2027 under the name Canis lupus, with the note: "Includes the domestic dog as a subspecies, with the dingo provisionally separate - artificial variants created by domestication and selective breeding". However, Canis familiaris is sometimes used because of an ongoing nomenclature debate: wild and domestic animals are separately recognizable entities, the ICZN allows users a choice as to which name they can use, and a number of internationally recognized researchers prefer Canis familiaris.Includes Vila (1999) p71, Coppinger (2001) p281, Nowak (2003) p257, Crockford (2006) p100, Bjornenfeldt (2007) p21, Nolan (2009) p16, Druzhkova (2013) p2, and an internet search on Canis familiaris reveals many others.
Origin
The origin of the domestic dog is not clear. The domestic dog is a member of genus Canis (canines) that forms part of the wolf-like canids, and is the most widely abundant carnivore. The closest living relative of the dog is the gray wolf and there is no evidence of any other canine contributing to its genetic lineage. The dog and the extant gray wolf form two sister clades, with modern wolves not closely related to the wolves that were first domesticated. The archaeological record shows the first undisputed dog remains buried beside humans 14,700 years ago, with disputed remains occurring 36,000 years ago. These dates imply that the earliest dogs arose in the time of human hunter-gatherers and not agriculturists. The dog was the first domesticated species.
Where the genetic divergence of dog and wolf took place remains controversial, with the most plausible proposals spanning Western Europe, Central Asia, and East Asia. This has been made more complicated by the most recent proposal that fits the available evidence: an initial wolf population split into East and West Eurasian wolves, which were then domesticated independently into two distinct dog populations between 14,000 and 6,400 years ago before the ancestral wolf populations went extinct; the Western Eurasian dog population was subsequently partially and gradually replaced by East Asian dogs that were brought by humans at least 6,400 years ago.
Terminology
The term dog typically is applied both to the species (or subspecies) as a whole, and any adult male member of the same.
An adult female is a bitch. In some countries, especially in North America, dog is used instead due to the vulgar connotation of bitch.
An adult male capable of reproduction is a stud.
An adult female capable of reproduction is a brood bitch, or brood mother.
Immature males or females (that is, animals that are incapable of reproduction) are pups or puppies.
A group of pups from the same gestation period is a litter.
The father of a litter is a sire. It is possible for one litter to have multiple sires.
The mother of a litter is a dam.
A group of any three or more adults is a pack.
Biology
thumb|250px|Lateral view of skeleton.
Anatomy
Domestic dogs have been selectively bred for millennia for various behaviors, sensory capabilities, and physical attributes. Modern dog breeds show more variation in size, appearance, and behavior than any other domestic animal. Dogs are predators and scavengers, and like many other predatory mammals, the dog has powerful muscles, fused wrist bones, a cardiovascular system that supports both sprinting and endurance, and teeth for catching and tearing.
Size and weight
Dogs are highly variable in height and weight. The smallest known adult dog was a Yorkshire Terrier, that stood only at the shoulder, in length along the head-and-body, and weighed only . The largest known dog was an English Mastiff which weighed and was from the snout to the tail. The tallest dog is a Great Dane that stands at the shoulder.
Senses
The dog's senses include vision, hearing, smell, taste, touch and sensitivity to the earth's magnetic field. One study found a magnetoreception molecule in the eyes of dogs, suggesting that they can perceive the earth's magnetic field.Magnetoreception molecule found in the eyes of dogs and primates MPI Brain Research, 22 February 2016
See further: Dog anatomy-senses
Coat
thumb|Montage showing the coat variation of the dog.
The coats of domestic dogs are of two varieties: "double" being common with dogs (as well as wolves) originating from colder climates, made up of a coarse guard hair and a soft down hair, or "single", with the topcoat only.
Domestic dogs often display the remnants of countershading, a common natural camouflage pattern. A countershaded animal will have dark coloring on its upper surfaces and light coloring below, which reduces its general visibility. Thus, many breeds will have an occasional "blaze", stripe, or "star" of white fur on their chest or underside.
Tail
There are many different shapes for dog tails: straight, straight up, sickle, curled, or cork-screw. As with many canids, one of the primary functions of a dog's tail is to communicate their emotional state, which can be important in getting along with others. In some hunting dogs, however, the tail is traditionally docked to avoid injuries. In some breeds, such as the Braque du Bourbonnais, puppies can be born with a short tail or no tail at all.
Health
There are many household plants that are poisonous to dogs, including begonia, poinsettia and aloe vera.
Some breeds of dogs are prone to certain genetic ailments such as elbow and hip dysplasia, blindness, deafness, pulmonic stenosis, cleft palate, and trick knees. Two serious medical conditions particularly affecting dogs are pyometra, affecting unspayed females of all types and ages, and bloat, which affects the larger breeds or deep-chested dogs. Both of these are acute conditions, and can kill rapidly. Dogs are also susceptible to parasites such as fleas, ticks, and mites, as well as hookworms, tapeworms, roundworms, and heartworms.
A number of common human foods and household ingestibles are toxic to dogs, including chocolate solids (theobromine poisoning), onion and garlic (thiosulphate, sulfoxide or disulfide poisoning),Sources vary on which of these are considered the most significant toxic item. grapes and raisins, macadamia nuts, xylitol, as well as various plants and other potentially ingested materials. The nicotine in tobacco can also be dangerous; dogs can be exposed to it by scavenging through garbage or ashtrays or by eating cigars and cigarettes. Signs can include vomiting of large amounts (e.g., from eating cigar butts), diarrhea, abdominal pain, loss of coordination, collapse, or death. Dogs are highly susceptible to theobromine poisoning, typically from ingestion of chocolate. Theobromine is toxic to dogs because, although the dog's metabolism is capable of breaking down the chemical, the process is so slow that even small amounts of chocolate can be fatal, especially dark chocolate.
Dogs are also vulnerable to some of the same health conditions as humans, including diabetes, dental and heart disease, epilepsy, cancer, hypothyroidism, and arthritis.
Lifespan
thumb|left|A mixed-breed terrier. Mixed-breed dogs have been found to run faster and live longer than their pure-bred parents (See heterosis)
In 2013, a study found that mixed breeds live on average 1.2 years longer than pure breeds, and that increasing body-weight was negatively correlated with longevity (i.e. the heavier the dog the shorter its lifespan).
The typical lifespan of dogs varies widely among breeds, but for most the median longevity, the age at which half the dogs in a population have died and half are still alive, ranges from 10 to 13 years. Individual dogs may live well beyond the median of their breed.
The breed with the shortest lifespan (among breeds for which there is a questionnaire survey with a reasonable sample size) is the Dogue de Bordeaux, with a median longevity of about 5.2 years, but several breeds, including Miniature Bull Terriers, Bloodhounds, and Irish Wolfhounds are nearly as short-lived, with median longevities of 6 to 7 years.
The longest-lived breeds, including Toy Poodles, Japanese Spitz, Border Terriers, and Tibetan Spaniels, have median longevities of 14 to 15 years. The median longevity of mixed-breed dogs, averaged over all sizes, is one or more years longer than that of purebred dogs averaged over all breeds. The dog widely reported to be the longest-lived is "Bluey", who died in 1939 and was claimed to be 29.5 years old at the time of his death. On 5 December 2011, Pusuke, the world's oldest living dog recognized by the Guinness Book of World Records, died aged 26 years and 9 months.
Reproduction
thumb|Dog nursing newborn puppies
In domestic dogs, sexual maturity begins to happen around age six to twelve months for both males and females, although this can be delayed until up to two years old for some large breeds. This is the time at which female dogs will have their first estrous cycle. They will experience subsequent estrous cycles semiannually, during which the body prepares for pregnancy. At the peak of the cycle, females will come into estrus, being mentally and physically receptive to copulation. Because the ova survive and are capable of being fertilized for a week after ovulation, it is possible for a female to mate with more than one male.
Fertilization occurs 2–5 days after mating; the embryo attaches to the uterus 14–16 days later, and a heartbeat is detectable after 22–23 days.
Dogs bear their litters roughly 58 to 68 days after fertilization, with an average of 63 days, although the length of gestation can vary. An average litter consists of about six puppies, though this number may vary widely based on the breed of dog. In general, toy dogs produce from one to four puppies in each litter, while much larger breeds may average as many as twelve.
Some dog breeds have acquired traits through selective breeding that interfere with reproduction. Male French Bulldogs, for instance, are incapable of mounting the female. For many dogs of this breed, the female must be artificially inseminated in order to reproduce.
Neutering
right|thumb|A feral dog from Sri Lanka nursing her four puppies
Neutering refers to the sterilization of animals, usually by removal of the male's testicles or the female's ovaries and uterus, in order to eliminate the ability to procreate and reduce sex drive. Because of the overpopulation of dogs in some countries, many animal control agencies, such as the American Society for the Prevention of Cruelty to Animals (ASPCA), advise that dogs not intended for further breeding should be neutered, so that they do not have undesired puppies that may have to later be euthanized.
According to the Humane Society of the United States, 3–4 million dogs and cats are put down each year in the United States and many more are confined to cages in shelters because there are many more animals than there are homes. Spaying or castrating dogs helps keep overpopulation down. Local humane societies, SPCAs, and other animal protection organizations urge people to neuter their pets and to adopt animals from shelters instead of purchasing them.
Neutering reduces problems caused by hypersexuality, especially in male dogs. Spayed female dogs are less likely to develop some forms of cancer, affecting mammary glands, ovaries, and other reproductive organs. However, neutering increases the risk of urinary incontinence in female dogs, and prostate cancer in males, as well as osteosarcoma, hemangiosarcoma, cruciate ligament rupture, obesity, and diabetes mellitus in either sex.
Inbreeding depression
A common breeding practice for pet dogs is mating between close relatives (e.g. between half and full siblings). In a study of seven different French breeds of dogs (Bernese mountain dog, basset hound, Cairn terrier, Epagneul Breton, German Shepherd dog, Leonberger, and West Highland white terrier) it was found that inbreeding decreases litter size and survival. Another analysis of data on 42,855 dachshund litters found that as the inbreeding coefficient increased, litter size decreased and the percentage of stillborn puppies increased, thus indicating inbreeding depression.
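As background to the coefficient mentioned above (standard population genetics rather than a result of the cited studies), the inbreeding coefficient is usually Wright's coefficient F, the probability that the two alleles an individual carries at a locus are identical by descent. Summing over every common ancestor A shared by the sire and the dam,

F_X = \sum_{A} \left(\tfrac{1}{2}\right)^{n_1 + n_2 + 1} (1 + F_A)

where n_1 and n_2 are the numbers of generations from the sire and the dam back to ancestor A, and F_A is that ancestor's own inbreeding coefficient. A full-sibling mating with non-inbred grandparents gives F = 2 × (1/2)^3 = 0.25; a half-sibling mating gives F = 0.125.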
About 22% of boxer puppies die before reaching 7 weeks of age. Stillbirth is the most frequent cause of death, followed by infection. Mortality due to infection was found to increase significantly with increases in inbreeding. Inbreeding depression is considered to be due largely to the expression of homozygous deleterious recessive mutations. Outcrossing between unrelated individuals, including dogs of different breeds, results in the beneficial masking of deleterious recessive mutations in progeny.
Intelligence, behavior and communication
Intelligence
Dog intelligence is the ability of the dog to perceive information and retain it as knowledge to apply in solving problems. Dogs have been shown to learn by inference. A study with Rico, a border collie, showed that he knew the labels of over 200 different items. He inferred the names of novel items by exclusion learning and correctly retrieved those novel items immediately and also 4 weeks after the initial exposure. Dogs have advanced memory skills. A study documented the learning and memory capabilities of another border collie, "Chaser", who had learned the names of over 1,000 objects and could retrieve them by verbal command. Dogs are able to read and react appropriately to human body language such as gesturing and pointing, and to understand human voice commands. Dogs demonstrate a theory of mind by engaging in deception. An experimental study showed compelling evidence that Australian dingos can outperform domestic dogs in non-social problem-solving, indicating that domestic dogs may have lost much of their original problem-solving abilities once they joined humans. Another study indicated that after undergoing training to solve a simple manipulation task, dogs faced with an insoluble version of the same problem look at the human, while socialized wolves do not. Modern domestic dogs use humans to solve their problems for them.
Behavior
Dog behavior is the internally coordinated responses (actions or inactions) of the domestic dog (individuals or groups) to internal and/or external stimuli. As dogs are the oldest domesticated species, with estimates ranging from 9,000–30,000 years BCE, their minds inevitably have been shaped by millennia of contact with humans. As a result of this physical and social evolution, dogs, more than any other species, have acquired the ability to understand and communicate with humans, and they are uniquely attuned to our behaviors. Behavioral scientists have uncovered a surprising set of social-cognitive abilities in the otherwise humble domestic dog. These abilities are not possessed by the dog's closest canine relatives nor by other highly intelligent mammals such as great apes. Rather, these skills parallel some of the social-cognitive skills of human children.
Communication
Dog communication is about how dogs "speak" to each other, how they understand messages that humans send to them, and how humans can translate the ideas that dogs are trying to transmit.Coren, Stanley "How To Speak Dog: Mastering the Art of Dog-Human Communication" 2000 Simon & Schuster, New York. These communication behaviors include eye gaze, facial expression, vocalization, body posture (including movements of bodies and limbs) and gustatory communication (scents, pheromones and taste). Humans communicate with dogs by using vocalization, hand signals and body posture.
Compared to wolves
thumb|The Saarloos wolfdog carries more gray wolf DNA than any other dog breed
Physical characteristics
Despite their close genetic relationship and the ability to inter-breed, there are a number of diagnostic features that distinguish gray wolves from domestic dogs. Domesticated dogs are clearly distinguishable from wolves by starch gel electrophoresis of red blood cell acid phosphatase.Elliot, D. G., and M. Wong. 1972. Acid phosphatase, handy enzyme that separates the dog from the wolf. Acta Biologica et Medica Germanica 28:957 – 62 The tympanic bullae are large, convex and almost spherical in gray wolves, while the bullae of dogs are smaller, compressed and slightly crumpled. Compared to equally sized wolves, dogs tend to have 20% smaller skulls and 30% smaller brains. The teeth of gray wolves are also proportionately larger than those of dogs.Clutton-Brock, Juliet (1987). A Natural History of Domesticated Mammals. British Museum (Natural History), p. 24, ISBN 0-521-34697-5 Compared to wolves, dogs have a more domed forehead. The temporalis muscle that closes the jaws is more robust in wolves. Wolves do not have dewclaws on their back legs, unless there has been admixture with dogs that had them. Dogs lack a functioning pre-caudal gland, and most enter estrus twice yearly, unlike gray wolves, which only do so once a year. Dogs require fewer calories to function than wolves. The dog's limp ears may be the result of atrophy of the jaw muscles. The skin of domestic dogs tends to be thicker than that of wolves, with some Inuit tribes favoring dog skin for use as clothing due to its greater resistance to wear and tear in harsh weather. The paws of a dog are half the size of those of a wolf, and their tails tend to curl upwards, another trait not found in wolves.
Behavioral differences
Unlike other domestic species which were primarily selected for production-related traits, dogs were initially selected for their behaviors.Serpell J, Duffy D. Dog Breeds and Their Behavior. In: Domestic Dog Cognition and Behavior. Berlin, Heidelberg: Springer; 2014 In 2016, a study found that there were only 11 fixed genes that showed variation between wolves and dogs. These gene variations were unlikely to have been the result of natural evolution, and indicate selection on both morphology and behavior during dog domestication. These genes have been shown to affect the catecholamine synthesis pathway, with the majority of the genes affecting the fight-or-flight responseAlmada RC, Coimbra NC. Recruitment of striatonigral disinhibitory and nigrotectal inhibitory GABAergic pathways during the organization of defensive behavior by mice in a dangerous environment with the venomous snake Bothrops alternatus [ Reptilia , Viperidae ] Synapse 2015:n/a–n/a (i.e. selection for tameness), and emotional processing. Dogs generally show reduced fear and aggression compared to wolves.Coppinger R, Schneider R: Evolution of working dogs. The domestic dog: Its evolution, behaviour and interactions with people. Cambridge: Cambridge University press, 1995. Some of these genes have been associated with aggression in some dog breeds, indicating their importance in both the initial domestication and then later in breed formation.
Ecology
Population and habitat
The global dog population is estimated at 900 million and rising. Although it is said that the "dog is man's best friend", this applies mainly to the 17–24% of dogs that live as pets in developed countries; in the developing world, dogs are typically feral, village or community dogs, and pet dogs are uncommon. These dogs live as scavengers and have never been owned by humans; one study showed that their most common response when approached by strangers was to run away (52%) or respond with aggression (11%). Little is known about these dogs, or about the dogs in developed countries that are feral, stray or in shelters, yet the great majority of modern research on dog cognition has focused on pet dogs living in human homes.
Competitors
Being the most abundant carnivore, feral and free-ranging dogs have the greatest potential to compete with wolves. A review of studies on the competitive effects of dogs on sympatric carnivores did not mention any research on competition between dogs and wolves. Competition would favor the wolf, which is known to kill dogs; however, wolves tend to live in pairs or in small packs in areas where they are highly persecuted, putting them at a disadvantage when facing large groups of dogs.
Wolves kill dogs wherever the two canids occur. One survey claims that in Wisconsin in 1999 more compensation had been paid for dog losses than for livestock losses; in Wisconsin wolves will often kill hunting dogs, perhaps because the dogs are in the wolf's territory. Some wolf pairs have been reported to prey on dogs by having one wolf lure the dog out into heavy brush where the second animal waits in ambush. In some instances, wolves have displayed an uncharacteristic fearlessness of humans and buildings when attacking dogs, to the extent that they have had to be beaten off or killed. Although the number of dogs killed each year is relatively low, it induces a fear of wolves entering villages and farmyards to take dogs. In many cultures there are strong social and emotional bonds between humans and their dogs, which can be seen as family members or working team members, and the loss of a dog can lead to strong emotional responses with demands for more liberal wolf hunting regulations.
Coyotes and big cats have also been known to attack dogs. Leopards in particular are known to have a predilection for dogs, and have been recorded to kill and consume them regardless of the dog's size or ferocity. Tigers in Manchuria, Indochina, Indonesia, and Malaysia are reputed to kill dogs with the same vigor as leopards. Striped hyenas are major predators of village dogs in Turkmenistan, India, and the Caucasus.
The spiked collar common on working and pet dogs is no mere ornament: it originated as a way to protect a dog's vulnerable neck from wolves, but it also protects dogs from attacks by other dogs.https://www.quora.com/Why-do-some-dog-collars-have-spikes
Diet
thumb|right|upright|Golden Retriever gnawing a pig's foot
Despite their descent from wolves and classification as Carnivora, dogs are variously described in scholarly and other writings as carnivores or omnivores. Unlike obligate carnivores, dogs can adapt to a wide-ranging diet, and are not dependent on meat-specific protein nor a very high level of protein in order to fulfill their basic dietary requirements. Dogs will healthily digest a variety of foods, including vegetables and grains, and can consume a large proportion of these in their diet, however all-meat diets are not recommended for dogs due to their lack of calcium and iron. Comparing dogs and wolves, dogs have adaptations in genes involved in starch digestion that contribute to an increased ability to thrive on a starch-rich diet.
Breeds
thumb|Cavalier King Charles Spaniels demonstrate variation within breed
Most breeds of dog are at most a few hundred years old, having been artificially selected for particular morphologies and behaviors by people for specific functional roles. Through this selective breeding, the dog has developed into hundreds of varied breeds, and shows more behavioral and morphological variation than any other land mammal. For example, height measured to the withers ranges from in the Chihuahua to about in the Irish Wolfhound; color varies from white through grays (usually called "blue") to black, and browns from light (tan) to dark ("red" or "chocolate") in a wide variation of patterns; coats can be short or long, coarse-haired to wool-like, straight, curly, or smooth. It is common for most breeds to shed this coat.
While all dogs are genetically very similar, natural selection and selective breeding have reinforced certain characteristics in certain populations of dogs, giving rise to dog types and dog breeds. Dog types are broad categories based on function, genetics, or characteristics.
Dog breeds are groups of animals that possess a set of inherited characteristics that distinguishes them from other animals within the same species. Modern dog breeds are non-scientific classifications of dogs kept by modern kennel clubs.
Purebred dogs of one breed are genetically distinguishable from purebred dogs of other breeds, but the means by which kennel clubs classify dogs is unsystematic. DNA microsatellite analyses of 85 dog breeds showed they fell into four major types of dogs that were statistically distinct. These include the "old world dogs" (e.g., Malamute and Shar Pei), "Mastiff"-type (e.g., English Mastiff), "herding"-type (e.g., Border Collie), and "all others" (also called "modern"- or "hunting"-type).
Roles with humans
thumb|upright|Gunnar Kaasen and Balto, the lead dog on the last relay team of the 1925 serum run to Nome.
Domestic dogs inherited complex behaviors, such as bite inhibition, from their wolf ancestors, which would have been pack hunters with complex body language. These sophisticated forms of social cognition and communication may account for their trainability, playfulness, and ability to fit into human households and social situations, and these attributes have given dogs a relationship with humans that has enabled them to become one of the most successful species on the planet today.
The dogs' value to early human hunter-gatherers led to them quickly becoming ubiquitous across world cultures. Dogs perform many roles for people, such as hunting, herding, pulling loads, protection, assisting police and military, companionship, and, more recently, aiding handicapped individuals. This influence on human society has given them the nickname "man's best friend" in the Western world. In some cultures, however, dogs are also a source of meat.
Early roles
Wolves, and their dog descendants, would have derived significant benefits from living in human camps—more safety, more reliable food, lesser caloric needs, and more chance to breed. They would have benefited from humans' upright gait that gives them larger range over which to see potential predators and prey, as well as color vision that, at least by day, gives humans better visual discrimination. Camp dogs would also have benefited from human tool use, as in bringing down larger prey and controlling fire for a range of purposes.
The dogs of Thibet are twice the size of those seen in India, with large heads and hairy bodies. They are powerful animals, and are said to be able to kill a tiger. During the day they are kept chained up, and are let loose at night to guard their masters' house.Travels in Central Asia by Meer Izzut-oollah in the Years 1812-13. Translated by Captain Henderson. Calcutta, 1872, p. 15.
Humans would also have derived enormous benefit from the dogs associated with their camps. For instance, dogs would have improved sanitation by cleaning up food scraps. Dogs may have provided warmth, as referred to in the Australian Aboriginal expression "three dog night" (an exceptionally cold night), and they would have alerted the camp to the presence of predators or strangers, using their acute hearing to provide an early warning.
Anthropologists believe the most significant benefit would have been the use of dogs' robust sense of smell to assist with the hunt. The relationship between the presence of a dog and success in the hunt is often mentioned as a primary reason for the domestication of the wolf, and a 2004 study of hunter groups with and without a dog gives quantitative support to the hypothesis that the benefits of cooperative hunting was an important factor in wolf domestication.
The cohabitation of dogs and humans would have greatly improved the chances of survival for early human groups, and the domestication of dogs may have been one of the key forces that led to human success.
Emigrants from Siberia that walked across the Bering land bridge into North America may have had dogs in their company, and one writerA History of Dogs in the Early Americas, Marion Schwartz, 1998, 260 p., ISBN 978-0-300-07519-9, Yale University Press suggests that the use of sled dogs may have been critical to the success of the waves that entered North America roughly 12,000 years ago, although the earliest archaeological evidence of dog-like canids in North America dates from about 9,400 years ago. Dogs were an important part of life for the Athabascan population in North America, and were their only domesticated animal. Dogs also carried much of the load in the migration of the Apache and Navajo tribes 1,400 years ago. Use of dogs as pack animals in these cultures often persisted after the introduction of the horse to North America. p.12
As pets
thumb|Siberian Huskypack animal
thumb|alt=Couple sitting on the lawn with a pet British Bulldog| A British Bulldog shares a day at the park.
thumb|Green velvet dog collar, dates from 1670 to 1690.
It is estimated that three-quarters of the world's dog population lives in the developing world as feral, village, or community dogs, with pet dogs uncommon.
"The most widespread form of interspecies bonding occurs between humans and dogs" and the keeping of dogs as companions, particularly by elites, has a long history. (As a possible example, at the Natufian culture site of Ain Mallaha in Israel, dated to 12,000 BC, the remains of an elderly human and a four-to-five-month-old puppy were found buried together). However, pet dog populations grew significantly after World War II as suburbanization increased. In the 1950s and 1960s, dogs were kept outside more often than they tend to be today (using the expression "in the doghouse" to describe exclusion from the group signifies the distance between the doghouse and the home) and were still primarily functional, acting as a guard, children's playmate, or walking companion. From the 1980s, there have been changes in the role of the pet dog, such as the increased role of dogs in the emotional support of their human guardians. People and dogs have become increasingly integrated and implicated in each other's lives, to the point where pet dogs actively shape the way a family and home are experienced.
There have been two major trends in the changing status of pet dogs. The first has been the 'commodification' of the dog, shaping it to conform to human expectations of personality and behaviour. The second has been the broadening of the concept of the family and the home to include dogs-as-dogs within everyday routines and practices.
There are a vast range of commodity forms available to transform a pet dog into an ideal companion. The list of goods, services and places available is enormous: from dog perfumes, couture, furniture and housing, to dog groomers, therapists, trainers and caretakers, dog cafes, spas, parks and beaches, and dog hotels, airlines and cemeteries. While dog training as an organized activity can be traced back to the 18th century, in the last decades of the 20th century it became a high-profile issue as many normal dog behaviors such as barking, jumping up, digging, rolling in dung, fighting, and urine marking (which dogs do to establish territory through scent), became increasingly incompatible with the new role of a pet dog. Dog training books, classes and television programs proliferated as the process of commodifying the pet dog continued.
The majority of contemporary people with dogs describe their pet as part of the family, although some ambivalence about the relationship is evident in the popular reconceptualization of the dog–human family as a pack. A dominance model of dog–human relationships has been promoted by some dog trainers, such as on the television program Dog Whisperer. However it has been disputed that "trying to achieve status" is characteristic of dog–human interactions. Pet dogs play an active role in family life; for example, a study of conversations in dog–human families showed how family members use the dog as a resource, talking to the dog, or talking through the dog, to mediate their interactions with each other.
Increasingly, human family members are engaging in activities centered on the perceived needs and interests of the dog, or in which the dog is an integral partner, such as dog dancing and dog yoga.
According to statistics published by the American Pet Products Manufacturers Association in the National Pet Owner Survey in 2009–2010, it is estimated there are 77.5 million people with pet dogs in the United States. The same survey shows nearly 40% of American households own at least one dog, of which 67% own just one dog, 25% two dogs and nearly 9% more than two dogs. There does not seem to be any gender preference among dogs as pets, as the statistical data reveal an equal number of female and male dog pets. Yet, although several programs are ongoing to promote pet adoption, less than a fifth of the owned dogs come from a shelter.
A study using magnetic resonance imaging (MRI) to compare humans and dogs showed that dogs have the same response to voices and use the same parts of the brain as humans do. This gives dogs the ability to recognize emotional human sounds, making them friendly social pets to humans.
Work
Dogs have lived and worked with humans in so many roles that they have earned the unique nickname, "man's best friend", a phrase used in other languages as well. They have been bred for herding livestock, hunting (e.g. pointers and hounds), rodent control,Dewey, T. and S. Bhagat. 2002. "Canis lupus familiaris ", Animal Diversity Web. Retrieved 6 January 2009. guarding, helping fishermen with nets, detection dogs, and pulling loads, in addition to their roles as companions. In 1957, a husky-terrier mix named Laika became the first animal to orbit the Earth.
Service dogs such as guide dogs, utility dogs, assistance dogs, hearing dogs, and psychological therapy dogs provide assistance to individuals with physical or mental disabilities. Some dogs owned by epileptics have been shown to alert their handler when the handler shows signs of an impending seizure, sometimes well in advance of onset, allowing the guardian to seek safety, medication, or medical care.
Dogs that assist humans in such activities are usually called working dogs.
Sports and shows
thumb|Dogs come in a range of sizes.
People often enter their dogs in competitions such as breed-conformation shows or sports, including racing, sledding and agility competitions.
In conformation shows, also referred to as breed shows, a judge familiar with the specific dog breed evaluates individual purebred dogs for conformity with their established breed type as described in the breed standard. As the breed standard only deals with the externally observable qualities of the dog (such as appearance, movement, and temperament), separately tested qualities (such as ability or health) are not part of the judging in conformation shows.
As food
thumb|Gaegogi (dog meat) stew being served in a Korean restaurant
Dog meat is consumed in some East Asian countries, including China, Korea, and Vietnam (particularly southern Vietnam), a practice that dates back to antiquity. It is estimated that 13–16 million dogs are killed and consumed in Asia every year. Other cultures, such as those of Polynesia and pre-Columbian Mexico, also consumed dog meat in their history. However, Western, South Asian, African, and Middle Eastern cultures, in general, regard consumption of dog meat as taboo. In some places, however, such as in rural areas of Poland, dog fat is believed to have medicinal properties, being good for the lungs for instance. Dog meat is also consumed in some parts of Switzerland. Proponents of eating dog meat have argued that placing a distinction between livestock and dogs is western hypocrisy, and that there is no difference with eating the meat of different animals.
In Korea, the primary dog breed raised for meat, the nureongi (누렁이), differs from those breeds raised for pets that Koreans may keep in their homes.Pettid, Michael J., Korean Cuisine: An Illustrated History, London: Reaktion Books Ltd., 2008, 25. ISBN 1-86189-348-5
The most popular Korean dog dish is gaejang-guk (also called bosintang), a spicy stew meant to balance the body's heat during the summer months; followers of the custom claim this is done to ensure good health by balancing one's gi, or vital energy of the body. A 19th century version of gaejang-guk explains that the dish is prepared by boiling dog meat with scallions and chili powder. Variations of the dish contain chicken and bamboo shoots. While the dishes are still popular in Korea with a segment of the population, dog is not as widely consumed as beef, chicken, and pork.
Health risks to humans
In 2005, the WHO reported that 55,000 people died in Asia and Africa from rabies, a disease for which dogs are the most important vector.
Citing a 2008 study, the U.S. Centers for Disease Control and Prevention estimated in 2015 that 4.5 million people in the USA are bitten by dogs each year. A 2015 study estimated that 1.8% of the U.S. population is bitten each year. In the 1980s and 1990s the US averaged 17 fatalities per year, while in the 2000s this increased to 26. 77% of dog bites are from the pet of family or friends, and 50% of attacks occur on the property of the dog's legal owner.
A Colorado study found bites in children were less severe than bites in adults. The incidence of dog bites in the US is 12.9 per 10,000 inhabitants, but for boys aged 5 to 9, the incidence rate is 60.7 per 10,000. Moreover, children have a much higher chance to be bitten in the face or neck. Sharp claws with powerful muscles behind them can lacerate flesh in a scratch that can lead to serious infections.
In the UK between 2003 and 2004, there were 5,868 dog attacks on humans, resulting in 5,770 working days lost in sick leave.
In the United States, cats and dogs are a factor in more than 86,000 falls each year. It has been estimated around 2% of dog-related injuries treated in UK hospitals are domestic accidents. The same study found that while dog involvement in road traffic accidents was difficult to quantify, dog-associated road accidents involving injury more commonly involved two-wheeled vehicles.
Toxocara canis (dog roundworm) eggs in dog feces can cause toxocariasis. In the United States, about 10,000 cases of Toxocara infection are reported in humans each year, and almost 14% of the U.S. population is infected. In Great Britain, 24% of soil samples taken from public parks contained T. canis eggs. Untreated toxocariasis can cause retinal damage and decreased vision. Dog feces can also contain hookworms that cause cutaneous larva migrans in humans.
Health benefits for humans
thumb|alt=Small dog laying between the hands|A human cuddles a Doberman puppy.
The scientific evidence is mixed as to whether the companionship of a dog can enhance human physical health and psychological wellbeing. Studies suggesting that there are benefits to physical health and psychological wellbeing have been criticised for being poorly controlled, and one study found that "[t]he health of elderly people is related to their health habits and social supports but not to their ownership of, or attachment to, a companion animal." Earlier studies have shown that people who keep pet dogs or cats exhibit better mental and physical health than those who do not, making fewer visits to the doctor and being less likely to be on medication than non-guardians.
A 2005 paper states "recent research has failed to support earlier findings that pet ownership is associated with a reduced risk of cardiovascular disease, a reduced use of general practitioner services, or any psychological or physical benefits on health for community dwelling older people. Research has, however, pointed to significantly less absenteeism from school through sickness among children who live with pets." In one study, new guardians reported a highly significant reduction in minor health problems during the first month following pet acquisition, and this effect was sustained in those with dogs through to the end of the study.
In addition, people with pet dogs took considerably more physical exercise than those with cats and those without pets. The results provide evidence that keeping pets may have positive effects on human health and behaviour, and that for guardians of dogs these effects are relatively long-term. Pet guardianship has also been associated with increased coronary artery disease survival, with human guardians being significantly less likely to die within one year of an acute myocardial infarction than those who did not own dogs.
The health benefits of dogs can result from contact with dogs in general, and not solely from having dogs as pets. For example, when in the presence of a pet dog, people show reductions in cardiovascular, behavioral, and psychological indicators of anxiety. Other health benefits are gained from exposure to immune-stimulating microorganisms, which, according to the hygiene hypothesis, can protect against allergies and autoimmune diseases. The benefits of contact with a dog also include social support, as dogs are able to not only provide companionship and social support themselves, but also to act as facilitators of social interactions between humans. One study indicated that wheelchair users experience more positive social interactions with strangers when they are accompanied by a dog than when they are not. In 2015, a study found that pet owners were significantly more likely to get to know people in their neighborhood than non-pet owners.
The practice of using dogs and other animals as a part of therapy dates back to the late 18th century, when animals were introduced into mental institutions to help socialize patients with mental disorders.Kruger, K.A. & Serpell, J.A. (2006). Animal-assisted interventions in mental health: Definitions and theoretical foundations, In Fine, A.H. (Ed.), Handbook on animal-assisted therapy: Theoretical foundations and guidelines for practice. San Diego, CA, Academic Press: 21–38. ISBN 0-12-369484-1 Animal-assisted intervention research has shown that animal-assisted therapy with a dog can increase social behaviors, such as smiling and laughing, among people with Alzheimer's disease. One study demonstrated that children with ADHD and conduct disorders who participated in an education program with dogs and other animals showed increased attendance, increased knowledge and skill objectives, and decreased antisocial and violent behavior compared to those who were not in an animal-assisted program.
Medical detection dogs
Medical detection dogs are capable of detecting diseases by sniffing a person directly or samples of urine or other specimens. Dogs can detect odours at concentrations as low as one part per trillion, as their brain's olfactory cortex is (relative to total brain size) 40 times larger than that of humans. Dogs may have as many as 300 million odour receptors in their nose, while humans may have only 5 million. Each dog is trained to detect a single disease, from blood glucose levels indicative of diabetes to cancer. The process of training a cancer detection dog requires six months. A Labrador Retriever called Daisy has detected 551 cancer patients with an accuracy of 93 percent and received the Blue Cross (for pets) Medal for her life-saving skills.
Shelters
Every year, between 6 and 8 million dogs and cats enter US animal shelters.Animals abandoned as recession hits home . TheStar.com. 22 December 2008. The Humane Society of the United States (HSUS) estimates that approximately 3 to 4 million of those dogs and cats are euthanized yearly in the United States.HSUS Pet Overpopulation Estimates The Humane Society of the United States However, the percentage of dogs in US animal shelters that are eventually adopted and removed from the shelters by their new legal owners has increased since the mid-1990s from around 25% to a 2012 average of 40% among reporting shelters (with many shelters reporting 60–75%).
Cultural depictions
Dogs have been viewed and represented in different manners by different cultures and religions, over the course of history.
Mythology
In mythology, dogs often serve as pets or as watchdogs.
In Greek mythology, Cerberus is a three-headed watchdog who guards the gates of Hades. In Norse mythology, a bloody, four-eyed dog called Garmr guards Helheim. In Persian mythology, two four-eyed dogs guard the Chinvat Bridge. In Philippine mythology, Kimat, the pet of Tadaklan, god of thunder, is responsible for lightning. In Welsh mythology, Annwn is guarded by Cŵn Annwn.
In Hindu mythology, Yama, the god of death, owns two watchdogs that have four eyes. They are said to watch over the gates of Naraka. The hunter god Muthappan from the North Malabar region of Kerala has a hunting dog as his mount. Dogs are found in and out of the Muthappan Temple and offerings at the shrine take the form of bronze dog figurines.
The role of the dog in Chinese mythology includes a position as one of the twelve animals which cyclically represent years (the zodiacal dog).
Religion and culture
In Homer's epic poem the Odyssey, when the disguised Odysseus returns home after 20 years he is recognized only by his faithful dog, Argos, who has been waiting for his return.
In Islam, dogs are considered unclean because they are viewed as scavengers. In 2015 city councillor Hasan Küçük of The Hague called for dog ownership to be made illegal in that city. Islamic activists in Lérida, Spain, lobbied for dogs to be kept out of Muslim neighborhoods, saying their presence violated Muslims' religious freedom. In Britain, police sniffer dogs are used carefully and are not permitted to contact passengers, only their luggage. They are required to wear leather dog booties when searching mosques or Muslim homes.
Jewish law does not prohibit keeping dogs and other pets. Jewish law requires Jews to feed dogs (and other animals that they own) before themselves, and make arrangements for feeding them before obtaining them. In Christianity, dogs represent faithfulness.
In China, Korea, and Japan, dogs are viewed as kind protectors.
Art
Cultural depictions of dogs in art extend back thousands of years to when dogs were portrayed on the walls of caves. Representations of dogs became more elaborate as individual breeds evolved and the relationships between human and canine developed. Hunting scenes were popular in the Middle Ages and the Renaissance. Dogs were depicted to symbolize guidance, protection, loyalty, fidelity, faithfulness, watchfulness, and love.
thumb|right|Decameron hunting scene, Davide Ghirlandaio, c.1485 Brooklyn Museum
thumb|left|Figure of a Recumbent Dog, China, 4th century, Brooklyn Museum
See also
Aging in dogs
Toy Group
Animal track
Argos (dog)
Dog in Chinese mythology
Dogs in art
Dog odor
Dognapping
Ethnocynology
Hachikō–a notable example of dog loyalty
Lost pet services
Mountain dog
Wolfdog
Lists
List of fictional dogs
List of individual dogs
References
Bibliography
Further reading
External links
Biodiversity Heritage Library bibliography for Canis lupus familiaris
Fédération Cynologique Internationale (FCI) – World Canine Organisation
Dogs in the Ancient World, an article on the history of dogs
View the dog genome on Ensembl
Category:Cosmopolitan vertebrates
Category:Scavengers
Category:Vertebrate animal models
Category:Extant Late Pleistocene first appearances
Printed circuit board
thumb|Part of a 1983 Sinclair ZX Spectrum computer board; a populated PCB, showing the conductive traces, vias (the through-hole paths to the other surface), and some mounted electronic components
A printed circuit board (PCB) mechanically supports and electrically connects electronic components using conductive tracks, pads and other features etched from copper sheets laminated onto a non-conductive substrate. Components – capacitors, resistors or active devices – are generally soldered on the PCB. Advanced PCBs may contain components embedded in the substrate.
PCBs can be single sided (one copper layer), double sided (two copper layers) or multi-layer (outer and inner layers). Conductors on different layers are connected with vias. Multi-layer PCBs allow for much higher component density.
FR-4 glass epoxy is the primary insulating substrate. A basic building block of the PCB is an FR-4 panel with a thin layer of copper foil laminated to one or both sides. In multi-layer boards multiple layers of material are laminated together.
Printed circuit boards are used in all but the simplest electronic products. Alternatives to PCBs include wire wrap and point-to-point construction. PCBs require the additional design effort to lay out the circuit, but manufacturing and assembly can be automated. Manufacturing circuits with PCBs is cheaper and faster than with other wiring methods as components are mounted and wired with one single part.
A minimal PCB with a single component used for easier modeling is called a breakout board.
When the board has no embedded components it is more correctly called a printed wiring board (PWB) or etched wiring board. However, the term printed wiring board has fallen into disuse. A PCB populated with electronic components is called a printed circuit assembly (PCA), printed circuit board assembly or PCB assembly (PCBA). The IPC preferred term for assembled boards is circuit card assembly (CCA),IPC-14.38 and for assembled backplanes it is backplane assemblies. The term PCB is used informally both for bare and assembled boards.
The world market for bare PCBs exceeded $60.2 billion in 2014.
Design
thumb|A board designed in 1967; the sweeping curves in the traces are evidence of freehand design using adhesive tape
Initially PCBs were designed manually by creating a photomask on a clear mylar sheet, usually at two or four times the true size. Starting from the schematic diagram the component pin pads were laid out on the mylar and then traces were routed to connect the pads. Rub-on dry transfers of common component footprints increased efficiency. Traces were made with self-adhesive tape. Pre-printed non-reproducing grids on the mylar assisted in layout. To fabricate the board, the finished photomask was photolithographically reproduced onto a photoresist coating on the blank copper-clad boards.
Modern PCBs are designed with dedicated layout software, generally in the following steps:http://www.cs.berkeley.edu/~prabal/teaching/cs194-05-s08/cs194-designflow.ppt Printed Circuit Board Design Flow Methodology
Schematic capture through an electronic design automation (EDA) tool.
Card dimensions and template are decided based on the required circuitry and the case of the PCB.
The positions of the components and heat sinks are determined.
Layer stack of the PCB is decided, with one to tens of layers depending on complexity. Ground and power planes are decided. A power plane is the counterpart to a ground plane and behaves as an AC signal ground while providing DC power to the circuits mounted on the PCB. Signal interconnections are traced on signal planes. Signal planes can be on the outer as well as inner layers. For optimal EMI performance high frequency signals are routed in internal layers between power or ground planes.See appendix D of IPC-2251
Line impedance is determined using the dielectric layer thickness, routing copper thickness and trace width. Trace separation is also taken into account in the case of differential signals. Microstrip, stripline or dual stripline can be used to route signals; a worked impedance estimate is sketched after this list.
Components are placed. Thermal considerations and geometry are taken into account. Vias and lands are marked.
Signal traces are routed. Electronic design automation tools usually create clearances and connections in power and ground planes automatically.
Gerber files are generated for manufacturing.
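The impedance step noted in the list above can be estimated with published closed-form approximations before committing to a stack-up. The sketch below is a minimal example using the widely quoted IPC-2141-style surface-microstrip formula; the geometry and dielectric constant are illustrative assumptions, not values from any particular design tool.

```python
import math

def microstrip_impedance_ohms(h_mm, w_mm, t_mm, er):
    """Approximate characteristic impedance of a surface microstrip trace.

    Classic IPC-2141-style closed-form approximation:
        Z0 = 87 / sqrt(er + 1.41) * ln(5.98*h / (0.8*w + t))
    reasonable roughly for 0.1 < w/h < 2.0 and 1 < er < 15.
    """
    return 87.0 / math.sqrt(er + 1.41) * math.log(5.98 * h_mm / (0.8 * w_mm + t_mm))

# Illustrative geometry: 0.25 mm wide trace in 35 um (1 oz) copper
# over 0.2 mm of FR-4-like dielectric with er ~ 4.3.
print(f"{microstrip_impedance_ohms(h_mm=0.2, w_mm=0.25, t_mm=0.035, er=4.3):.1f} ohm")
```

Modern EDA tools rely on field solvers rather than such closed-form formulas, so an estimate like this serves only as a sanity check on the chosen stack-up.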
Manufacturing
PCB manufacturing consists of many steps.
PCB CAM
Manufacturing starts from the PCB fabrication data generated by computer aided design, such as Gerber layer images, Gerber or Excellon drill files, IPC-D-356 netlist and component information. The Gerber or Excellon files in the fabrication data are never used directly on the manufacturing equipment but always read into the CAM (Computer Aided Manufacturing) software. CAM performs the following functions:
Input of the fabrication data.
Verification of the data; optionally, design for manufacturability (DFM) checks
Compensation for deviations in the manufacturing processes (e.g. scaling to compensate for distortions during lamination)
Panelization
Output of the digital tools (copper patterns, solder resist image, legend image, drill files, automated optical inspection data, electrical test files,...)
Panelization
Panelization is a procedure whereby a number of PCBs are grouped for manufacturing onto a larger board - the panel. Usually a panel consists of a single design but sometimes multiple designs are mixed on a single panel. There are two types of panels: assembly panels - often called arrays - and bare board manufacturing panels. Assemblers often mount components on panels rather than single PCBs because this is efficient. Bare board manufacturers always use panels, not only for efficiency, but because of the requirements of the plating process. Thus a manufacturing panel can consist of a grouping of individual PCBs or of arrays, depending on what must be delivered.
The panel is eventually broken apart into individual PCBs; this is called depaneling. Separating the individual PCBs is frequently aided by drilling or routing perforations along the boundaries of the individual circuits, much like a sheet of postage stamps. Another method, which takes less space, is to cut V-shaped grooves across the full dimension of the panel. The individual PCBs can then be broken apart along this line of weakness.Kraig Mitzner, Complete PCB Design Using OrCad Capture and Layout, pages 443–446, Newnes, 2011 ISBN 0080549209. Today depaneling is often done by lasers which cut the board with no contact. Laser panelization reduces stress on the fragile circuits.
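As a rough illustration of the panelization trade-off, the sketch below counts how many copies of a board fit on a manufacturing panel once a tooling border and a routing or scoring gap between copies are reserved; all dimensions are assumed example values, not figures from any standard.

```python
def boards_per_panel(panel_w, panel_h, board_w, board_h, spacing=2.0, border=10.0):
    """Count how many whole boards fit on a manufacturing panel (dimensions in mm).

    'spacing' is the routing/scoring gap between copies and 'border' is the rim
    reserved for tooling holes and fiducials; both are illustrative assumptions.
    """
    usable_w = panel_w - 2 * border
    usable_h = panel_h - 2 * border
    # Each copy occupies its own size plus one gap, except along the last row/column.
    cols = int((usable_w + spacing) // (board_w + spacing))
    rows = int((usable_h + spacing) // (board_h + spacing))
    return max(cols, 0) * max(rows, 0)

# Example: 50 mm x 40 mm boards on a 457 mm x 610 mm (18 in x 24 in) panel.
print(boards_per_panel(457, 610, 50, 40))  # -> 112 boards per panel
```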
Copper patterning
The first step is to replicate the pattern in the fabricator's CAM system on a protective mask on the copper foil PCB layers. Subsequent etching removes the unwanted copper. (Alternatively, a conductive ink can be ink-jetted on a blank (non-conductive) board. This technique is also used in the manufacture of hybrid circuits.)
Silk screen printing uses etch-resistant inks to create the protective mask.
Photoengraving uses a photomask and developer to selectively remove a UV-sensitive photoresist coating and thus create a photoresist mask. Direct imaging techniques are sometimes used for high-resolution requirements. Experiments were made with thermal resist.
PCB milling uses a two or three-axis mechanical milling system to mill away the copper foil from the substrate. A PCB milling machine (referred to as a 'PCB Prototyper') operates in a similar way to a plotter, receiving commands from the host software that control the position of the milling head in the x, y, and (if relevant) z axis.
Laser resist ablation: Spray black paint onto the copper-clad laminate and place it into a CNC laser plotter. The laser raster-scans the PCB and ablates (vaporizes) the paint where no resist is wanted. (Note: laser copper ablation is rarely used and is considered experimental.)
The method chosen depends on the number of boards to be produced and the required resolution.
Large volume
Silk screen printing – Used for PCBs with bigger features
Photoengraving – Used when finer features are required
Small volume
Print onto transparent film and use as photo mask along with photo-sensitized boards (i.e., pre-sensitized boards), then etch. (Alternatively, use a film photoplotter)
Laser resist ablation
PCB milling
Hobbyist
Laser-printed resist: Laser-print onto toner transfer paper, heat-transfer with an iron or modified laminator onto bare laminate, soak in water bath, touch up with a marker, then etch.
Vinyl film and resist, non-washable markers, and other methods can also be used. These are labor-intensive and suitable only for single boards.
Subtractive, additive and semi-additive processes
thumb|upright=1.5|The two processing methods used to produce a double-sided PWB with plated-through holes
Subtractive methods remove copper from an entirely copper-coated board to leave only the desired copper pattern. In additive methods the pattern is electroplated onto a bare substrate using a complex process. The advantage of the additive method is that less material is needed and less waste is produced. In the full additive process the bare laminate is covered with a photosensitive film which is imaged (exposed to light through a mask and then developed, which removes the unexposed film). The exposed areas are sensitized in a chemical bath, usually containing palladium and similar to that used for through hole plating, which makes the exposed area capable of bonding metal ions. The laminate is then plated with copper in the sensitized areas. When the mask is stripped, the PCB is finished.
Semi-additive is the most common process: The unpatterned board has a thin layer of copper already on it. A reverse mask is then applied. (Unlike a subtractive process mask, this mask exposes those parts of the substrate that will eventually become the traces.) Additional copper is then plated onto the board in the unmasked areas; copper may be plated to any desired weight. Tin-lead or other surface platings are then applied. The mask is stripped away and a brief etching step removes the now-exposed bare original copper laminate from the board, isolating the individual traces. Some single-sided boards which have plated-through holes are made in this way. General Electric made consumer radio sets in the late 1960s using additive boards.
The (semi-)additive process is commonly used for multi-layer boards as it facilitates the plating-through of the holes to produce conductive vias in the circuit board.
Chemical etching
Chemical etching is usually done with ammonium persulfate or ferric chloride. For PTH (plated-through holes), additional steps of electroless deposition are done after the holes are drilled, then copper is electroplated to build up the thickness, the boards are screened, and plated with tin/lead. The tin/lead becomes the resist leaving the bare copper to be etched away.
The simplest method, used for small-scale production and often by hobbyists, is immersion etching, in which the board is submerged in etching solution such as ferric chloride. Compared with methods used for mass production, the etching time is long. Heat and agitation can be applied to the bath to speed the etching rate. In bubble etching, air is passed through the etchant bath to agitate the solution and speed up etching. Splash etching uses a motor-driven paddle to splash boards with etchant; the process has become commercially obsolete since it is not as fast as spray etching. In spray etching, the etchant solution is distributed over the boards by nozzles, and recirculated by pumps. Adjustment of the nozzle pattern, flow rate, temperature, and etchant composition gives predictable control of etching rates and high production rates.R. S. Khandpur,Printed circuit boards: design, fabrication, assembly and testing, Tata-McGraw Hill, 2005 ISBN 0-07-058814-7, pages 373–378
As more copper is consumed from the boards, the etchant becomes saturated and less effective; different etchants have different capacities for copper, with some as high as 150 grams of copper per litre of solution. In commercial use, etchants can be regenerated to restore their activity, and the dissolved copper recovered and sold. Small-scale etching requires attention to disposal of used etchant, which is corrosive and toxic due to its metal content.
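The capacity figure can be turned into a rough batch estimate: the sketch below computes the mass of copper one board releases into the bath and how many such boards a litre of etchant could handle at the 150 g/L figure quoted above. The board size, foil thickness, and fraction of copper removed are assumptions for illustration.

```python
COPPER_DENSITY_G_CM3 = 8.96

def copper_removed_g(board_w_cm, board_h_cm, foil_um, removed_fraction, sides=2):
    """Mass of copper etched off one board, in grams."""
    area_cm2 = board_w_cm * board_h_cm * sides
    thickness_cm = foil_um * 1e-4
    return area_cm2 * thickness_cm * COPPER_DENSITY_G_CM3 * removed_fraction

# Assumptions: 10 cm x 16 cm double-sided board, 35 um (1 oz) foil, 70% of the copper removed.
per_board = copper_removed_g(10, 16, 35, 0.70)
capacity_g_per_litre = 150.0  # upper capacity figure quoted in the text
print(f"{per_board:.1f} g of copper etched per board")
print(f"roughly {capacity_g_per_litre / per_board:.0f} boards per litre before the bath is spent")
```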
The etchant removes copper on all surfaces exposed by the resist. "Undercut" occurs when etchant attacks the thin edge of copper under the resist; this can reduce conductor widths and cause open-circuits. Careful control of etch time is required to prevent undercut. Where metallic plating is used as a resist, it can "overhang" which can cause short-circuits between adjacent traces when closely spaced. Overhang can be removed by wire-brushing the board after etching.
Inner layer automated optical inspection (AOI)
The inner layers are given a complete machine inspection before lamination because afterwards mistakes cannot be corrected. The automatic optical inspection system scans the board and compares it with the digital image generated from the original design data.
Lamination
thumb|Cut through a SDRAM-module, a multi-layer PCB. Note the via, visible as a bright copper-colored band running between the top and bottom layers of the board.
Multi-layer printed circuit boards have trace layers inside the board. This is achieved by laminating a stack of materials in a press by applying pressure and heat for a period of time. This results in an inseparable one-piece product. For example, a four-layer PCB can be fabricated by starting from a two-sided copper-clad laminate, etching the circuitry on both sides, and then laminating pre-preg and copper foil to the top and bottom. The stack is then drilled, plated, and etched again to produce the traces on the top and bottom layers.
Drilling
thumb|Eyelets (hollow)
Holes through a PCB are typically drilled with small-diameter drill bits made of solid coated tungsten carbide. Coated tungsten carbide is recommended since many board materials are very abrasive and drilling must be done at high RPM and high feed rate to be cost effective. Drill bits must also remain sharp so as not to mar or tear the traces. Drilling with high-speed steel is not feasible since the drill bits dull quickly, tearing the copper and ruining the boards. The drilling is performed by automated drilling machines with placement controlled by a drill tape or drill file. These computer-generated files are also called numerically controlled drill (NCD) files or "Excellon files". The drill file describes the location and size of each drilled hole.
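Because drill files are plain text, their structure is easy to illustrate. The following sketch parses a deliberately simplified Excellon-style file that uses explicit decimal coordinates; real drill files vary in units, zero suppression, and coordinate format, so this is only a toy reader showing that the file pairs tool diameters with hole positions.

```python
def parse_simple_drill(text):
    """Parse a simplified Excellon-style drill file (explicit decimal coordinates assumed).

    Returns {tool_name: {"diameter": float, "holes": [(x, y), ...]}}.
    Real drill files differ in units, zero suppression and coordinate format,
    so this is only a toy illustration of the file's structure.
    """
    tools, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("T") and "C" in line:        # tool definition, e.g. T01C0.80
            name, diameter = line.split("C")
            tools[name] = {"diameter": float(diameter), "holes": []}
        elif line in tools:                              # tool selection, e.g. T01
            current = line
        elif line.startswith("X") and current:           # hole position, e.g. X12.70Y3.50
            x, y = line[1:].split("Y")
            tools[current]["holes"].append((float(x), float(y)))
    return tools

example = """M48
METRIC
T01C0.80
T02C3.20
%
T01
X12.70Y3.50
X15.20Y3.50
T02
X2.00Y2.00
M30"""
print(parse_simple_drill(example))
```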
Holes may be made conductive, by electroplating or inserting metal eyelets (hollow), to electrically and thermally connect board layers. Some conductive holes are intended for the insertion of through-hole-component leads. Others, typically smaller and used to connect board layers, are called vias.
When very small vias are required, drilling with mechanical bits is costly because of high rates of wear and breakage. In this case, the vias may be laser drilled (evaporated by lasers). Laser-drilled vias typically have an inferior surface finish inside the hole. These holes are called micro vias.Microvia Fabrication: When to drill, When to Blast, http://www.magazines007.com/pdf/PCB-May2013.pdf
It is also possible with controlled-depth drilling, laser drilling, or by pre-drilling the individual sheets of the PCB before lamination, to produce holes that connect only some of the copper layers, rather than passing through the entire board. These holes are called blind vias when they connect an internal copper layer to an outer layer, or buried vias when they connect two or more internal copper layers and no outer layers.
The hole walls for boards with two or more layers can be made conductive and then electroplated with copper to form plated-through holes. These holes electrically connect the conducting layers of the PCB. For multi-layer boards, those with three layers or more, drilling typically produces a smear of the high temperature decomposition products of bonding agent in the laminate system. Before the holes can be plated through, this smear must be removed by a chemical de-smear process, or by plasma-etch. The de-smear process ensures that a good connection is made to the copper layers when the hole is plated through. On high reliability boards a process called etch-back is performed chemically with a potassium permanganate based etchant or plasma. The etch-back removes resin and the glass fibers so that the copper layers extend into the hole and as the hole is plated become integral with the deposited copper.
Plating and coating
PCBs are plated with solder, tin, or gold over nickel as a resist for etching away the unneeded underlying copper.Appendix F Sample Fabrication Sequence for a Standard Printed Circuit Board, Linkages: Manufacturing Trends in Electronics Interconnection Technology, National Academy of SciencesProduction Methods and Materials 3.1 General Printed Wiring Board Project Report – Table of Contents, Design for the Environment (DfE), US EPA
After PCBs are etched and then rinsed with water, the solder mask is applied, and then any exposed copper is coated with solder, nickel/gold, or some other anti-corrosion coating.George Milad and Don Gudeczauskas, "Solder Joint Reliability of Gold Surface Finishes (ENIG, ENEPIG and DIG) for PWB Assembled with Lead Free SAC Alloy.""Nickel/Gold tab plating line"
Matte solder is usually fused to provide a better bonding surface or stripped to bare copper. Treatments, such as benzimidazolethiol, prevent surface oxidation of bare copper. The places to which components will be mounted are typically plated, because untreated bare copper oxidizes quickly, and therefore is not readily solderable. Traditionally, any exposed copper was coated with solder by hot air solder levelling (HASL). The HASL finish prevents oxidation from the underlying copper, thereby guaranteeing a solderable surface.Soldering 101 – A Basic Overview This solder was a tin-lead alloy; however, new solder compounds are now used to achieve compliance with the RoHS directive in the EU, which restricts the use of lead. One of these lead-free compounds is SN100CL, made up of 99.3% tin, 0.7% copper, 0.05% nickel, and nominally 60 ppm germanium.
It is important to use solder compatible with both the PCB and the parts used. An example of incompatibility is a ball grid array (BGA) with tin-lead solder balls losing its balls when assembled onto bare copper traces or with lead-free solder paste.
Other platings used are OSP (organic surface protectant), immersion silver (IAg), immersion tin, electroless nickel with immersion gold coating (ENIG), electroless nickel electroless palladium immersion gold (ENEPIG) and direct gold plating (over nickel). Edge connectors, placed along one edge of some boards, are often nickel-plated then gold-plated. Another coating consideration is rapid diffusion of the coating metal into tin solder. Tin forms intermetallics such as Cu6Sn5 and Ag3Sn that dissolve into the tin liquidus or solidus (at 50 °C), stripping the surface coating or leaving voids.
Electrochemical migration (ECM) is the growth of conductive metal filaments on or in a printed circuit board (PCB) under the influence of a DC voltage bias.IPC Publication IPC-TR-476A, "Electrochemical Migration: Electrically Induced Failures in Printed Wiring Assemblies," Northbrook, IL, May 1997.S.Zhan, M. H. Azarian and M. Pecht, "Reliability Issues of No-Clean Flux Technology with Lead-free Solder Alloy for High Density Printed Circuit Boards", 38th International Symposium on Microelectronics, pp. 367–375, Philadelphia, PA, September 25–29, 2005. Silver, zinc, and aluminum are known to grow whiskers under the influence of an electric field. Silver also grows conducting surface paths in the presence of halide and other ions, making it a poor choice for electronics use. Tin will grow "whiskers" due to tension in the plated surface. Tin-lead or solder plating also grows whiskers, reduced only in proportion to the percentage of tin replaced. Reflowing to melt the solder or tin plate relieves surface stress and lowers whisker incidence. Another coating issue is tin pest, the transformation of tin to a powdery allotrope at low temperature.Clyde F. Coombs Printed Circuits Handbook McGraw–Hill Professional, 2007 ISBN 0-07-146734-3, pages 45–19
Solder resist application
Areas that should not be soldered may be covered with solder resist (solder mask). One of the most common solder resists used today is called "LPI" (liquid photoimageable solder mask). A photo-sensitive coating is applied to the surface of the PWB, then exposed to light through the solder mask image film, and finally developed, where the unexposed areas are washed away. Dry film solder mask is similar to the dry film used to image the PWB for plating or etching. After being laminated to the PWB surface it is imaged and developed as with LPI. Screen-printing epoxy ink was once common but is no longer widely used because of its low accuracy and resolution. Solder resist also provides protection from the environment.
Legend printing
A legend is often printed on one or both sides of the PCB. It contains the component designators, switch settings, test points and other indications helpful in assembling, testing and servicing the circuit board.
There are three methods to print the legend.
Silk screen printing epoxy ink was the established method. It was so common that the legend is often misnamed "silk" or "silkscreen".
Liquid photo imaging is a more accurate method than screen printing.
Ink jet printing is new but increasingly used. Ink jet can print variable data such as a text or bar code with a serial number.
Bare-board test
Unpopulated boards are usually bare-board tested for "shorts" and "opens". A short is a connection between two points that should not be connected. An open is a missing connection between points that should be connected. For high-volume production a fixture or a rigid needle adapter makes contact with copper lands on the board. The fixture or adapter is a significant fixed cost and this method is only economical for high-volume or high-value production. For small or medium volume production flying probe testers are used where test probes are moved over the board by an XY drive to make contact with the copper lands. There is no need for a fixture and hence the fixed costs are much lower. The CAM system instructs the electrical tester to apply a voltage to each contact point as required and to check that this voltage appears on the appropriate contact points and only on these.
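Conceptually, the electrical test compares the connectivity measured on the bare board with the netlist from the fabrication data: a measured connection between points of different nets is a short, and a missing connection within a net is an open. The sketch below shows that comparison on made-up data, using a simplified pairwise check rather than the full transitive connectivity analysis a real tester performs.

```python
def bare_board_check(netlist, measured):
    """Flag shorts and opens on a bare board (simplified sketch).

    netlist:  {net_name: iterable of test points that belong to the net}
    measured: set of frozensets, each a pair of points found electrically connected
    A real tester evaluates full transitive connectivity; here adjacent points of
    each net are simply checked pairwise, which is enough to show the idea.
    """
    point_to_net = {p: net for net, pts in netlist.items() for p in pts}
    shorts = [tuple(pair) for pair in measured
              if len(set(point_to_net.get(p) for p in pair)) > 1]   # bridges two nets
    opens = []
    for net, pts in netlist.items():
        pts = sorted(pts)
        for a, b in zip(pts, pts[1:]):
            if frozenset((a, b)) not in measured:
                opens.append((net, a, b))                            # expected link missing
    return shorts, opens

nets = {"GND": {"P1", "P2", "P3"}, "VCC": {"P4", "P5"}}
measured = {frozenset(("P1", "P2")), frozenset(("P2", "P3")), frozenset(("P3", "P4"))}
print(bare_board_check(nets, measured))
```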
Assembly
thumb|PCB with test connection pads
In assembly the bare board is populated (or "stuffed") with electronic components to form a functional printed circuit assembly (PCA), sometimes called a "printed circuit board assembly" (PCBA). In through-hole technology, the component leads are inserted in holes surrounded by conductive pads; the holes keep the components in place. In surface-mount technology (SMT), the component is placed on the PCB so that the pins line up with the conductive pads or lands on the surfaces of the PCB; solder paste, which was previously applied to the pads, holds the components in place; if surface-mount components are applied to both sides of the board, the bottom-side components are glued to the board. In both through hole and surface mount, the components are then soldered.
There are a variety of soldering techniques used to attach components to a PCB. High volume production is usually done with a "Pick and place machine" or SMT placement machine and bulk wave soldering or reflow ovens, but skilled technicians are able to solder very tiny parts (for instance 0201 packages which are 0.02 in. by 0.01 in.) by hand under a microscope, using tweezers and a fine tip soldering iron for small volume prototypes. Some parts cannot be soldered by hand, such as BGA packages.
Often, through-hole and surface-mount construction must be combined in a single assembly because some required components are available only in surface-mount packages, while others are available only in through-hole packages. Another reason to use both methods is that through-hole mounting can provide needed strength for components likely to endure physical stress, while components that are expected to go untouched will take up less space using surface-mount techniques.
For further comparison, see the SMT page.
After the board has been populated it may be tested in a variety of ways:
While the power is off, visual inspection, automated optical inspection. JEDEC guidelines for PCB component placement, soldering, and inspection are commonly used to maintain quality control in this stage of PCB manufacturing.
While the power is off, analog signature analysis, power-off testing.
While the power is on, in-circuit test, where physical measurements (for example, voltage) can be done.
While the power is on, functional test, checking whether the PCB does what it was designed to do.
To facilitate these tests, PCBs may be designed with extra pads to make temporary connections. Sometimes these pads must be isolated with resistors. The in-circuit test may also exercise boundary scan test features of some components. In-circuit test systems may also be used to program nonvolatile memory components on the board.
In boundary scan testing, test circuits integrated into various ICs on the board form temporary connections between the PCB traces to test that the ICs are mounted correctly. Boundary scan testing requires that all the ICs to be tested use a standard test configuration procedure, the most common one being the Joint Test Action Group (JTAG) standard. The JTAG test architecture provides a means to test interconnects between integrated circuits on a board without using physical test probes. JTAG tool vendors provide various types of stimulus and sophisticated algorithms, not only to detect the failing nets, but also to isolate the faults to specific nets, devices, and pins.JTAG Tutorial (http://www.corelis.com/education/JTAG_Tutorial.htm#History)
When boards fail the test, technicians may desolder and replace failed components, a task known as rework.
Protection and packaging
PCBs intended for extreme environments often have a conformal coating, which is applied by dipping or spraying after the components have been soldered. The coat prevents corrosion and leakage currents or shorting due to condensation. The earliest conformal coats were wax; modern conformal coats are usually dips of dilute solutions of silicone rubber, polyurethane, acrylic, or epoxy. Another technique for applying a conformal coating is for plastic to be sputtered onto the PCB in a vacuum chamber. The chief disadvantage of conformal coatings is that servicing of the board is rendered extremely difficult.
Many assembled PCBs are static sensitive, and therefore must be placed in antistatic bags during transport. When handling these boards, the user must be grounded (earthed). Improper handling techniques might transmit an accumulated static charge through the board, damaging or destroying components. Even bare boards are sometimes static sensitive: traces have become so fine that it is quite possible to blow an etched trace off the board (or change its characteristics) with a static charge. This is especially true on non-traditional PCBs such as MCMs and microwave PCBs.
PCB characteristics
Much of the electronics industry's PCB design, assembly, and quality control follows standards published by the IPC organization.
Through-hole technology
thumb|Through-hole (leaded) resistors
The first PCBs used through-hole technology, mounting electronic components by leads inserted through holes on one side of the board and soldered onto copper traces on the other side. Boards may be single-sided, with an unplated component side, or more compact double-sided boards, with components soldered on both sides. Horizontal installation of through-hole parts with two axial leads (such as resistors, capacitors, and diodes) is done by bending the leads 90 degrees in the same direction, inserting the part in the board (often bending leads located on the back of the board in opposite directions to improve the part's mechanical strength), soldering the leads, and trimming off the ends. Leads may be soldered either manually or by a wave soldering machine.Electronic Packaging:Solder Mounting Technologies in K.H. Buschow et al (ed), Encyclopedia of Materials:Science and Technology, Elsevier, 2001 ISBN 0-08-043152-6, pages 2708–2709
Through-hole PCB technology almost completely replaced earlier electronics assembly techniques such as point-to-point construction. From the second generation of computers in the 1950s until surface-mount technology became popular in the late 1980s, every component on a typical PCB was a through-hole component.
Through-hole manufacture adds to board cost by requiring many holes to be drilled accurately, and limits the available routing area for signal traces on layers immediately below the top layer on multi-layer boards since the holes must pass through all layers to the opposite side. Once surface-mounting came into use, small-sized SMD components were used where possible, with through-hole mounting only of components unsuitably large for surface-mounting due to power requirements or mechanical limitations, or subject to mechanical stress which might damage the PCB.
Surface-mount technology
thumb|Surface mount components, including resistors, transistors and an integrated circuit
Surface-mount technology emerged in the 1960s, gained momentum in the early 1980s and became widely used by the mid-1990s.
Components were mechanically redesigned to have small metal tabs or end caps that could be soldered directly onto the PCB surface, instead of wire leads to pass through holes. Components became much smaller and component placement on both sides of the board became more common than with through-hole mounting, allowing much smaller PCB assemblies with much higher circuit densities.
Surface mounting lends itself well to a high degree of automation, reducing labor costs and greatly increasing production rates. Components can be supplied mounted on carrier tapes. Surface mount components can be about one-quarter to one-tenth of the size and weight of through-hole components, and passive components are much cheaper; the prices of semiconductor surface mount devices (SMDs) are determined more by the chip itself than by the package, with little price advantage over larger packages. Some wire-ended components, such as 1N4148 small-signal switch diodes, are actually significantly cheaper than SMD equivalents.
thumb|A PCB in a computer mouse: the component side (left) and the printed side (right)
Circuit properties of the PCB
Each trace consists of a flat, narrow part of the copper foil that remains after etching. The resistance, determined by width and thickness, of the traces must be sufficiently low for the current the conductor will carry. Power and ground traces may need to be wider than signal traces. In a multi-layer board one entire layer may be mostly solid copper to act as a ground plane for shielding and power return. For microwave circuits, transmission lines can be laid out in the form of stripline and microstrip with carefully controlled dimensions to assure a consistent impedance. In radio-frequency and fast switching circuits the inductance and capacitance of the printed circuit board conductors become significant circuit elements, usually undesired; but they can be used as a deliberate part of the circuit design, obviating the need for additional discrete components.
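For example, the DC resistance of a trace follows directly from its geometry and the resistivity of copper, so the required width for a given current and allowable voltage drop can be estimated. The sketch below assumes a 1 oz (35 µm) copper layer; the trace dimensions are illustrative.

```python
RHO_COPPER = 1.68e-8  # ohm*m, resistivity of copper at room temperature

def trace_resistance_ohms(length_mm, width_mm, thickness_um=35.0):
    """DC resistance of a rectangular copper trace: R = rho * L / (w * t)."""
    length_m = length_mm * 1e-3
    cross_section_m2 = (width_mm * 1e-3) * (thickness_um * 1e-6)
    return RHO_COPPER * length_m / cross_section_m2

# Illustrative case: a 100 mm long, 0.25 mm wide trace in 1 oz (35 um) copper.
r = trace_resistance_ohms(100, 0.25)
print(f"{r * 1000:.0f} milliohm, i.e. about {r * 0.5 * 1000:.0f} mV drop at 0.5 A")
```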
Materials
Excluding exotic products using special materials or processes, all printed circuit boards manufactured today can be built using the following four materials:
Laminates
Copper-clad laminates
Resin impregnated B-stage cloth (Pre-preg)
Copper foil
Laminates
Laminates are manufactured by curing layers of cloth or paper with thermoset resin under pressure and temperature to form an integral final piece of uniform thickness. The size can be up to in width and length. Varying cloth weaves (threads per inch or cm), cloth thickness, and resin percentage are used to achieve the desired final thickness and dielectric characteristics. Available standard laminate thicknesses are listed in the table below.
Table 1: Standard laminate thickness per ANSI/IPC-D-275 (IPC laminate number – thickness in inches – thickness in millimeters)
L1 – 0.002 – 0.05
L2 – 0.004 – 0.10
L3 – 0.006 – 0.15
L4 – 0.008 – 0.20
L5 – 0.010 – 0.25
L6 – 0.012 – 0.30
L7 – 0.016 – 0.40
L8 – 0.020 – 0.50
L9 – 0.028 – 0.70
L10 – 0.035 – 0.90
L11 – 0.043 – 1.10
L12 – 0.055 – 1.40
L13 – 0.059 – 1.50
L14 – 0.075 – 1.90
L15 – 0.090 – 2.30
L16 – 0.122 – 3.10
The cloth or fiber material used, resin material, and the cloth to resin ratio determine the laminate's type designation (FR-4, CEM-1, G-10, etc.) and therefore the characteristics of the laminate produced. Important characteristics are the level to which the laminate is fire retardant, the dielectric constant (er), the loss tangent (tan δ), the tensile strength, the shear strength, the glass transition temperature (Tg), and the Z-axis expansion coefficient (how much the thickness changes with temperature).
There are quite a few different dielectrics that can be chosen to provide different insulating values depending on the requirements of the circuit. Some of these dielectrics are polytetrafluoroethylene (Teflon), FR-4, FR-1, CEM-1 or CEM-3. Well known pre-preg materials used in the PCB industry are FR-2 (phenolic cotton paper), FR-3 (cotton paper and epoxy), FR-4 (woven glass and epoxy), FR-5 (woven glass and epoxy), FR-6 (matte glass and polyester), G-10 (woven glass and epoxy), CEM-1 (cotton paper and epoxy), CEM-2 (cotton paper and epoxy), CEM-3 (non-woven glass and epoxy), CEM-4 (woven glass and epoxy), CEM-5 (woven glass and polyester). Thermal expansion is an important consideration especially with ball grid array (BGA) and naked die technologies, and glass fiber offers the best dimensional stability.
FR-4 is by far the most common material used today. The board with copper on it is called "copper-clad laminate".
With decreasing size of board features and increasing frequencies, small nonhomogeneities like uneven distribution of fiberglass or other filler, thickness variations, and bubbles in the resin matrix, and the associated local variations in the dielectric constant, are gaining importance.
Key substrate parameters
The circuit board substrates are usually dielectric composite materials. The composites contain a matrix (usually an epoxy resin), a reinforcement (usually woven, sometimes nonwoven, glass fibers, or sometimes even paper), and in some cases a filler added to the resin (e.g. ceramics; titanate ceramics can be used to increase the dielectric constant).
The reinforcement type defines two major classes of materials - woven and non-woven. Woven reinforcements are cheaper, but the high dielectric constant of glass may not be favorable for many higher-frequency applications. The spatially nonhomogeneous structure also introduces local variations in electrical parameters, due to different resin/glass ratio at different areas of the weave pattern. Nonwoven reinforcements, or materials with low or no reinforcement, are more expensive but more suitable for some RF/analog applications.
The substrates are characterized by several key parameters, chiefly thermomechanical (glass transition temperature, tensile strength, shear strength, thermal expansion), electrical (dielectric constant, loss tangent, dielectric breakdown voltage, leakage current, tracking resistance...), and others (e.g. moisture absorption).
At the glass transition temperature the resin in the composite softens and its thermal expansion increases significantly; exceeding Tg then exerts mechanical overload on the board components - e.g. the joints and the vias. Below Tg the thermal expansion of the resin roughly matches that of copper and glass; above it, expansion is significantly higher. As the reinforcement and copper confine the board along the plane, virtually all volume expansion projects into the thickness and stresses the plated-through holes. Repeated soldering or other exposure to higher temperatures can cause failure of the plating, especially with thicker boards; thick boards therefore require a matrix with a high Tg.
The materials used determine the substrate's dielectric constant. This constant is also dependent on frequency, usually decreasing with frequency. As this constant determines the signal propagation speed, frequency dependence introduces phase distortion in wideband applications; a dielectric constant versus frequency characteristic that is as flat as achievable is therefore important here. The impedance of transmission lines decreases with frequency, so faster edges of signals reflect more than slower ones.
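The dependence of propagation speed on the dielectric constant mentioned above is approximately v = c / sqrt(er_effective). The sketch below converts assumed effective dielectric constants into per-length delays; the values for FR-4 stripline and microstrip are typical textbook assumptions, not measured data.

```python
C_MM_PER_NS = 299.792458  # speed of light in vacuum, mm/ns

def delay_ps_per_mm(er_effective):
    """Per-length signal delay in picoseconds per millimetre: t = sqrt(er_eff) / c."""
    return 1000.0 * er_effective ** 0.5 / C_MM_PER_NS

# Assumed effective dielectric constants: ~4.3 for a stripline buried in FR-4,
# lower (here ~3.0) for a surface microstrip that also sees air above the trace.
print(f"stripline:  {delay_ps_per_mm(4.3):.1f} ps/mm")
print(f"microstrip: {delay_ps_per_mm(3.0):.1f} ps/mm")
```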
Dielectric breakdown voltage determines the maximum voltage gradient the material can be subjected to before suffering a breakdown.
Tracking resistance determines how the material resists high voltage electrical discharges creeping over the board surface.
Loss tangent determines how much of the electromagnetic energy from the signals in the conductors is absorbed in the board material. This factor is important for high frequencies. Low-loss materials are more expensive. Choosing unnecessarily low-loss material is a common error in high-frequency digital design; it increases the cost of the boards without a corresponding benefit. Signal degradation by loss tangent and dielectric constant can be easily assessed by an eye pattern.
Moisture absorption occurs when the material is exposed to high humidity or water. Both the resin and the reinforcement may absorb water; water may also be soaked up by capillary forces through voids in the materials and along the reinforcement. Epoxies of the FR-4 materials are not too susceptible, with absorption of only 0.15%. Teflon has very low absorption of 0.01%. Polyimides and cyanate esters, on the other hand, suffer from high water absorption. Absorbed water can lead to significant degradation of key parameters; it impairs tracking resistance, breakdown voltage, and dielectric parameters. The relative dielectric constant of water is about 73, compared to about 4 for common circuit board materials. Absorbed moisture can also vaporize on heating and cause cracking and delamination, the same effect responsible for "popcorning" damage on wet packaging of electronic parts. Careful baking of the substrates may be required.http://speedingedge.com/PDF-Files/tutorial.pdf
Common substrates
Often encountered materials:
FR-2 (Flame Retardant 2), phenolic paper or phenolic cotton paper, paper impregnated with a phenol formaldehyde resin. Cheap, common in low-end consumer electronics with single-sided boards. Electrical properties inferior to FR-4. Poor arc resistance. Generally rated to 105 °C. Resin composition varies by supplier.
FR-4 (Flame Retardant 4), a woven fiberglass cloth impregnated with an epoxy resin. Low water absorption (up to about 0.15%), good insulation properties, good arc resistance. Well-proven, properties well understood by manufacturers. Very common, workhorse of the industry. Several grades with somewhat different properties are available. Typically rated to 130 °C. Thin FR-4, about 0.1 mm, can be used for bendable circuitboards. Many different grades exist, with varying parameters; versions are with higher Tg, higher tracking resistance, etc.
Aluminium, or metal core board or insulated metal substrate (IMS), clad with a thermally conductive thin dielectric - used for parts requiring significant cooling - power switches, LEDs. Consists of a thin circuit board, usually single-layer and sometimes double-layer, based on e.g. FR-4, laminated onto aluminium sheet metal, commonly 0.8, 1, 1.5, 2 or 3 mm thick. The thicker laminates sometimes also come with thicker copper metalization.
Flexible substrates - can be a standalone copper-clad foil or can be laminated to a thin stiffener, e.g. 50-130 µm
Kapton, a polyimide foil. Used for flexible printed circuits, in this form common in small form-factor consumer electronics or for flexible interconnects. Resistant to high temperatures.
Pyralux, a polyimide-fluoropolymer composite foil. Copper layer can delaminate during soldering.
Less-often encountered materials:
FR-1 (Flame Retardant 1), like FR-2, typically specified to 105 °C, some grades rated to 130 °C. Room-temperature punchable. Similar to cardboard. Poor moisture resistance. Low arc resistance.
FR-3 (Flame Retardant 3), cotton paper impregnated with epoxy. Typically rated to 105 °C.
FR-5 (Flame Retardant 5), woven fiberglass and epoxy, high strength at higher temperatures, typically specified to 170 °C.
FR-6 (Flame Retardant 6), matte glass and polyester
G-10, woven glass and epoxy - high insulation resistance, low moisture absorption, very high bond strength. Typically rated to 130 °C.
G-11, woven glass and epoxy - high resistance to solvents, high flexural strength retention at high temperatures. Typically rated to 170 °C.
CEM-1, cotton paper and epoxy
CEM-2, cotton paper and epoxy
CEM-3, non-woven glass and epoxy
CEM-4, woven glass and epoxy
CEM-5, woven glass and polyester
PTFE, pure - expensive, low dielectric loss, for high frequency applications, very low moisture absorption (0.01%), mechanically soft. Difficult to laminate, rarely used in multilayer applications.
PTFE, ceramic filled - expensive, low dielectric loss, for high frequency applications. Varying ceramics/PTFE ratio allows adjusting dielectric constant and thermal expansion.
RF-35, fiberglass-reinforced ceramics-filled PTFE. Relatively less expensive, good mechanical properties, good high-frequency properties.http://www.multi-circuit-boards.eu/fileadmin/user_upload/downloads/e_taconic_rf35-hf_www.multi-circuit-boards.eu.pdf
Alumina, a ceramic. Hard, brittle, very expensive, very high performance, good thermal conductivity.
Polyimide, a high-temperature polymer. Expensive, high-performance. Higher water absorption (0.4%). Can be used from cryogenic temperatures to over 260 °C.
Copper thickness
Copper thickness of PCBs can be specified as units of length (in micrometers or mils) but is often specified as weight of copper per area (in ounce per square foot) which is easier to measure. One ounce per square foot is 1.344 mils or 34 micrometers thickness.
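The weight-to-thickness conversion can be checked from the density of copper; the short sketch below reproduces the figures quoted above for common copper weights.

```python
OUNCE_G = 28.3495        # grams per ounce
SQ_FOOT_CM2 = 929.0304   # square centimetres per square foot
COPPER_DENSITY = 8.96    # g/cm^3

def copper_thickness_um(oz_per_ft2):
    """Thickness in micrometres of a copper layer specified by weight in oz/ft^2."""
    thickness_cm = (oz_per_ft2 * OUNCE_G) / (COPPER_DENSITY * SQ_FOOT_CM2)
    return thickness_cm * 1e4

for weight in (0.5, 1, 2, 3):
    t = copper_thickness_um(weight)
    print(f"{weight} oz/ft^2 -> {t:.1f} um ({t / 25.4:.3f} mil)")
```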
The printed circuit board industry defines heavy copper as layers exceeding three ounces of copper, or approximately 0.0042 inches (4.2 mils, 105 μm) thick. PCB designers and fabricators often use heavy copper when designing and manufacturing circuit boards in order to increase current-carrying capacity as well as resistance to thermal strain. Heavy copper-plated vias transfer heat to external heat sinks. IPC 2152 is a standard for determining current-carrying capacity of printed circuit board traces.
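IPC-2152 itself presents measured charts rather than a single formula, but the older and widely quoted IPC-2221 approximation I = k * dT^0.44 * A^0.725 (with A the cross-section in square mils) gives a rough feel for trace current capacity. The sketch below uses that approximation purely as an illustration; it is not a substitute for the standard's charts.

```python
def ipc2221_trace_current_a(width_mil, copper_oz, delta_t_c=10.0, external=True):
    """Rough trace current capacity per the older IPC-2221 approximation.

    I = k * dT^0.44 * A^0.725, with A the cross-section in square mils and
    k = 0.048 for external layers or 0.024 for internal layers.
    IPC-2152 supersedes this with measured chart data; treat the result as
    a ballpark figure only.
    """
    thickness_mil = copper_oz * 1.344          # 1 oz/ft^2 is about 1.344 mil thick
    area_sq_mil = width_mil * thickness_mil
    k = 0.048 if external else 0.024
    return k * delta_t_c ** 0.44 * area_sq_mil ** 0.725

# A 20 mil wide, 1 oz external trace allowed to heat 10 C above ambient:
print(f"{ipc2221_trace_current_a(20, 1):.2f} A")  # roughly 1.4 A
```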
On the common FR-4 substrates, 1 oz copper (35 µm) is the usual, most common thickness; 2 oz (70 µm) and 0.5 oz (18 µm) thickness is often an option. Less common are 12 and 105 µm, 9 µm is sometimes available on some substrates. Flexible substrates typically have thinner metalization; 18 and 35 µm seem to be common, with 9 and 70 µm sometimes available. Aluminium or metal-core boards for high power devices commonly use thicker copper; 35 µm is usual but also 140 and 400 µm can be encountered.
Safety certification (US)
Safety Standard UL 796 covers component safety requirements for printed wiring boards for use as components in devices or appliances. Testing analyzes characteristics such as flammability, maximum operating temperature, electrical tracking, heat deflection, and direct support of live electrical parts.
Multiwire boards
Multiwire is a patented technique of interconnection which uses machine-routed insulated wires embedded in a non-conducting matrix (often plastic resin). It was used during the 1980s and 1990s. (Kollmorgen Technologies Corp, filed 1978) Multiwire was still available in 2010 through Hitachi. Other competitive discrete wiring technologies have been developed (Jumatech, layered sheets).
Since it was quite easy to stack interconnections (wires) inside the embedding matrix, the approach allowed designers to forget completely about the routing of wires (usually a time-consuming operation of PCB design): anywhere the designer needs a connection, the machine will draw a wire in a straight line from one location/pin to another. This led to very short design times (no complex algorithms to use even for high density designs) as well as reduced crosstalk (which is worse when wires run parallel to each other, which almost never happens in Multiwire), though the cost is too high to compete with cheaper PCB technologies when large quantities are needed.
Corrections can be made to a Multiwire board more easily than to a PCB.David E. Weisberg, "Chapter 14: Intergraph", 2008, p. 14-8.
Cordwood construction
thumb|upright=1.5|A cordwood module
thumb|Cordwood construction was used in proximity fuzes.
Cordwood construction can save significant space and was often used with wire-ended components in applications where space was at a premium (such as fuzes, missile guidance, and telemetry systems) and in high-speed computers, where short traces were important. In cordwood construction, axial-leaded components were mounted between two parallel planes. The components were either soldered together with jumper wire, or they were connected to other components by thin nickel ribbon welded at right angles onto the component leads. To avoid shorting together different interconnection layers, thin insulating cards were placed between them. Perforations or holes in the cards allowed component leads to project through to the next interconnection layer. One disadvantage of this system was that special nickel-leaded components had to be used to allow the interconnecting welds to be made. Differential thermal expansion of the component could put pressure on the leads of the components and the PCB traces and cause physical damage (as was seen in several modules on the Apollo program). Additionally, components located in the interior are difficult to replace. Some versions of cordwood construction used soldered single-sided PCBs as the interconnection method (as pictured), allowing the use of normal-leaded components.
Before the advent of integrated circuits, this method allowed the highest possible component packing density; because of this, it was used by a number of computer vendors including Control Data Corporation. The cordwood method of construction was used only rarely once semiconductor electronics and PCBs became widespread.
History
Development of the methods used in modern printed circuit boards started early in the 20th century. In 1903, a German inventor, Albert Hanson, described flat foil conductors laminated to an insulating board, in multiple layers. Thomas Edison experimented with chemical methods of plating conductors onto linen paper in 1904. Arthur Berry in 1913 patented a print-and-etch method in the UK, and in the United States Max Schoop obtained a patent to flame-spray metal onto a board through a patterned mask. Charles Ducas in 1927 patented a method of electroplating circuit patterns.Charles A. Harper, Electronic materials and processes handbook, McGraw-Hill,2003 ISBN 0-07-140214-4, pages 7.3 and 7.4
The Austrian engineer Paul Eisler invented the printed circuit as part of a radio set while working in the UK around 1936. Around 1943 the USA began to use the technology on a large scale to make proximity fuses for use in World War II. After the war, in 1948, the USA released the invention for commercial use. Printed circuits did not become commonplace in consumer electronics until the mid-1950s, after the Auto-Sembly process was developed by the United States Army. At around the same time in the UK work along similar lines was carried out by Geoffrey Dummer, then at the RRDE.
thumb|An example of hand-drawn etched traces on a PCB
Before printed circuits (and for a while after their invention), point-to-point construction was used. For prototypes, or small production runs, wire wrap or turret board can be more efficient. Predating the printed circuit invention, and similar in spirit, was John Sargrove's 1936–1947 Electronic Circuit Making Equipment (ECME) which sprayed metal onto a Bakelite plastic board. The ECME could produce three radio boards per minute.
During World War II, the development of the anti-aircraft proximity fuse required an electronic circuit that could withstand being fired from a gun, and could be produced in quantity. The Centralab Division of Globe Union submitted a proposal which met the requirements: a ceramic plate would be screenprinted with metallic paint for conductors and carbon material for resistors, with ceramic disc capacitors and subminiature vacuum tubes soldered in place. The technique proved viable, and the resulting patent on the process, which was classified by the U.S. Army, was assigned to Globe Union. It was not until 1984 that the Institute of Electrical and Electronics Engineers (IEEE) awarded Mr. Harry W. Rubinstein, the former head of Globe Union's Centralab Division, its Cledo Brunetti Award for early key contributions to the development of printed components and conductors on a common insulating substrate.IEEE Cledo Brunetti Award, http://www.ieee.org/documents/brunetti_rl.pdf Rubinstein was also honored in 1984 by his alma mater, the University of Wisconsin-Madison, for his innovations in the technology of printed electronic circuits and the fabrication of capacitors.Engineers' Day, 1984 Award Recipients, College of Engineering, University of Wisconsin-Madison, http://www.engr.wisc.edu/eday/eday1984.html
thumb|A PCB as a design on a computer (left) and realized as a board assembly populated with components (right). The board is double sided, with through-hole plating, green solder resist and a white legend. Both surface mount and through-hole components have been used.
Originally, every electronic component had wire leads, and the PCB had holes drilled for each wire of each component. The components' leads were then passed through the holes and soldered to the PCB trace. This method of assembly is called through-hole construction. In 1949, Moe Abramson and Stanislaus F. Danko of the United States Army Signal Corps developed the Auto-Sembly process in which component leads were inserted into a copper foil interconnection pattern and dip soldered. The patent they obtained in 1956 was assigned to the U.S. Army. With the development of board lamination and etching techniques, this concept evolved into the standard printed circuit board fabrication process in use today. Soldering could be done automatically by passing the board over a ripple, or wave, of molten solder in a wave-soldering machine. However, the wires and holes are wasteful since drilling holes is expensive and the protruding wires are merely cut off.
From the 1980s small surface mount parts have been used increasingly instead of through-hole components; this has led to smaller boards for a given functionality and lower production costs, but with some additional difficulty in servicing faulty boards.
Historically, many PCB measurements were in multiples of a thousandth of an inch, also called "mils".
For example, the Dual In-line Package (DIP) and most other through-hole components have pins located on a grid spacing of 100 mils (0.1 inch).
Surface-mount SOIC components have a pin pitch of 50 mils.
SOP components have a pin pitch of 25 mils.
Level B technology recommends a minimum trace width of 8 mils, which allows "double-track" – two traces between DIP pins.Kraig Mitzner, "Complete PCB Design Using OrCad Capture and Layout", 2011."TINA PCB Design Manual"
See also
Breadboard
C.I.D.+
Design for manufacturability (PCB)
Electronic packaging
Electronic waste
Multi-chip module
Occam process – another process for the manufacturing of PCBs
Point-to-point construction
Printed electronics – creation of components by printing
Printed circuit board milling
Stripboard
Veroboard
Wire wrap
PCB materials
Conductive ink
Laminate materials:
BT-Epoxy
Composite epoxy material, CEM-1,5
Cyanate Ester
FR-2
FR-4, the most common PCB material
Polyimide
PTFE, Polytetrafluoroethylene (Teflon)
PCB layout software
List of EDA companies
Comparison of EDA software
References
External links
A collection of board & module construction techniques (Italian, 2 pp.)
PCB Fabrication Data - A Guide
The Gerber Format Specification
Category:Electrical engineering
Category:Electronics substrates
Category:Electronics manufacturing
Category:Electronic engineering
Category:Printed circuit board manufacturing
Empiricism
thumb|right|John Locke, a leading philosopher of British empiricism
Empiricism is a theory that states that knowledge comes only or primarily from sensory experience. It is one of several views in epistemology, the study of human knowledge, alongside rationalism and skepticism. Empiricism emphasizes the role of empirical evidence in the formation of ideas over the notion of innate ideas or traditions; empiricists may argue, however, that traditions (or customs) arise from relations among previous sense experiences.Hume, David. Inquiry Concerning Human Understanding, 1748.
Empiricism in the philosophy of science emphasizes evidence, especially as discovered in experiments. It is a fundamental part of the scientific method that all hypotheses and theories must be tested against observations of the natural world rather than resting solely on a priori reasoning, intuition, or revelation.
Empiricism, often used by natural scientists, says that "knowledge is based on experience" and that "knowledge is tentative and probabilistic, subject to continued revision and falsification."Shelley, M. (2006). Empiricism. In F. English (Ed.), Encyclopedia of educational leadership and administration. (pp. 338-339). Thousand Oaks, CA: SAGE Publications, Inc. One of the epistemological tenets is that sensory experience creates knowledge. Empirical research, including experiments and validated measurement tools, guides the scientific method.
Etymology
The English term empirical derives from the Greek word ἐμπειρία, empeiria, which is cognate with and translates to the Latin experientia, from which are derived the word experience and the related experiment. The term was used by the Empiric school of ancient Greek medical practitioners, who rejected the three doctrines of the Dogmatic school, preferring to rely on the observation of "phenomena".Sini, Carlo (2004), "Empirismo", in Gianni Vattimo et al. (eds.), Enciclopedia Garzanti della Filosofia.
History
Background
A central concept in science and the scientific method is that it must be empirically based on the evidence of the senses. Both natural and social sciences use working hypotheses that are testable by observation and experiment. The term semi-empirical is sometimes used to describe theoretical methods that make use of basic axioms, established scientific laws, and previous experimental results in order to engage in reasoned model building and theoretical inquiry.
Philosophical empiricists hold no knowledge to be properly inferred or deduced unless it is derived from one's sense-based experience.Markie, P. (2004), "Rationalism vs. Empiricism" in Edward D. Zalta (ed.), Stanford Encyclopedia of Philosophy, Eprint. This view is commonly contrasted with rationalism, which states that knowledge may be derived from reason independently of the senses. For example, John Locke held that some knowledge (e.g. knowledge of God's existence) could be arrived at through intuition and reasoning alone. Similarly Robert Boyle, a prominent advocate of the experimental method, held that we have innate ideas.Loeb, Luis E. (1981), From Descartes to Hume: Continental Metaphysics and the Development of Modern Philosophy, Ithaca, Cornell University Press.Engfer, Hans-Jürgen (1996), Empirismus versus Rationalismus? Kritik eines philosophiegeschichtlichen Schemas, Padeborn: Schöningh. The main continental rationalists (Descartes, Spinoza, and Leibniz) were also advocates of the empirical "scientific method".Buckle, Stephen (1999), "British Sceptical Realism. A Fresh Look at the British Tradition", European Journal of Philosophy, 7, pp. 1–2.Peter Anstey, "ESP is best", Early Modern Experimental Philosophy, 2010.
Early empiricism
Vaisheshika darshana, founded by the ancient Indian philosopher Kanada, accepted perception and inference as the only two reliable sources of knowledge. This is enumerated in his work Vaiśeṣika Sūtra.
The notion of tabula rasa ("clean slate" or "blank tablet") connotes a view of mind as an originally blank or empty recorder (Locke used the words "white paper") on which experience leaves marks. This denies that humans have innate ideas. The image dates back to Aristotle:
What the mind (nous) thinks must be in it in the same sense as letters are on a tablet (grammateion) which bears no actual writing (grammenon); this is just what happens in the case of the mind. (Aristotle, On the Soul, 3.4.430a1).
Aristotle's explanation of how this was possible was not strictly empiricist in a modern sense, but rather based on his theory of potentiality and actuality, and experience of sense perceptions still requires the help of the active nous. These notions contrasted with Platonic notions of the human mind as an entity that pre-existed somewhere in the heavens, before being sent down to join a body on Earth (see Plato's Phaedo and Apology, as well as others). Aristotle was considered to give a more important position to sense perception than Plato, and commentators in the Middle Ages summarized one of his positions as "nihil in intellectu nisi prius fuerit in sensu" (Latin for "nothing in the intellect without first being in the senses").
This idea was later developed in ancient philosophy by the Stoic school. Stoic epistemology generally emphasized that the mind starts blank, but acquires knowledge as the outside world is impressed upon it. The doxographer Aetius summarizes this view as "When a man is born, the Stoics say, he has the commanding part of his soul like a sheet of paper ready for writing upon."Diels-Kranz 4.11 Later Stoics, such as Sextus of Chaeronea, would continue this idea of empiricism in later Stoic writings as well. As Sextus contends, "For every thought comes from sense-perception or not without sense-perception and either from direct experience or not without direct experience" (Against the Professors, 8.56-8).
thumb|left|A drawing of Ibn Sina (Avicenna) from 1271
During the Middle Ages Aristotle's theory of tabula rasa was developed by Islamic philosophers starting with Al Farabi, developing into an elaborate theory by Avicenna and demonstrated as a thought experiment by Ibn Tufail. For Avicenna (Ibn Sina), for example, the tabula rasa is a pure potentiality that is actualized through education, and knowledge is attained through "empirical familiarity with objects in this world from which one abstracts universal concepts" developed through a "syllogistic method of reasoning in which observations lead to propositional statements which when compounded lead to further abstract concepts". The intellect itself develops from a material intellect (al-'aql al-hayulani), which is a potentiality "that can acquire knowledge to the active intellect (al-'aql al-fa'il), the state of the human intellect in conjunction with the perfect source of knowledge".Sajjad H. Rizvi (2006), Avicenna/Ibn Sina (CA. 980–1037), Internet Encyclopedia of Philosophy So the immaterial "active intellect", separate from any individual person, is still essential for understanding to occur.
In the 12th century CE the Andalusian Muslim philosopher and novelist Abu Bakr Ibn Tufail (known as "Abubacer" or "Ebn Tophail" in the West) included the theory of tabula rasa as a thought experiment in his Arabic philosophical novel, Hayy ibn Yaqdhan in which he depicted the development of the mind of a feral child "from a tabula rasa to that of an adult, in complete isolation from society" on a desert island, through experience alone. The Latin translation of his philosophical novel, entitled Philosophus Autodidactus, published by Edward Pococke the Younger in 1671, had an influence on John Locke's formulation of tabula rasa in An Essay Concerning Human Understanding.G. A. Russell (1994), The 'Arabick' Interest of the Natural Philosophers in Seventeenth-Century England, pp. 224–62, Brill Publishers, ISBN 90-04-09459-8
A similar Islamic theological novel, Theologus Autodidactus, was written by the Arab theologian and physician Ibn al-Nafis in the 13th century. It also dealt with the theme of empiricism through the story of a feral child on a desert island, but departed from its predecessor by depicting the development of the protagonist's mind through contact with society rather than in isolation from society.Dr. Abu Shadi Al-Roubi (1982), "Ibn Al-Nafis as a philosopher", Symposium on Ibn al-Nafis, Second International Conference on Islamic Medicine: Islamic Medical Organization, Kuwait (cf. Ibn al-Nafis As a Philosopher , Encyclopedia of Islamic World)
During the 13th century Thomas Aquinas brought the Aristotelian position that the senses are essential to the mind into scholasticism. Bonaventure (1221–1274), one of Aquinas' chief intellectual opponents, offered some of the strongest arguments in favour of the Platonic idea of the mind.
Renaissance Italy
In the late Renaissance various writers began to question the medieval and classical understanding of knowledge acquisition in a more fundamental way. In political and historical writing Niccolò Machiavelli and his friend Francesco Guicciardini initiated a new realistic style of writing. Machiavelli in particular was scornful of writers on politics who judged everything in comparison to mental ideals and demanded that people should study the "effectual truth" instead. Their contemporary, Leonardo da Vinci (1452–1519), said, "If you find from your own experience that something is a fact and it contradicts what some authority has written down, then you must abandon the authority and base your reasoning on your own findings.""Seeing the Body: The Divergence of Ancient Chinese and Western Medical Illustration", Camillia Matuk, Journal of Biocommunication, Vol. 32, No. 1, 2006.
The decidedly anti-Aristotelian and anti-clerical music theorist Vincenzo Galilei (ca. 1520–1591), father of Galileo and the inventor of monody, made use of the experimental method in successfully solving musical problems: firstly, problems of tuning, such as the relationship of pitch to string tension and mass in stringed instruments and to volume of air in wind instruments; and secondly, problems of composition, through his various suggestions to composers in his Dialogo della musica antica e moderna (Florence, 1581). The Italian word he used for "experiment" was esperienza. He was the essential pedagogical influence upon the young Galileo, his eldest son (cf. Coelho, ed. Music and Science in the Age of Galileo Galilei), arguably one of the most influential empiricists in history. Through his tuning research, Vincenzo found the underlying truth at the heart of the misunderstood myth of 'Pythagoras' hammers' (the square of the numbers concerned yielded those musical intervals, not the actual numbers, as believed). Through this and other discoveries that demonstrated the fallibility of traditional authorities, he developed a radically empirical attitude, passed on to Galileo, which regarded "experience and demonstration" as the sine qua non of valid rational enquiry.
British empiricism
British empiricism, though it was not a term used at the time, derives from the 17th century period of early modern philosophy and modern science. The term became useful in order to describe differences perceived between two of its founders: Francis Bacon, described as an empiricist, and René Descartes, described as a rationalist. Thomas Hobbes and Baruch Spinoza, in the next generation, are often also described as an empiricist and a rationalist respectively. John Locke, George Berkeley, and David Hume were the primary exponents of empiricism in the 18th century Enlightenment, with Locke normally regarded as the founder of empiricism as such.
In response to the early-to-mid-17th century "continental rationalism", John Locke (1632–1704) proposed in An Essay Concerning Human Understanding (1689) a very influential view wherein the only knowledge humans can have is a posteriori, i.e., based upon experience. Locke is famously credited with holding the proposition that the human mind is a tabula rasa, a "blank tablet", in Locke's words "white paper", on which the experiences derived from sense impressions as a person's life proceeds are written. There are two sources of our ideas: sensation and reflection. In both cases, a distinction is made between simple and complex ideas. The former are unanalysable, and are broken down into primary and secondary qualities. Primary qualities are essential for the object in question to be what it is. Without specific primary qualities, an object would not be what it is. For example, an apple is an apple because of the arrangement of its atomic structure. If an apple was structured differently, it would cease to be an apple. Secondary qualities are the sensory information we can perceive from its primary qualities. For example, an apple can be perceived in various colours, sizes, and textures but it is still identified as an apple. Therefore, its primary qualities dictate what the object essentially is, while its secondary qualities define its attributes. Complex ideas combine simple ones, and divide into substances, modes, and relations. According to Locke, our knowledge of things is a perception of ideas that are in accordance or discordance with each other, which is very different from the quest for certainty of Descartes.
thumb|upright|left|Bishop George Berkeley
A generation later, the Irish Anglican bishop, George Berkeley (1685–1753), determined that Locke's view immediately opened a door that would lead to eventual atheism. In response to Locke, he put forth in his Treatise Concerning the Principles of Human Knowledge (1710) an important challenge to empiricism in which things only exist either as a result of their being perceived, or by virtue of the fact that they are an entity doing the perceiving. (For Berkeley, God fills in for humans by doing the perceiving whenever humans are not around to do it.) In his text Alciphron, Berkeley maintained that any order humans may see in nature is the language or handwriting of God.Thornton, Stephen (1987) "Berkeley's Theory of Reality" in The Journal of the Limerick Philosophical Society, UL.ie Berkeley's approach to empiricism would later come to be called subjective idealism.Macmillan Encyclopedia of Philosophy (1969), "George Berkeley", vol. 1, p. 297.
The Scottish philosopher David Hume (1711–1776) responded to Berkeley's criticisms of Locke, as well as other differences between early modern philosophers, and moved empiricism to a new level of skepticism. Hume argued in keeping with the empiricist view that all knowledge derives from sense experience, but he accepted that this has implications not normally acceptable to philosophers. He wrote, for example, "Locke divides all arguments into demonstrative and probable. On this view, we must say that it is only probable that all men must die or that the sun will rise to-morrow, because neither of these can be demonstrated. But to conform our language more to common use, we ought to divide arguments into demonstrations, proofs, and probabilities—by ‘proofs’ meaning arguments from experience that leave no room for doubt or opposition."
Hume divided all of human knowledge into two categories: relations of ideas and matters of fact (see also Kant's analytic-synthetic distinction). Mathematical and logical propositions (e.g. "that the square of the hypotenuse is equal to the sum of the squares of the two sides") are examples of the first, while propositions involving some contingent observation of the world (e.g. "the sun rises in the East") are examples of the second. All of people's "ideas", in turn, are derived from their "impressions". For Hume, an "impression" corresponds roughly with what we call a sensation. To remember or to imagine such impressions is to have an "idea". Ideas are therefore the faint copies of sensations.
thumb|right|David Hume's empiricism led to numerous philosophical schools.
Hume maintained that all knowledge, even the most basic beliefs about the natural world, cannot be conclusively established by reason. Rather, he maintained, our beliefs are more a result of accumulated habits, developed in response to accumulated sense experiences. Among his many arguments Hume also added another important slant to the debate about scientific method — that of the problem of induction. Hume argued that it requires inductive reasoning to arrive at the premises for the principle of inductive reasoning, and therefore the justification for inductive reasoning is a circular argument. Among Hume's conclusions regarding the problem of induction is that there is no certainty that the future will resemble the past. Thus, as a simple instance posed by Hume, we cannot know with certainty by inductive reasoning that the sun will continue to rise in the East, but instead come to expect it to do so because it has repeatedly done so in the past.Hume, D. "An Enquiry Concerning Human Understanding", in Enquiries Concerning the Human Understanding and Concerning the Principles of Morals, 2nd edition, L.A. Selby-Bigge (ed.), Oxford University Press, Oxford, UK, 1902[1748].
Hume concluded that such things as belief in an external world and belief in the existence of the self were not rationally justifiable. According to Hume these beliefs were to be accepted nonetheless because of their profound basis in instinct and custom. Hume's lasting legacy, however, was the doubt that his skeptical arguments cast on the legitimacy of inductive reasoning, allowing many skeptics who followed to cast similar doubt.
Phenomenalism
Most of Hume's followers have disagreed with his conclusion that belief in an external world is rationally unjustifiable, contending that Hume's own principles implicitly contained the rational justification for such a belief, that is, beyond being content to let the issue rest on human instinct, custom and habit.Morick, H. (1980), Challenges to Empiricism, Hackett Publishing, Indianapolis, IN. According to an extreme empiricist theory known as phenomenalism, anticipated by the arguments of both Hume and George Berkeley, a physical object is a kind of construction out of our experiences.Marconi, Diego (2004), "Fenomenismo"', in Gianni Vattimo and Gaetano Chiurazzi (eds.), L'Enciclopedia Garzanti di Filosofia, 3rd edition, Garzanti, Milan, Italy. Phenomenalism is the view that physical objects, properties, events (whatever is physical) are reducible to mental objects, properties, events. Ultimately, only mental objects, properties, events, exist — hence the closely related term subjective idealism. By the phenomenalistic line of thinking, to have a visual experience of a real physical thing is to have an experience of a certain kind of group of experiences. This type of set of experiences possesses a constancy and coherence that is lacking in the set of experiences of which hallucinations, for example, are a part. As John Stuart Mill put it in the mid-19th century, matter is the "permanent possibility of sensation".Mill, J.S., "An Examination of Sir William Rowan Hamilton's Philosophy", in A.J. Ayer and Ramond Winch (eds.), British Empirical Philosophers, Simon and Schuster, New York, NY, 1968.
Mill's empiricism went a significant step beyond Hume in still another respect: in maintaining that induction is necessary for all meaningful knowledge, including mathematics.
Mill's empiricism thus held that knowledge of any kind is not from direct experience but an inductive inference from direct experience.Wilson, Fred (2005), "John Stuart Mill", in Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. The problems other philosophers have had with Mill's position center around the following issues: Firstly, Mill's formulation encounters difficulty when it describes what direct experience is by differentiating only between actual and possible sensations. This misses some key discussion concerning conditions under which such "groups of permanent possibilities of sensation" might exist in the first place. Berkeley put God in that gap; the phenomenalists, including Mill, essentially left the question unanswered. In the end, lacking an acknowledgement of an aspect of "reality" that goes beyond mere "possibilities of sensation", such a position leads to a version of subjective idealism. Questions of how floor beams continue to support a floor while unobserved, how trees continue to grow while unobserved and untouched by human hands, etc., remain unanswered, and perhaps unanswerable in these terms.Macmillan Encyclopedia of Philosophy (1969), "Phenomenalism", vol. 6, p. 131. Secondly, Mill's formulation leaves open the unsettling possibility that the "gap-filling entities are purely possibilities and not actualities at all". Thirdly, Mill's position, by calling mathematics merely another species of inductive inference, misapprehends mathematics. It fails to fully consider the structure and method of mathematical science, the products of which are arrived at through an internally consistent deductive set of procedures which do not, either today or at the time Mill wrote, fall under the agreed meaning of induction.Macmillan Encyclopedia of Philosophy (1969), "Empiricism", vol. 2, p. 503.Macmillan Encyclopedia of Philosophy (1969), "Axiomatic Method", vol. 5, p.188–189, 191ff.
The phenomenalist phase of post-Humean empiricism ended by the 1940s, for by that time it had become obvious that statements about physical things could not be translated into statements about actual and possible sense data.Bolender, John (1998), "Factual Phenomenalism: A Supervenience Theory"', Sorites, no. 9, pp. 16–31. If a physical object statement is to be translatable into a sense-data statement, the former must be at least deducible from the latter. But it came to be realized that there is no finite set of statements about actual and possible sense-data from which we can deduce even a single physical-object statement. Remember that the translating or paraphrasing statement must be couched in terms of normal observers in normal conditions of observation. There is, however, no finite set of statements that are couched in purely sensory terms and can express the satisfaction of the condition of the presence of a normal observer. According to phenomenalism, to say that a normal observer is present is to make the hypothetical statement that were a doctor to inspect the observer, the observer would appear to the doctor to be normal. But, of course, the doctor himself must be a normal observer. If we are to specify this doctor's normality in sensory terms, we must make reference to a second doctor who, when inspecting the sense organs of the first doctor, would himself have to have the sense data a normal observer has when inspecting the sense organs of a subject who is a normal observer. And if we are to specify in sensory terms that the second doctor is a normal observer, we must refer to a third doctor, and so on (also see the third man).Berlin, Isaiah (2004), The Refutation of Phenomenalism, Isaiah Berlin Virtual Library.
Logical empiricism
Logical empiricism (also logical positivism or neopositivism) was an early 20th-century attempt to synthesize the essential ideas of British empiricism (e.g. a strong emphasis on sensory experience as the basis for knowledge) with certain insights from mathematical logic that had been developed by Gottlob Frege and Ludwig Wittgenstein. Some of the key figures in this movement were Otto Neurath, Moritz Schlick and the rest of the Vienna Circle, along with A.J. Ayer, Rudolf Carnap and Hans Reichenbach.
The neopositivists subscribed to a notion of philosophy as the conceptual clarification of the methods, insights and discoveries of the sciences. They saw in the logical symbolism elaborated by Frege (1848–1925) and Bertrand Russell (1872–1970) a powerful instrument that could rationally reconstruct all scientific discourse into an ideal, logically perfect language, free of the ambiguities and deformations of natural language, which they believed gave rise to metaphysical pseudoproblems and other conceptual confusions. By combining Frege's thesis that all mathematical truths are logical with the early Wittgenstein's idea that all logical truths are mere linguistic tautologies, they arrived at a twofold classification of all propositions: the analytic (a priori) and the synthetic (a posteriori).Achinstein, Peter, and Barker, Stephen F. (1969), The Legacy of Logical Positivism: Studies in the Philosophy of Science, Johns Hopkins University Press, Baltimore, MD. On this basis, they formulated a strong principle of demarcation between sentences that have sense and those that do not: the so-called verification principle. Any sentence that is not purely logical, or is unverifiable, is devoid of meaning. As a result, most metaphysical, ethical, aesthetic and other traditional philosophical problems came to be considered pseudoproblems.Barone, Francesco (1986), Il neopositivismo logico, Laterza, Roma Bari.
In the extreme empiricism of the neopositivists—at least before the 1930s—any genuinely synthetic assertion must be reducible to an ultimate assertion (or set of ultimate assertions) that expresses direct observations or perceptions. In later years, Carnap and Neurath abandoned this sort of phenomenalism in favor of a rational reconstruction of knowledge into the language of an objective spatio-temporal physics. That is, instead of translating sentences about physical objects into sense-data, such sentences were to be translated into so-called protocol sentences, for example, "X at location Y and at time T observes such and such."Rescher, Nicholas (1985), The Heritage of Logical Positivism, University Press of America, Lanham, MD. The central theses of logical positivism (verificationism, the analytic-synthetic distinction, reductionism, etc.) came under sharp attack after World War II by thinkers such as Nelson Goodman, W.V. Quine, Hilary Putnam, Karl Popper, and Richard Rorty. By the late 1960s, it had become evident to most philosophers that the movement had pretty much run its course, though its influence is still significant among contemporary analytic philosophers such as Michael Dummett and other anti-realists.
Pragmatism
In the late 19th and early 20th century several forms of pragmatic philosophy arose. The ideas of pragmatism, in its various forms, developed mainly from discussions between Charles Sanders Peirce and William James when both men were at Harvard in the 1870s. James popularized the term "pragmatism", giving Peirce full credit for its patrimony, but Peirce later demurred from the tangents that the movement was taking, and redubbed what he regarded as the original idea with the name of "pragmaticism". Along with its pragmatic theory of truth, this perspective integrates the basic insights of empirical (experience-based) and rational (concept-based) thinking.
thumb|upright|Charles Sanders Peirce
Charles Peirce (1839–1914) was highly influential in laying the groundwork for today's empirical scientific method. Although Peirce severely criticized many elements of Descartes' peculiar brand of rationalism, he did not reject rationalism outright. Indeed, he concurred with the main ideas of rationalism, most importantly the idea that rational concepts can be meaningful and the idea that rational concepts necessarily go beyond the data given by empirical observation. In later years he even emphasized the concept-driven side of the then ongoing debate between strict empiricism and strict rationalism, in part to counterbalance the excesses to which some of his cohorts had taken pragmatism under the "data-driven" strict-empiricist view.
Among Peirce's major contributions was to place inductive reasoning and deductive reasoning in a complementary rather than competitive mode, the latter of which had been the primary trend among the educated since David Hume wrote a century before. To this, Peirce added the concept of abductive reasoning. The combined three forms of reasoning serve as a primary conceptual foundation for the empirically based scientific method today. Peirce's approach "presupposes that (1) the objects of knowledge are real things, (2) the characters (properties) of real things do not depend on our perceptions of them, and (3) everyone who has sufficient experience of real things will agree on the truth about them. According to Peirce's doctrine of fallibilism, the conclusions of science are always tentative. The rationality of the scientific method does not depend on the certainty of its conclusions, but on its self-corrective character: by continued application of the method science can detect and correct its own mistakes, and thus eventually lead to the discovery of truth".Ward, Teddy (n.d.), "Empiricism", Eprint.
thumb|upright|left|William James
In his Harvard "Lectures on Pragmatism" (1903), Peirce enumerated what he called the "three cotary propositions of pragmatism" (L: cos, cotis whetstone), saying that they "put the edge on the maxim of pragmatism". First among these he listed the peripatetic-thomist observation mentioned above, but he further observed that this link between sensory perception and intellectual conception is a two-way street. That is, it can be taken to say that whatever we find in the intellect is also incipiently in the senses. Hence, if theories are theory-laden then so are the senses, and perception itself can be seen as a species of abductive inference, its difference being that it is beyond control and hence beyond critique – in a word, incorrigible. This in no way conflicts with the fallibility and revisability of scientific concepts, since it is only the immediate percept in its unique individuality or "thisness" – what the Scholastics called its haecceity – that stands beyond control and correction. Scientific concepts, on the other hand, are general in nature, and transient sensations do in another sense find correction within them. This notion of perception as abduction has received periodic revivals in artificial intelligence and cognitive science research, most recently for instance with the work of Irvin Rock on indirect perception.Rock, Irvin (1983), The Logic of Perception, MIT Press, Cambridge, MA.Rock, Irvin, (1997) Indirect Perception, MIT Press, Cambridge, MA.
Around the beginning of the 20th century, William James (1842–1910) coined the term "radical empiricism" to describe an offshoot of his form of pragmatism, which he argued could be dealt with separately from his pragmatism – though in fact the two concepts are intertwined in James's published lectures. James maintained that the empirically observed "directly apprehended universe needs ... no extraneous trans-empirical connective support",James, William (1911), The Meaning of Truth. by which he meant to rule out the perception that there can be any value added by seeking supernatural explanations for natural phenomena. James' "radical empiricism" is thus not radical in the context of the term "empiricism", but is instead fairly consistent with the modern use of the term "empirical". His method of argument in arriving at this view, however, still readily encounters debate within philosophy even today.
John Dewey (1859–1952) modified James' pragmatism to form a theory known as instrumentalism. The role of sense experience in Dewey's theory is crucial, in that he saw experience as a unified totality of things through which everything else is interrelated. Dewey's basic thought, in accordance with empiricism, was that reality is determined by past experience. Therefore, humans adapt their past experiences of things to perform experiments upon and test the pragmatic values of such experience. The value of such experience is measured experientially and scientifically, and the results of such tests generate ideas that serve as instruments for future experimentation,Dewey, John (1906), Studies in Logical Theory. in physical sciences as in ethics. Thus, ideas in Dewey's system retain their empiricist flavour in that they are only known a posteriori.
See also
Empirical evidence
Empirical formula
Empirical relationship
Empirical research
History of scientific method
Inductive reasoning
Inquiry
Logical positivism
Natural philosophy
Naturalism
Objectivity
Psychological nativism
Quasi-empirical method
Radical empiricism
Feminist empiricism
Sensualism
Sextus Empiricus
Two Dogmas of Empiricism
Verificationism
Endnotes
References
Achinstein, Peter, and Barker, Stephen F. (1969), The Legacy of Logical Positivism: Studies in the Philosophy of Science, Johns Hopkins University Press, Baltimore, MD.
Aristotle, "On the Soul" (De Anima), W. S. Hett (trans.), pp. 1–203 in Aristotle, Volume 8, Loeb Classical Library, William Heinemann, London, UK, 1936.
Aristotle, Posterior Analytics.
Barone, Francesco (1986), Il neopositivismo logico, Laterza, Roma Bari
Berlin, Isaiah (2004), The Refutation of Phenomenalism, Isaiah Berlin Virtual Library.
Bolender, John (1998), "Factual Phenomenalism: A Supervenience Theory", Sorites, no. 9, pp. 16–31.
Chisolm, R. (1948), "The Problem of Empiricism", Journal of Philosophy 45, 512–517.
Cushan, Anna-Marie (1983/2014). Investigation into Facts and Values: Groundwork for a theory of moral conflict resolution. [Thesis, Melbourne University], Ondwelle Publications (online): Melbourne.
Dewey, John (1906), Studies in Logical Theory.
Encyclopædia Britannica, "Empiricism", vol. 4, p. 480.
Hume, D., A Treatise of Human Nature, L.A. Selby-Bigge (ed.), Oxford University Press, London, UK, 1975.
Hume, David. "An Enquiry Concerning Human Understanding", in Enquiries Concerning the Human Understanding and Concerning the Principles of Morals, 2nd edition, L.A. Selby-Bigge (ed.), Oxford University Press, Oxford, UK, 1902. Gutenberg press full-text
James, William (1911), The Meaning of Truth.
Keeton, Morris T. (1962), "Empiricism", pp. 89–90 in Dagobert D. Runes (ed.), Dictionary of Philosophy, Littlefield, Adams, and Company, Totowa, NJ.
Leavitt, Fred (2015), Dancing with Absurdity: Your Most Cherished Beliefs (and All Your Others) are Probably Wrong, Peter Lang Publishers.
Leftow, Brian (ed., 2006), Aquinas: Summa Theologiae, Questions on God, pp. vii et seq.
Macmillan Encyclopedia of Philosophy (1969), "Development of Aristotle's Thought", vol. 1, p. 153ff.
Macmillan Encyclopedia of Philosophy (1969), "George Berkeley", vol. 1, p. 297.
Macmillan Encyclopedia of Philosophy (1969), "Empiricism", vol. 2, p. 503.
Macmillan Encyclopedia of Philosophy (1969), "Mathematics, Foundations of", vol. 5, p, 188–189.
Macmillan Encyclopedia of Philosophy (1969), "Axiomatic Method", vol. 5, p. 192ff.
Macmillan Encyclopedia of Philosophy (1969), "Epistemological Discussion", subsections on "A Priori Knowledge" and "Axioms".
Macmillan Encyclopedia of Philosophy (1969), "Phenomenalism", vol. 6, p. 131.
Macmillan Encyclopedia of Philosophy (1969), "Thomas Aquinas", subsection on "Theory of Knowledge", vol. 8, pp. 106–107.
Marconi, Diego (2004), "Fenomenismo", in Gianni Vattimo and Gaetano Chiurazzi (eds.), L'Enciclopedia Garzanti di Filosofia, 3rd edition, Garzanti, Milan, Italy.
Markie, P. (2004), "Rationalism vs. Empiricism", in Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy, Eprint.
Maxwell, Nicholas (1998), The Comprehensibility of the Universe: A New Conception of Science, Oxford University Press, Oxford.
Mill, J.S., "An Examination of Sir William Rowan Hamilton's Philosophy", in A.J. Ayer and Ramond Winch (eds.), British Empirical Philosophers, Simon and Schuster, New York, NY, 1968.
Morick, H. (1980), Challenges to Empiricism, Hackett Publishing, Indianapolis, IN.
Peirce, C.S., "Lectures on Pragmatism", Cambridge, MA, March 26 – May 17, 1903. Reprinted in part, Collected Papers, CP 5.14–212. Published in full with editor's introduction and commentary, Patricia Ann Turisi (ed.), Pragmatism as a Principle and Method of Right Thinking: The 1903 Harvard "Lectures on Pragmatism", State University of New York Press, Albany, NY, 1997. Reprinted, pp. 133–241, Peirce Edition Project (eds.), The Essential Peirce, Selected Philosophical Writings, Volume 2 (1893–1913), Indiana University Press, Bloomington, IN, 1998.
Rescher, Nicholas (1985), The Heritage of Logical Positivism, University Press of America, Lanham, MD.
Rock, Irvin (1983), The Logic of Perception, MIT Press, Cambridge, MA.
Rock, Irvin, (1997) Indirect Perception, MIT Press, Cambridge, MA.
Runes, D.D. (ed., 1962), Dictionary of Philosophy, Littlefield, Adams, and Company, Totowa, NJ.
Sini, Carlo (2004), "Empirismo", in Gianni Vattimo et al. (eds.), Enciclopedia Garzanti della Filosofia.
Solomon, Robert C., and Higgins, Kathleen M. (1996), A Short History of Philosophy, pp. 68–74.
Sorabji, Richard (1972), Aristotle on Memory.
Thornton, Stephen (1987), Berkeley's Theory of Reality, Eprint
Vanzo, Alberto (2014), "From Empirics to Empiricists", Intellectual History Review, 2014.
Ward, Teddy (n.d.), "Empiricism", Eprint.
Wilson, Fred (2005), "John Stuart Mill", in Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy, Eprint.
Category:Philosophical movements
Category:Epistemological theories
Category:Justification
Category:Philosophical methodology
Category:Internalism and externalism
Category:Philosophy of science | 10,174 | 2017-01 |
The Blitz | The Blitz, from the German word Blitzkrieg meaning 'lightning war', was the name used by the British press to describe the heavy air raids carried out over Britain in 1940 and 1941, during the Second World War.
The German air offensive was concentrated on the direct bombing of industrial targets and civilian centres. It began with heavy raids on London during the latter phase of the battle for air superiority over the United Kingdom, which became known as the Battle of Britain.
By September 1940—two months into the battle—faulty German intelligence suggested that the Royal Air Force (RAF) was close to defeat at the hands of the Luftwaffe. The German air fleets (Luftflotten) were ordered to attack London, thereby drawing up the last remnants of RAF Fighter Command into a battle of annihilation.Price 1990, p. 12.Ray 2009, pp. 104–105. Adolf Hitler and the commander-in-chief of the Luftwaffe (German Air Force), Reichsmarschall Hermann Göring, sanctioned the change in emphasis on 6 September 1940.
From 7 September 1940, one year into the war, London was systematically bombed by the Luftwaffe for 56 out of the following 57 days and nights.Stansky 2007, p. 28. On 15 September 1940, a large daylight attack against London was repulsed with significant German losses. Thereafter, the Luftwaffe gradually decreased daylight operations in favour of nocturnal attacks, to avoid RAF defences. The Blitz became fundamentally a night bombing campaign after October 1940, when it had become clear the Luftwaffe had failed to meet the preconditions for a 1940 launch of Operation Sea Lion, the provisionally planned German invasion of Britain.
Ports and industrial centres outside London were also attacked. The main Atlantic sea port of Liverpool was bombed. The North Sea port of Hull, a convenient and easily found target or secondary target for bombers unable to locate their primary targets, was subjected to raids in the Hull Blitz during the war. Other ports including Bristol, Cardiff, Portsmouth, Plymouth, Southampton and Swansea were also bombed, as were the industrial cities of Birmingham, Belfast, Coventry, Glasgow, Manchester and Sheffield. More than one million London houses were destroyed or damaged and more than 40,000 civilians were killed, almost half of them in the capital.Richards 1954, p. 217.
By May 1941, the threat of an invasion of Britain had ended, and Hitler's attention turned to Operation Barbarossa, the invasion of the Soviet Union. The bombing failed to demoralise the British into surrender or significantly damage the war economy.Cooper 1981, p. 174. The eight months of bombing never seriously hampered British production and the war industries continued to operate and expand.Cooper 1981, p. 173. The German offensive's greatest effect was forcing the dispersal of aircraft production and parts.Hooton 1997, p. 38. British wartime studies concluded that cities generally took 10 to 15 days to recover when hit severely but exceptions like Birmingham took three months.
The German air offensive failed for several reasons. In particular, the Luftwaffe High Command (Oberkommando der Luftwaffe, OKL) did not develop a methodical strategy for destroying British war industry: German effort was diverted and dispersed against several sets of industries instead of maintaining pressure on any of them. Discussions in the OKL revolved around tactics rather than strategy. Poor intelligence on British industry and economic efficiency was also a factor.Overy 1980, pp. 34, 36.
Background
The Luftwaffe and strategic bombing
In the 1920s and 1930s, air power theorists Giulio Douhet and Billy Mitchell espoused the idea that air forces could win wars by themselves, without a need for land and sea fighting.Cox and Gray 2002, p. xvii. It was thought there was no defence against air attack, particularly at night. Enemy industry, seats of government, factories and communications could be destroyed, effectively taking away the enemy's means to resist. It was also thought the bombing of residential centres would cause a collapse of civilian will, which might lead to the collapse of production and civil life. Democracies, where the populace was allowed to show overt disapproval of the ruling government, were thought particularly vulnerable. This thinking was prevalent in both the RAF and what was then known as the United States Army Air Corps (USAAC) between the two world wars. RAF Bomber Command in particular would attempt to achieve victory through the destruction of civilian will, communications and industry.Montgomery-Hyde 1976, p. 137.
Within the Luftwaffe, there was a more muted view of strategic bombing. The OKL did not oppose the strategic bombardment of enemy industries or cities, and believed it could greatly affect the balance of power on the battlefield in Germany's favour by disrupting production and damaging civilian morale, but it did not believe that air power alone could be decisive. Contrary to popular belief, the Luftwaffe did not have a systematic policy of what became known as "terror bombing". Evidence suggests that the Luftwaffe did not adopt an official bombing policy in which civilians became the primary target until 1942.Corum 1997, p. 7.
The vital industries and transport centres that would be targeted for shutdown were valid military targets. It could be claimed civilians were not to be targeted directly, but the breakdown of production would affect their morale and will to fight. German legal scholars of the 1930s carefully worked out guidelines for what type of bombing was permissible under international law. While direct attacks against civilians were ruled out as "terror bombing", the concept of attacking vital war industries—and probable heavy civilian casualties and breakdown of civilian morale—was ruled as acceptable.Corum 1997, p. 240
thumb|right|upright=.5|Walter Wever
Throughout the National Socialist era, until 1939, debate and discussion raged within German military journals over the role of strategic bombardment. Some argued along the lines of the British and Americans.Corum 1997, pp. 238–241. Walter Wever—the first Chief of the General Staff—championed strategic bombing and the building of appropriate aircraft for that purpose, although he emphasised the importance of aviation in operational and tactical terms. Wever outlined five key points to air strategy:
To destroy the enemy air force by bombing its bases and aircraft factories, and defeating enemy air forces attacking German targets.
To prevent the movement of large enemy ground forces to the decisive areas by destroying railways and roads, particularly bridges and tunnels, which are indispensable for the movement and supply of forces.
To support the operations of the army formations, independent of railways, i.e., armoured forces and motorised forces, by impeding the enemy advance and participating directly in ground operations.
To support naval operations by attacking naval bases, protecting Germany's naval bases and participating directly in naval battles.
To paralyse the enemy armed forces by stopping production in the armaments factories.Corum 1997, p. 138.
Wever argued that the Luftwaffe General Staff should not be solely educated in tactical and operational matters. He argued they should be educated in grand strategy, war economics, armament production, and the mentality of potential opponents (also known as mirror imaging). Wever's vision was not realised; the General Staff studies in those subjects fell by the wayside, and the Air Academies focused on tactics, technology, and operational planning, rather than on independent strategic air offensives.Corum 1997, p. 252.
In 1936, Wever was killed in an air crash. The failure to implement his vision for the new Luftwaffe was largely attributable to his immediate successors. His successors as Chief of Staff, the ex-Army officers Albert Kesselring and Hans-Jürgen Stumpff, are usually blamed for abandoning strategic planning and focusing on close air support. However, it would seem the two most prominent enthusiasts for the focus on ground-support operations (direct or indirect) were actually Hugo Sperrle and Hans Jeschonnek. Both were long-standing professional airmen who had been involved in aviation since the beginning of their careers. The Luftwaffe was not pressed into ground support operations because of pressure from the army, or because it was led by ex-army personnel. It was instead a mission that suited the Luftwaffe's existing approach to warfare: a culture of joint inter-service operations, rather than independent strategic air campaigns.
Hitler, Göring and air power
thumb|Hitler and Göring, March 1938
Adolf Hitler failed to pay as much attention to bombing the enemy as he did to protection from enemy bombing, although he had promoted the development of a bomber force in the 1930s and understood that it was possible to use bombers for major strategic purposes. He told the OKL in 1939 that ruthless employment of the Luftwaffe against the heart of the British will to resist could and would follow when the moment was right; however, he quickly developed a lively scepticism toward strategic bombing, confirmed by the results of the Blitz. He frequently complained of the Luftwaffe's inability to damage industries sufficiently, saying, "The munitions industry cannot be interfered with effectively by air raids ... usually the prescribed targets are not hit".Overy July 1980, p. 410.
While the war was being planned Hitler never insisted upon the Luftwaffe planning a strategic bombing campaign, and did not even give ample warning to the air staff that war with Britain or even Russia was an imminent possibility. The amount of firm operational and tactical preparation for a bombing campaign was minimal, largely because of the failure by Hitler as supreme commander to insist upon such a commitment.
Ultimately, Hitler was trapped within his own vision of bombing as a terror weapon, formed in the 1930s when he threatened smaller nations into accepting German rule rather than submit to air bombardment. This fact had important implications. It showed the extent to which Hitler personally mistook Allied strategy for one of morale breaking instead of one of economic warfare, with the collapse of morale as an additional bonus.Overy July 1980, p. 411. Hitler was much more attracted to the political aspects of bombing. As the mere threat of it had produced diplomatic results in the 1930s, he expected that the threat of German retaliation would persuade the Allies to adopt a policy of moderation and not to begin a policy of unrestricted bombing. His hope was — for reasons of political prestige within Germany itself — that the German population would be protected from the Allied bombings. When this proved impossible, he began to fear that popular feeling would turn against his regime, and he redoubled efforts to mount a similar "terror offensive" against Britain in order to produce a stalemate in which both sides would hesitate to use bombing at all.
A major problem in the managing of the Luftwaffe was Hermann Göring. Hitler believed the Luftwaffe was "the most effective strategic weapon", and in reply to repeated requests from the Kriegsmarine for control over aircraft insisted, "We should never have been able to hold our own in this war if we had not had an undivided Luftwaffe".Overy July 1980, p. 407. Such principles made it much harder to integrate the air force into the overall strategy and produced in Göring a jealous and damaging defence of his "empire" while removing Hitler voluntarily from the systematic direction of the Luftwaffe at either the strategic or operational level. When Hitler tried to intervene more in the running of the air force later in the war, he was faced with a political conflict of his own making between himself and Göring, which was not fully resolved until the war was almost over. In 1940 and 1941, Göring's refusal to cooperate with the Kriegsmarine denied the entire Wehrmacht military forces of the Reich the chance to strangle British sea communications, which might have had strategic or decisive effect in the war against the British Empire.Corum 1997, p. 280.
The deliberate separation of the Luftwaffe from the rest of the military structure encouraged the emergence of a major "communications gap" between Hitler and the Luftwaffe, which other factors helped to exacerbate. For one thing, Göring's fear of Hitler led him to falsify or misrepresent the available information, presenting an uncritical and over-optimistic interpretation of air strength. When Göring decided against continuing Wever's original heavy bomber programme in 1937, the Reichsmarschall's own explanation was that Hitler wanted to know only how many bombers there were, not how many engines each had. In July 1939, Göring arranged a display of the Luftwaffe's most advanced equipment at Rechlin, to give the impression the air force was more prepared for a strategic air war than was actually the case.Overy July 1980, p. 408.
Battle of Britain
thumb|RAF pilots with one of their Hawker Hurricanes, October 1940
Although not specifically prepared to conduct independent strategic air operations against an opponent, the Luftwaffe was expected to do so over Britain. From July until September 1940 the Luftwaffe attacked RAF Fighter Command to gain air superiority as a prelude to invasion. This involved the bombing of English Channel convoys, ports, and RAF airfields and supporting industries. Destroying RAF Fighter Command would allow the Germans to gain control of the skies over the invasion area. It was supposed that Bomber Command, RAF Coastal Command and the Royal Navy could not operate under conditions of German air superiority.McKee 1989, pp. 40–41.
The Luftwaffe's poor intelligence meant that their aircraft were not always able to locate their targets, and thus attacks on factories and airfields failed to achieve the desired results. British fighter aircraft production continued at a rate surpassing Germany's by 2 to 1.Faber 1977, p. 203. The British produced 10,000 aircraft in 1940, in comparison to Germany's 8,000.McKee 1989, p. 294. The replacement of pilots and aircrew was more difficult. Both the RAF and Luftwaffe struggled to replace manpower losses, though the Germans had larger reserves of trained aircrew. The circumstances affected the Germans more than the British. Operating over home territory, British flyers could fly again if they survived being shot down. German crews, even if they survived, faced capture. Moreover, bombers had four to five crewmen on board, representing a greater loss of manpower.Faber 1977, pp. 202–203. On 7 September, the Germans shifted away from the destruction of the RAF's supporting structures. German intelligence suggested Fighter Command was weakening, and an attack on London would force it into a final battle of annihilation while compelling the British Government to surrender.Price 1990, p. 12; McKee 1989, p. 225.
The decision to change strategy is sometimes claimed as a major mistake by the Oberkommando der Luftwaffe (OKL). It is argued that persisting with attacks on RAF airfields might have won air superiority for the Luftwaffe.Wood and Dempster 2003, pp. 212–213. Others argue that the Luftwaffe made little impression on Fighter Command in the last week of August and first week of September and that the shift in strategy was not decisive.Bungay 2000, pp. 368–369. It has also been argued that it was doubtful the Luftwaffe could have won air superiority before the "weather window" began to deteriorate in October.Corum 1997, p. 283. It was also possible, if RAF losses became severe, that they could pull out to the north, wait for the German invasion, then redeploy southward again. Other historians argue that the outcome of the air battle was irrelevant; the massive numerical superiority of British naval forces and the inherent weakness of the Kriegsmarine would have made the projected German invasion, Unternehmen Seelöwe (Operation Sea Lion), a disaster with or without German air superiority.Corum 1997, pp. 283–284; Murray 1983, pp. 45–46.
Change in strategy
Regardless of the ability of the Luftwaffe to win air superiority, Adolf Hitler was frustrated that it was not happening quickly enough. With no sign of the RAF weakening, and Luftwaffe air fleets (Luftflotten) taking punishing losses, the OKL was keen for a change in strategy. To reduce losses further, a shift to night attacks was also favoured, giving the bombers greater protection under cover of darkness.Ray 1996, p. 101.
It was decided to begin by bombing Britain's industrial cities in daylight. The main effort of the bombing operations was directed against the city of London. The first major raid in this regard took place on 7 September. On 15 September, a date now known as Battle of Britain Day, a large-scale raid was launched in daylight, but suffered significant loss for no lasting gain. Although there were a few large air battles fought in daylight later in the month and into October, the Luftwaffe switched its main effort to night attacks in order to reduce losses. This became official policy on 7 October. The air campaign soon got underway against London and other British cities. However, the Luftwaffe faced limitations. Its aircraft—Dornier Do 17, Junkers Ju 88, and Heinkel He 111s—were capable of carrying out strategic missions,Corum 1997, p. 282. but were incapable of doing greater damage because of bomb-load limitations.Overy 1980, p. 35. The Luftwaffe's decision in the interwar period to concentrate on medium bombers can be attributed to several reasons: Hitler did not intend or foresee a war with Britain in 1939; the OKL believed a medium bomber could carry out strategic missions just as well as a heavy bomber force; and Germany did not possess the resources or technical ability to produce four-engined bombers before the war.Murray 1983, pp. 10–11.
Although it had equipment capable of doing serious damage, the problem for the Luftwaffe was its unclear strategy and poor intelligence. OKL had not been informed that Britain was to be considered a potential opponent until early 1938. It had no time to gather reliable intelligence on Britain's industries. Moreover, OKL could not settle on an appropriate strategy. German planners had to decide whether the Luftwaffe should deliver the weight of its attacks against a specific segment of British industry such as aircraft factories, or against a system of interrelated industries such as Britain's import and distribution network, or even in a blow aimed at breaking the morale of the British population.Murray 1983, p. 54; McKee 1989, p. 255. The Luftwaffe's strategy became increasingly aimless over the winter of 1940–1941.Overy 1980, pp. 34, 37. Disputes among the OKL staff revolved more around tactics than strategy.Hooton 1997, p. 38; Hooton 2010, p. 90. This lack of a coherent strategy condemned the offensive over Britain to failure before it began.Bungay 2000, p. 379.
In an operational capacity, limitations in weapons technology and quick British reactions were making it more difficult to achieve strategic effect. Attacking ports, shipping and imports as well as disrupting rail traffic in the surrounding areas, especially the distribution of coal, an important fuel in all industrial economies of the Second World War, would net a positive result. However, the use of delayed-action bombs, while initially very effective, gradually had less impact, partly because they failed to detonate. Moreover, the British had anticipated the change in strategy and dispersed their production facilities, making them less vulnerable to a concentrated attack. Regional commissioners were given plenipotentiary powers to restore communications and organise the distribution of supplies to keep the war economy moving.
Civilian defensive measures
Prewar preparations and fears
Experts saw London, home to nine million people, one-fifth of Britain's population, as both a top target for air attack and difficult to defend because of its size.Titmuss 1950, p. 11. As aircraft technology improved in the 1930s, most believed that "the bomber will always get through", and estimates of casualties from an air war grew.
Based on experience with German strategic bombing during World War I against the United Kingdom, the British government estimated after the war that 50 casualties, with about one third killed, would result for every tonne of bombs dropped on London. The estimate of tonnes of bombs an enemy could drop per day grew as aircraft technology advanced, from 75 in 1922, to 150 in 1934, to 644 in 1937. That year the Committee on Imperial Defence estimated that an attack of 60 days would result in 600,000 dead and 1,200,000 wounded. News reports of the Spanish Civil War, such as the bombing of Barcelona, supported the 50-casualties-per-tonne estimate. By 1938 experts generally expected that Germany would attempt to drop as much as 3,500 tonnes in the first 24 hours of war and average 700 tonnes a day for several weeks. In addition to high explosive and incendiary bombs the enemy would possibly use poison gas and even bacteriological warfare, all with a high degree of accuracy.Titmuss 1950, pp. 4–6, 9, 12–13. In 1939 military theorist Basil Liddell-Hart predicted that 250,000 deaths and injuries in Britain could occur in the first week of war.
thumb|Barrage balloons flying over central London
Civilians were aware of the deadly power of aerial attacks through newsreels of Barcelona, Guernica and Shanghai. Many popular works of fiction during the 1920s and 1930s portrayed aerial bombing, such as H. G. Wells' novel The Shape of Things to Come and its 1936 film adaptation, and others such as The Air War of 1936 and The Poison War. Harold Macmillan wrote in 1956 that he and others around him "thought of air warfare in 1938 rather as people think of nuclear war today".Mackay 2002, pp. 39–41.
In addition to the dead and wounded, government leaders feared mass psychological trauma from aerial attack and a resulting collapse of civil society. A committee of psychiatrists reported to the government in 1938 that there would be three times as many mental as physical casualties from aerial bombing, implying three to four million psychiatric patients.Titmuss 1950, p. 20. Winston Churchill told Parliament in 1934, "We must expect that, under the pressure of continuous attack upon London, at least three or four million people would be driven out into the open country around the metropolis." Panicked reactions during the Munich crisis, such as the migration by 150,000 to Wales, contributed to fear of societal chaos.Titmuss 1950, p. 31.
The government planned to voluntarily evacuate four million people—mostly women and children—from urban areas, including 1.4 million from London. It expected about 90% of evacuees to stay in private homes, and conducted an extensive survey to determine available space. Detailed preparations for transporting them were developed. A trial blackout was held on 10 August 1939, and when Germany invaded Poland on 1 September a blackout began at sunset. Lights would not be allowed after dark for almost six years,Titmuss 1950, p. 34-42, 90, 97. and the blackout became by far the most unpopular aspect of the war for civilians, more than rationing. The relocation of the government and the civil service was also planned, but would only have occurred if necessary so as not to damage civilian morale.Mackay 2002, pp. 51, 106.
Much civil-defence preparation in the form of shelters was left in the hands of local authorities, and many areas such as Birmingham, Coventry, Belfast and the East End of London did not have enough shelters.Field 2002, p. 13. The Phoney War, however, and the unexpected delay of civilian bombing permitted the shelter programme to finish in June 1940.Mackay 2002, p. 35. The programme favoured backyard Anderson shelters and small brick surface shelters; many of the latter were soon abandoned in 1940 as unsafe. In addition, authorities expected that the raids would be brief and during the day. Few predicted that attacks by night would force Londoners to sleep in shelters.Field 2002, p. 14.
Communal shelters
thumb|left|Aldwych tube station being used as a bomb shelter in 1940.
Very deeply buried shelters provided the most protection against a direct hit. The government did not build them for large populations before the war because of cost, time to build, and fears that their very safety would cause occupants to refuse to leave to return to work, or that anti-war sentiment would develop in large groups. The government saw the Communist Party's leading role in advocating for building deep shelters as an attempt to damage civilian morale, especially after the Molotov-Ribbentrop Pact of August 1939.Mackay 2002, p. 34.
The most important existing communal shelters were the London Underground stations. Although many civilians had used them as such during the First World War, the government in 1939 refused to allow the stations to be used as shelters, so as not to interfere with commuter and troop travel, and because of fears that occupants might refuse to leave. Underground officials were ordered to lock station entrances during raids, but by the second week of heavy bombing the government relented and ordered the stations to be opened. Each day orderly lines of people queued until 4 pm, when they were allowed to enter the stations. In mid-September 1940 about 150,000 a night slept in the Underground, although by the winter and spring months the numbers had declined to 100,000 or less. Noises of battle were muffled and sleep was easier in the deepest stations, but many were killed from direct hits on several stations.Field 2002, p. 15.
thumb|160px|A young woman plays a gramophone in an air raid shelter in north London during 1940
Communal shelters never housed more than one seventh of Greater London residents, however.Titmuss 1950, pp. 342–343. Peak use of the Underground as shelter was 177,000 on 27 September 1940, and a November 1940 census of London found that about 4% of residents used the Tube and other large shelters; 9% in public surface shelters; and 27% in private home shelters, implying that the remaining 60% of the city likely stayed at home.Field 2002, p. 44.Harrison 1990, p. 112. The government distributed Anderson shelters until 1941 and that year began distributing the Morrison shelter, which could be used inside homes.Mackay 2002, p. 190.
Public demand caused the government in October 1940 to build new deep shelters within the Underground to hold 80,000 people, but they were not completed until the period of heaviest bombing had passed.Mackay 2002, pp. 189–190. By the end of 1940 significant improvements had been made in the Underground and in many other large shelters. Authorities provided stoves and bathrooms, and canteen trains provided food. Tickets were issued for bunks in large shelters to reduce the amount of time spent queuing. Committees quickly formed within shelters as informal governments, and organisations such as the British Red Cross and the Salvation Army worked to improve conditions. Entertainment included concerts, films, plays and books from local libraries.
Although only a small number of Londoners used the mass shelters, as journalists, celebrities and foreigners visited they became part of the national debate on societal and class divisions. Most residents found that such divisions continued within the shelters, and many fights and arguments occurred regarding noise, space or other issues. Contrary to prewar fears of anti-Semitic violence in the East End, one observer found that the "Cockney and the Jew [worked] together, against the Indian."Field 2002, pp. 15–18.
No collapse of morale
thumb|upright|160px|The original 1939 Keep Calm and Carry On poster
Although the intensity of the bombing was not as great as prewar expectations, so an equal comparison is impossible, no psychiatric crisis occurred because of the Blitz, even during the period of greatest bombing in September 1940. An American witness wrote, "By every test and measure I am able to apply, these people are staunch to the bone and won't quit ... the British are stronger and in a better position than they were at its beginning". People referred to raids as if they were weather, stating that a day was "very blitzy".Mackay 2002, pp. 75, 261.
According to the psychoanalysts Anna Freud and Edward Glover, London civilians surprisingly did not suffer from widespread shell shock, unlike the soldiers of the Dunkirk evacuation. Their assessment proved correct, and the special network of psychiatric clinics opened to receive mental casualties of the attacks closed for lack of need. Although the stress of the war resulted in many anxiety attacks, eating disorders, fatigue, weeping, miscarriages, and other physical and mental ailments, society did not collapse. The number of suicides and incidents of drunkenness declined, and London recorded only about two cases of "bomb neuroses" per week in the first three months of bombing. Many civilians found that the best way to retain mental stability was to be with family, and after the first few weeks of bombing avoidance of the evacuation programmes grew.Field 2002, pp. 15–20.Titmuss 1950, pp. 340, 349.Mackay 2002, pp. 80–81.
thumb|left|Office workers make their way to work through debris after a heavy air raid.
The cheerful crowds visiting bomb sites were so large they interfered with rescue work, pub visits increased in number (beer was never rationed), and 13,000 attended cricket at Lord's. People left shelters when told to, instead of refusing to leave, although many housewives reportedly enjoyed the break from housework. Some people even told government surveyors that they enjoyed air raids if they occurred occasionally, perhaps once a week. Despite the attacks, defeat in Norway and France, and the threat of invasion, overall morale remained high; in May 1940 a Gallup poll found only 3% of Britons expected to lose the war, another found an 88% approval rating for Churchill in July, and a third found 89% support for his leadership in October. Support for peace negotiations declined from 29% in February. Each setback caused more civilians to volunteer as unpaid Local Defence Volunteers, workers worked longer shifts and over weekends, contributions to the £5,000 "Spitfire Funds" to build fighters rose, and the number of work days lost to strikes in 1940 was the lowest in history.Mackay 2002, pp. 60–63, 67–68, 75, 78–79, 215–216.
Civilian mobilisation
The civilians of London had an enormous role to play in the protection of their city. Many civilians who were unwilling or unable to join the military became members of the Home Guard, the Air Raid Precautions service (ARP), the Auxiliary Fire Service, and many other organisations. The AFS had 138,000 personnel by July 1939. Only one year earlier, there had only been 6,600 full-time and 13,800 part-time firemen in the entire country.Ray 1996, p. 51. During the Blitz, The Scout Association guided fire engines to where they were most needed, and became known as the "Blitz Scouts". Many unemployed were drafted into the Royal Army Pay Corps. These personnel, along with others from the Pioneer Corps, were charged with the task of salvage and clean-up.Hill 2002, p. 36.
The Women's Voluntary Services for Civil Defence (WVS) was set up in 1938 under the direction of the Home Secretary, Samuel Hoare, specifically in anticipation of air raids; Hoare considered it the female branch of the ARP.Summerfield and Peniston-Bird 2007, p. 84. The WVS organised the evacuation of children, established centres for those displaced by bombing, and operated canteens, salvage and recycling schemes. By the end of 1941, the WVS had one million members. Prior to the outbreak of war, civilians were issued with 50 million respirators (gas masks), in case gas bombing took place before an evacuation could be carried out.Ray 1996, p. 50.
Pre-war RAF strategy for night defence
In the inter-war years and after 1940, Hugh Dowding, Air Officer Commanding (AOC) Fighter Command, has received credit for the defence of British air space and the failure of the Luftwaffe to achieve air superiority. However, Dowding had spent so much effort preparing day fighter defences that there was little to prevent the Germans from carrying out an alternative strategy of bombing at night. When the Luftwaffe struck at British cities for the first time on 7 September 1940, a number of civic and political leaders were worried by Dowding's apparent lack of reaction to the new crisis.Ray 2009, p. 124.
Dowding accepted that as AOC, he was responsible for the day and night defence of Britain, and the blame, should he fail, would be laid at his door. When urgent changes and improvements needed to be made, Dowding seemed reluctant to act quickly. The Air Staff felt that this was due to his stubborn nature and reluctance to cooperate. Dowding's opponents in the Air Ministry, already critical of his handling of the day battle (see Battle of Britain Day and the Big Wing controversy), were ready to use these failings as a cudgel with which to attack him and his abilities.
Dowding was summoned to an Air Ministry conference on 17 October 1940 to explain the poor state of night defences and the supposed (but ultimately successful) "failure" of his daytime strategy. The criticism of his leadership extended far beyond the Air Council, and both the Minister of Aircraft Production, Lord Beaverbrook, and Churchill intimated that their support was waning. While the failure of night defence preparation was undeniable, it was not the AOC's responsibility to accrue resources. The general neglect of the RAF until the late spurt in 1938 had left sparse resources to build defences. While it was permissible to disagree with Dowding's operational and tactical deployment of forces, the failure of the Government and Air Ministry to allot resources was ultimately the responsibility of the civil and military institutions at large. In the pre-war period, the Chamberlain Government stated that night defence from air attack should not take up much of the national effort and, along with the Air Ministry, did not make it a priority.
The attitude of the Air Ministry was in contrast to the experience of the First World War, when a few German bombers caused physical and psychological damage out of all proportion to their numbers. Around 9,000 bombs had been dropped, killing 1,413 people and injuring 3,500 more. Most people aged 35 or over remembered the threat and greeted the bombings with great trepidation. From 1916 to 1918, German raids had diminished in the face of countermeasures, which demonstrated that defence against night air raids was possible.Ray 2009, p. 125.
Although night air defence was causing greater concern before the war, it was not at the forefront of RAF planning. Most of the resources went into planning for daylight fighter defences. The difficulty RAF bombers had navigating in darkness, led the British to believe German bombers would suffer the same problems and would be unable to reach and identify their targets. There was also a mentality in all air forces that, if they could carry out effective operations by day, night missions and their disadvantages could be avoided.Ray 2009, p. 126.
British air doctrine, since the time of Chief of the Air Staff Hugh Trenchard in the early 1920s, had stressed that offence was the best means of defence.Hyde 1976, pp. 138, 223–228. British defensive strategy revolved around offensive action, in what became known as the cult of the offensive. To prevent German formations from hitting targets in Britain, the RAF's Bomber Command would destroy Luftwaffe aircraft on their bases, aircraft in their factories, and fuel reserves by attacking oil plants. This philosophy was impractical, as Bomber Command lacked the technology and equipment and needed several years to develop it, and the strategy retarded the development of fighter defences in the 1930s. Dowding agreed that air defence would require some offensive action and that fighters could not defend Britain alone.Ray 2009, p. 127. Until September 1940, the RAF lacked specialist night-fighting aircraft and relied on anti-aircraft units, which were poorly equipped and lacking in numbers.Ray 1996, pp. 127–128.
Technological battle
German navigation techniques
thumb|Map of Knickebein transmitters
Because of the inaccuracy of celestial navigation for precise target location in a fast moving aircraft, the Luftwaffe developed radio navigation devices and relied on three major systems: Knickebein ("Crooked leg"), X-Gerät (X-Device), and Y-Gerät (Y-Device). This led the British to develop countermeasures, giving rise to the "Battle of the Beams".Ray 1996, p. 194.
Bomber crews already had some experience with these types of systems due to the deployment of the Lorenz beam, a commercial blind-landing aid which allowed aircraft to land at night or in bad weather. The Germans developed the short-range Lorenz system into the Knickebein aid, a system which used two Lorenz beams with much stronger signal transmissions. The concept was the same as the Lorenz system: two rotated aerials produced two converging beams, which were aimed to cross directly over the target. The German bombers would attach themselves to one beam and fly along it until they started to pick up the signal from the other beam. When a continuous sound was heard from the second beam, the crew knew they were above the target and began dropping their bombs.
While Knickebein was used by German crews en masse, X-Gerät use was limited to specially trained pathfinder crews. Special receivers were mounted in He 111s, with a radio mast on the bomber's fuselage. The system worked on a higher frequency (66–77 MHz, compared to Knickebein's 30–33 MHz). Transmitters on the ground sent pulses at a rate of 180 per minute. X-Gerät received and analysed the pulses, giving the pilot both visual and aural "on course" signals. Three cross-beams intersected the approach beam along the He 111's flight path. The first cross-beam acted as a warning for the bomb-aimer to start the bombing-clock, which he would activate only when the second cross-beam was reached. When the third cross-beam was reached, the bomb aimer activated a third trigger, which stopped the first hand of the equipment's clock, with the second hand continuing. When the second hand re-aligned with the first, the bombs were released. The clock's timing mechanism was co-ordinated with the distances of the intersecting beams from the target so the target was directly below when the bomb release occurred.Mackay 2003, p. 89.
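The release logic can be summarised as a simple distance-time relation. The following is an illustrative sketch only, assuming a constant ground speed v and treating the beam spacings as straight-line distances, rather than a description of the actual equipment's calibration. If d_1 is the distance between the second and third cross-beams and d_2 the distance from the third cross-beam to the target, the clock measures the interval t_1 between the two cross-beams and releases the bombs after the same interval has elapsed again beyond the third cross-beam:

\[
t_1 = \frac{d_1}{v}, \qquad \text{release after a further distance } v\,t_1 = d_1 .
\]

Laying out the beams so that d_2 = d_1 therefore placed the release point over the target regardless of the bomber's exact ground speed, since the same (unknown) speed enters both the measured and the replayed interval.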
Y-Gerät was the most complex system of the three. It was, in effect, an automatic beam-tracking system, operated through the bomber's autopilot. The single approach beam along which the bomber tracked was monitored by a ground controller. The signals from the station were retransmitted by the bomber's equipment. This way the distance the bomber travelled along the beam could be precisely verified. Direction-finding checks also enabled the controller to keep the crew on an exact course. The crew would be ordered to drop their bombs either by issue of a code word by the ground controller, or at the conclusion of the signal transmissions which would stop. Although its maximum usable range was similar to the previous systems, it was not unknown for specific buildings to be hit.
British counter measures
In June 1940, a German prisoner of war was overheard boasting that the British would never find the Knickebein, even though it was under their noses. The details of the conversation were passed to an RAF Air Staff technical advisor, Dr. R. V. Jones, who started an in-depth investigation which discovered that the Luftwaffe's Lorenz receivers were more than blind-landing devices. Jones therefore began a search for the German beams. Avro Ansons of the Beam Approach Training Development Unit (BATDU) were flown up and down Britain fitted with a 30 MHz receiver to detect them. Soon a beam was traced to Derby (which had been mentioned in Luftwaffe transmissions). The first jamming operations were carried out using requisitioned hospital electrocautery machines. A subtle form of distortion was introduced: up to nine special transmitters directed their signals at the beams in a manner that widened the beam's path, negating its ability to locate targets accurately. German confidence in the device had diminished by the time the Luftwaffe decided to launch large-scale raids.Mackay 2003, pp. 88–89. The counter-operations were carried out by British Electronic Counter Measures (ECM) units under Wing Commander Edward Addison, No. 80 Wing RAF. The production of false radio navigation signals by re-transmitting the originals was a technique known as masking beacons (meacons).
German beacons operated on the medium-frequency band, and their signals involved a two-letter Morse identifier followed by a lengthy time-lapse which enabled Luftwaffe crews to determine the signal's bearing. The meacon system involved separate sites for a receiver with a directional aerial and a transmitter: the German signal picked up by the receiver was passed to the transmitter and re-broadcast. The action did not guarantee automatic success. If the German bomber flew closer to its own beacon than to the meacon, the genuine signal would come through the stronger on the direction finder; the reverse applied only if the meacon were closer.Mackay 2003, p. 91.
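The condition for the deception to succeed follows from elementary propagation geometry. As an illustrative sketch only, assuming free-space inverse-square spreading and leaving the actual transmitter powers unspecified, the power received from a transmitter of power P at range d scales as

\[
P_{\mathrm{rx}} \propto \frac{P}{d^{2}}, \qquad
\frac{P_{\mathrm{rx,\,meacon}}}{P_{\mathrm{rx,\,beacon}}} \approx \frac{P_{m}}{P_{b}} \left( \frac{d_{b}}{d_{m}} \right)^{2},
\]

so the false bearing dominated the bomber's direction finder only when the aircraft was nearer to the meacon than to the genuine beacon, or when the meacon radiated proportionally more power.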
In general, German bombers were likely to get through to their targets without too much difficulty. It was to be some months before an effective night fighter force would be ready, and anti-aircraft defences only became adequate after the Blitz was over, so ruses were created to lure German bombers away from their targets. Throughout 1940, dummy airfields were prepared, good enough to stand up to skilled observation. An unknown number of bombs fell on these diversionary ("Starfish") targets.
For industrial areas, fires and lighting were simulated. It was decided to recreate normal residential street lighting, and in non-essential areas, lighting to recreate heavy industrial targets. In those sites, carbon arc lamps were used to simulate the flash of tram cables. Red lamps were used to simulate blast furnaces and locomotive fireboxes. Reflections made by factory skylights were created by placing lights under angled wooden panels.
The use of diversionary techniques such as fires had to be made carefully. The fake fires could only begin when the bombing started over an adjacent target and its effects were brought under control. Too early and the chances of success receded; too late and the real conflagration at the target would exceed the diversionary fires. Another innovation was the boiler fire. These units were fed from two adjacent tanks containing oil and water. The oil-fed fires were then injected with water from time to time; the flashes produced were similar to those of the German C-250 and C-500 Flammbomben. The hope was that, if it could deceive German bombardiers, it would draw more bombers away from the real target.
First phase
Loge and Seeschlange
thumb|left|Smoke rising from fires in Surrey docks, following bombing on 7 September
The first deliberate air raids on London were mainly aimed at the Port of London, causing severe damage. Late in the afternoon of 7 September 1940, the Germans began Operation Loge (the codename for London) and Seeschlange (Sea Snake), the air offensives against London and other industrial cities. Loge continued for 57 nights.Bungay 2000, p. 313. A total of 348 bombers and 617 fighters took part in the attack.Bungay 2000, p. 309.Shores 1985, p. 52.
Initially the change in strategy caught the RAF off-guard and caused extensive damage and civilian casualties. Shipping in the Thames Estuary was damaged and 1,600 civilians were casualties.Hooton 1997, p. 26. Of this total around 400 were killed.Stansky 2007, p. 95. The fighting in the air was more intense in daylight. Overall, Loge had cost the Luftwaffe 41 aircraft: 14 bombers, 16 Messerschmitt Bf 109s, seven Messerschmitt Bf 110s and four reconnaissance aircraft.Bungay 2000, p. 310. Fighter Command lost 23 fighters, with six pilots killed and another seven wounded.Bungay 2000, p. 311. Another 247 bombers from Sperrle's Luftflotte 3 (Air Fleet 3) attacked that night.Collier 1980, p. 178. On 8 September, the Luftwaffe returned; this time 412 people were killed and 747 severely wounded.
thumb|Heinkel He 111 bomber over the Surrey docks and Wapping in the East End of London on 7 September 1940
On 9 September the OKL appeared to be backing two strategies. Its round-the-clock bombing of London was an immediate attempt to force the British government to capitulate, but it was also striking at Britain's vital sea communications to achieve a victory through siege. Although the weather was poor, heavy raids took place that afternoon on the London suburbs and the airfield at Farnborough. The day's fighting cost Kesselring and Luftflotte 2 (Air Fleet 2) 24 aircraft, including 13 Bf 109s. Fighter Command lost 17 fighters and six pilots. Over the next few days weather was poor and the next main effort would not be made until 15 September 1940.
On 15 September the Luftwaffe made two large daylight attacks on London along the Thames Estuary, targeting the docks and rail communications in the city. Its hope was to destroy the targets and draw the RAF into defending them, allowing the Luftwaffe to destroy British fighters in large numbers and thereby achieve air superiority. Large air battles broke out, lasting for most of the day. The first attack merely damaged the rail network for three days,Goss 2000, p. 154. and the second attack failed altogether.Price 1990, pp. 93–104. The air battle was later commemorated by Battle of Britain Day. The Luftwaffe lost 18 percent of the bombers sent on the operations that day, and failed to gain air superiority.Hooton 2010, p. 80.
While Göring was optimistic the Luftwaffe could prevail, Hitler was not. On 17 September he postponed Operation Sea Lion (as it turned out, indefinitely) rather than gamble Germany's newly gained military prestige on a risky cross-Channel operation, particularly in the face of a sceptical Joseph Stalin in the Soviet Union. In the last days of the battle, the bombers became lures in an attempt to draw the RAF into combat with German fighters. But their operations were to no avail; the worsening weather and unsustainable attrition in daylight gave the OKL an excuse to switch to night attacks on 7 October.Shores 1985, p. 55.McKee 1989, p. 286.
thumb|Bomb damage to a street in Birmingham after an air raid.
On 14 October, the heaviest night attack to date saw 380 German bombers from Luftflotte 3 hit London. Around 200 people were killed and another 2,000 injured. British anti-aircraft defences, under General Frederick Alfred Pile, fired 8,326 rounds and shot down only two bombers. On 15 October, the bombers returned and about 900 fires were started by the mix of high explosive and incendiaries dropped. Five main rail lines were cut in London and rolling stock was damaged.Ray 1996, p. 131.
Loge continued during October. According to German sources, a heavy tonnage of bombs was dropped that month, about 10 percent of it in daylight, with the bulk aimed at London during the night. Birmingham and Coventry were subjected to further heavy attacks in the last 10 days of October, and Liverpool also suffered considerable bombing. Hull and Glasgow were attacked, and smaller loads were spread out all over Britain. The Metropolitan-Vickers works in Manchester was also targeted. Little tonnage was dropped on Fighter Command airfields; Bomber Command airfields were hit instead.James and Cox 2000, p. 307.
thumb|left|Firefighters tackling a blaze amongst ruined buildings after an air raid on London
Luftwaffe policy at this point was primarily to continue progressive attacks on London, chiefly by night attack; second, to interfere with production in the vast industrial arms factories of the West Midlands, again chiefly by night attack; and third to disrupt plants and factories during the day by means of fighter-bombers.James and Cox 2000, p. 308.
Kesselring, commanding Luftflotte 2, was ordered to send 50 sorties per night against London and attack eastern harbours in daylight. Sperrle, commanding Luftflotte 3, was ordered to dispatch 250 sorties per night including 100 against the West Midlands. Seeschlange would be carried out by Fliegerkorps X (10th Air Corps) which concentrated on mining operations against shipping. It also took part in the bombing over Britain. By 19/20 April 1941, it had dropped 3,984 mines, ⅓ of the total dropped. The mines' ability to destroy entire streets earned them respect in Britain, but several fell unexploded into British hands allowing counter-measures to be developed which damaged the German anti-shipping campaign.
By mid-November 1940, when the Germans adopted a changed plan, thousands of tons of high explosive and nearly 1,000,000 incendiaries had fallen on London. Outside the capital, there had been widespread harassing activity by single aircraft, as well as fairly strong diversionary attacks on Birmingham, Coventry and Liverpool, but no major raids. The London docks and railway communications had taken a heavy pounding, and much damage had been done to the railway system outside the capital. In September, there had been no fewer than 667 hits on railways in Great Britain, and at one period between 5,000 and 6,000 wagons were standing idle from the effect of delayed-action bombs. But the great bulk of the traffic went on; and Londoners—though they glanced apprehensively each morning at the list of closed stretches of line displayed at their local station, or made strange detours round back streets in the buses—still got to work. For all the destruction of life and property, the observers sent out by the Ministry of Home Security failed to discover the slightest sign of a break in morale. More than 13,000 civilians had been killed, and almost 20,000 injured, in September and October alone,Richards 1954, p. 206. but the death toll was much less than expected. In late 1940, Churchill credited the shelters.
Wartime observers perceived the bombing as indiscriminate. The American observer Ralph Ingersoll reported that the bombing was inaccurate and did not hit targets of military value, but destroyed the surrounding areas. Ingersoll wrote that Battersea Power Station, one of the largest landmarks in London, received only a minor hit. In fact, on 8 September 1940 both Battersea and West Ham power stations were shut down after the 7 September daylight attack on London.Ray 2004, p. 125. In the case of Battersea Power Station, an unused extension was hit and destroyed during November, but the station was not put out of action during the night attacks.Ramsay 1988, p. 280. It is not clear whether the power station or any specific structure was targeted during the German offensive, as the Luftwaffe could not accurately bomb select targets during night operations.Sansom 1990, p. 28. In the initial operations against London, it did appear as if rail targets and the bridges over the Thames had been singled out: Victoria Station was hit by four bombs and suffered extensive damage. The bombing disrupted rail traffic through London without destroying any of the crossings.Sansom 1990, p. 162. On 7 November, St Pancras, Kensal and Bricklayers' Arms stations were hit, and several lines of Southern Rail were cut on 10 November. The British government grew anxious about the delays and disruption of supplies during the month. Reports suggested the attacks blocked the movement of coal to the Greater London regions, and urgent repairs were required.Ray 2004, p. 150. Attacks against East End docks were effective, and many Thames barges were destroyed. The London Underground rail system was also affected; high-explosive bombs damaged the tunnels, rendering some unsafe.Sansom 1990, pp. 28, 81. The London Docklands, in particular the Royal Victoria Dock, received many hits, and the Port of London's trade was disrupted. In some cases, the concentration of the bombing and resulting conflagration created firestorms of 1,000°C.Ray 2004, p. 177. The Ministry of Home Security reported that although the damage caused was "serious" it was not "crippling", and the quays, basins, railways and equipment remained sufficiently functional to ensure their continued use.Cooper 1981, p. 166.
Improvements in British defences
thumb|An anti-aircraft searchlight and crew at the Royal Hospital Chelsea, 17 April 1940.
thumb|3.7-inch anti-aircraft guns in Hyde Park London
British night air defences were in a poor state.Shores 1985, p. 56. Few anti-aircraft guns had fire-control systems, and the underpowered searchlights were usually ineffective against aircraft at high altitude.Hooton 1997, p. 33.Richards 1954, p. 201. In July 1940, only 1,200 heavy and 549 light guns were deployed in the whole of Britain. Of the "heavies", some 200 were of an obsolescent type; the remainder were effective modern guns, but with a practical engagement ceiling well below their theoretical one because the predictor in use could not accept greater heights. The light guns, about half of which were of the excellent Bofors 40 mm, could deal with aircraft only at lower altitudes.Richards 1954, p. 202. Although the use of the guns improved civilian morale, with the knowledge that German bomber crews were facing the barrage, it is now believed that the anti-aircraft guns achieved little; in fact, the falling shell fragments caused more British casualties on the ground.Gaskin 2006, pp. 186–187.
Few fighter aircraft were able to operate at night. Ground-based radar was limited, and airborne radar and RAF night fighters were generally ineffective.Price 1990, p. 20. RAF day fighters were converting to night operations, and the interim Bristol Blenheim night-fighter conversion of the light bomber was being replaced by the powerful Beaufighter, which was available only in very small numbers. By the second month of the Blitz the defences were not performing well.Dobinson 2001, p. 252.
thumb|left|Boulton Paul Defiant night fighter N1671
London's defences were rapidly reorganised by General Pile, the Commander-in-Chief of Anti-Aircraft Command. The difference this made to the effectiveness of air defences is questionable: the British were still one-third below the establishment of heavy anti-aircraft artillery (AAA, or "ack-ack") in May 1941, with only 2,631 weapons available. Dowding had to rely on night fighters. From 1940 to 1941, the most successful night fighter was the Boulton Paul Defiant; its four squadrons shot down more enemy aircraft than any other type.Taylor 1969, p. 326. AA defences improved with better use of radar and searchlights. Over several months, the 20,000 shells expended per raider shot down in September 1940 fell to 4,087 in January 1941 and to 2,963 in February 1941.Ray 1996, p. 193.
Airborne Interception radar (AI) was unreliable. The heavy fighting in the Battle of Britain had eaten up most of Fighter Command's resources, so there was little investment in night fighting. Bombers were flown with airborne searchlights out of desperation, but to little avail. Of greater potential was GL (Gunlaying) radar combined with searchlights and fighter direction from RAF fighter control rooms, the beginnings of a Ground-Controlled Interception (GCI) system under Group-level control (No. 10 Group RAF, No. 11 Group RAF and No. 12 Group RAF).Hooton 1997, p. 32.
Whitehall's disquiet at the failures of the RAF led to the replacement of Dowding (who was already due for retirement) with Sholto Douglas on 25 November. Douglas set about introducing more squadrons and dispersing the few GL sets to create a carpet effect in the southern counties. Still, in February 1941, there remained only seven squadrons with 87 pilots, under half the required strength. The GL carpet was supported by six GCI sets controlling radar-equipped night fighters. By the height of the Blitz, they were becoming more successful. The number of contacts and combats rose in 1941, from 44 and two in 48 sorties in January 1941, to 204 and 74 in May (643 sorties). But even in May, 67% of the sorties were visual cat's-eye missions. Curiously, while 43% of the contacts in May 1941 were by visual sightings, they accounted for 61% of the combats. Yet when compared with Luftwaffe daylight operations, German losses at night were sharply lower, at around 1%. If a vigilant bomber crew could spot the fighter first, they had a decent chance of evading it.
Nevertheless, it was radar that proved to be the critical weapon in the night battles over Britain from this point onward. Dowding had introduced the concept of airborne radar and encouraged its use, and eventually it became a success. On the night of 22/23 July 1940, Flying Officer Cyril Ashfield (pilot), Pilot Officer Geoffrey Morris (Observer) and Flight Sergeant Reginald Leyland (Air Intercept radar operator) of the Fighter Interception Unit became the first pilot and crew to intercept and destroy an enemy aircraft using onboard radar to guide them to a visual interception, when their AI night fighter brought down a Do 17 off Sussex.White 2007, pp. 50–51. On 19 November 1940 the famous RAF night fighter ace John Cunningham shot down a Ju 88 bomber using airborne radar, just as Dowding had predicted.Holland 2007, pp. 602–603.
By mid-November, nine squadrons were available, but only one was equipped with Beaufighters (No. 219 Squadron RAF at RAF Kenley). By 16 February 1941, this had grown to 12; with five equipped, or partially equipped with Beaufighters spread over five Groups.Ray 1996, p. 189.
Second phase
Night attacks
thumb|right|Coventry city centre following 14/15 November 1940 raid
From November 1940 to February 1941, the Luftwaffe shifted its strategy and attacked other industrial cities.Cooper 1981, p. 170. In particular, the West Midlands were targeted. On the night of 13/14 November, 77 He 111s of Kampfgeschwader 26 (26th Bomber Wing, or KG 26) bombed London while 63 from KG 55 hit Birmingham. The next night, a large force hit Coventry. "Pathfinders" from Kampfgruppe 100 (Bomb Group 100, or KGr 100) led 437 bombers from KG 1, KG 3, KG 26, KG 27, KG 55 and Lehrgeschwader 1 (1st Training Wing, or LG 1), which dropped high explosive, incendiaries and 127 parachute mines.Shores 1985, p. 57. Other sources say 449 bombers took part and give a different total weight of bombs dropped.Hooton 1997, p. 35. The raid against Coventry was particularly devastating, and led to widespread use of the phrase "to coventrate". Over 10,000 incendiaries were dropped.Gaskin 2005, p. 156. Around 21 factories were seriously damaged in Coventry, and loss of public utilities stopped work at nine others, disrupting industrial output for several months. Only one bomber was lost, to anti-aircraft fire, despite the RAF flying 125 night sorties. No follow-up raids were made, as OKL underestimated the British power of recovery (as Bomber Command would do over Germany from 1943 to 1945). The Germans were surprised by the success of the attack. The concentration had been achieved by accident.Price 1977, pp. 43–45. The strategic effect of the raid was a brief 20 percent dip in aircraft production.
Five nights later, Birmingham was hit by 369 bombers from KG 54, KG 26, and KG 55. By the end of November, 1,100 bombers were available for night raids. An average of 200 were able to strike per night. This weight of attack went on for two months, with the Luftwaffe dropping a heavy tonnage of bombs. In November 1940, 6,000 sorties and 23 major attacks (more than 100 tons of bombs dropped) were flown, along with two heavy attacks. In December, only 11 major and five heavy attacks were made.Hooton 2010, p. 87.
thumb|left|View from St. Paul's Cathedral after the Blitz
Probably the most devastating strike occurred on the evening of 29 December, when German aircraft attacked the City of London itself with incendiary and high explosive bombs, causing a firestorm that has been called the Second Great Fire of London.Hooton 1997, p. 36. The first group to use these incendiaries was Kampfgruppe 100, which despatched 10 "pathfinder" He 111s. At 18:17, it released the first of 10,000 fire bombs, eventually amounting to 300 dropped per minute.Gaskin 2005, p. 193. Altogether, 130 German bombers destroyed the historic centre of London.Mackay 2003, p. 94. Civilian casualties in London throughout the Blitz amounted to 28,556 killed and 25,578 wounded, under a weight of thousands of tons of bombs.Stansky 2007, p. 180.
Not all of the Luftwaffe's effort was made against inland cities. Port cities were also attacked in an attempt to disrupt trade and sea communications. In January, Swansea was bombed four times, very heavily. On 17 January around 100 bombers dropped a high concentration of incendiaries, some 32,000 in all. The main damage was inflicted on commercial and domestic areas. Four days later, 230 tons of bombs were dropped, including 60,000 incendiaries. In Portsmouth, Southsea and Gosport, waves of 150 bombers destroyed vast swathes of the city with 40,000 incendiaries. Warehouses, rail lines and houses were destroyed or damaged, but the docks were largely untouched.Ray 1996, p. 185.
In January and February 1941, Luftwaffe serviceability rates declined, until just 551 of 1,214 bombers were combat worthy. Seven major and eight heavy attacks were flown, but the weather made it difficult to keep up the pressure. Still, at Southampton, attacks were so effective that morale briefly gave way, with civilian authorities leading people out of the city en masse.
Strategic or "terror" bombing
thumb|Children in the East End of London, made homeless by the Blitz
Although official German air doctrine did target civilian morale, it did not espouse the attacking of civilians directly. It hoped to destroy morale by destroying the enemy's factories and public utilities as well as its food stocks (by attacking shipping). Nevertheless, its official opposition to attacks on civilians became an increasingly moot point when large-scale raids were conducted in November and December 1940. Although not encouraged by official policy, the use of mines and incendiaries, for tactical expediency, came close to indiscriminate bombing. Locating targets in skies obscured by industrial haze meant the target area needed to be illuminated and hit "without regard for the civilian population".Hooton 1997, p. 34.
Special units, such as KGr 100, became the Beleuchtergruppe (Firelighter Group), which used incendiaries and high explosive to mark the target area. The tactic was expanded into Feuerleitung (Blaze Control) with the creation of Brandbombenfelder (Incendiary Fields) to mark targets. These were marked out by parachute flares. Then bombers carrying SC 1000, SC 1400, and SC 1800 "Satan" bombs were used to level streets and residential areas. By December, the SC 2500 "Max" bomb was used.
These decisions, apparently taken at the Luftflotte or Fliegerkorps level (see Organisation of the Luftwaffe (1933–1945)), meant attacks on individual targets were gradually replaced by what was, for all intents and purposes, an unrestricted area attack or Terrorangriff (Terror Attack).Hooton 2010, p. 85. Part of the reason for this was inaccuracy of navigation. The effectiveness of British countermeasures against Knickebein, which was designed to avoid area attacks, forced the Luftwaffe to resort to these methods. The shift from precision bombing to area attack is indicated in the tactical methods and weapons dropped. KGr 100 increased its use of incendiaries from 13 to 28 percent; by December, this had risen to 92 percent. Use of incendiaries, which were inherently inaccurate, indicated much less care was taken to avoid civilian property close to industrial sites. Other units ceased using parachute flares and opted for explosive target markers. Captured German air crews also indicated the homes of industrial workers were deliberately targeted.
Final attacks
Directive 23: Göring and the Kriegsmarine
In 1941, the Luftwaffe shifted strategy again. Erich Raeder—commander-in-chief of the Kriegsmarine—had long argued the Luftwaffe should support the German submarine force (U-Bootwaffe) in the Battle of the Atlantic by attacking shipping in the Atlantic Ocean and attacking British ports.Raeder 2001, p. 322. Eventually, he convinced Hitler of the need to attack British port facilities.Overy 1980, p. 36. At Raeder's prompting, Hitler correctly noted that the greatest damage to the British war economy had been done through the destruction of merchant shipping by submarines and air attacks by small numbers of Focke-Wulf Fw 200 naval aircraft, and ordered the German air arm to focus its efforts against British convoys. This meant that British coastal centres and shipping at sea west of Ireland were the prime targets.Isby 2005, p. 110.
Hitler's interest in this strategy forced Göring and Jeschonnek to review the air war against Britain in January 1941. This led to Göring and Jeschonnek agreeing to Hitler's Directive 23, Directions for operations against the British War Economy, which was published on 6 February 1941 and gave aerial interdiction of British imports by sea top priority.Hooton 2010, p. 88. This strategy had been recognised before the war, but Operation Eagle Attack and the following Battle of Britain had got in the way of striking at Britain's sea communications and diverted German air strength to the campaign against the RAF and its supporting structures.Ray 1996, p. 195. The OKL had always regarded the interdiction of sea communications of less importance than bombing land-based aircraft industries.Isby 2005, p. 109.
Directive 23 was the only concession made by Göring to the Kriegsmarine over the strategic bombing strategy of the Luftwaffe against Britain. Thereafter, he would refuse to make available any air units to destroy British dockyards, ports, port facilities, or shipping in dock or at sea, lest the Kriegsmarine gain control of more Luftwaffe units.Overy 1980, p. 37. Raeder's successor, Karl Dönitz, would, on the intervention of Hitler, gain control of one unit (KG 40), but Göring soon regained it. Göring's lack of cooperation was detrimental to the one air strategy with a potentially decisive strategic effect on Britain. He wasted the aircraft of Fliegerführer Atlantik (Flying Command Atlantic) on bombing mainland Britain rather than on attacks against convoys.Murray 1983, p. 136. Göring's prestige had been damaged by the defeat in the Battle of Britain, and he wanted to regain it by subduing Britain by air power alone. He was always reluctant to cooperate with Raeder.Murray 1983, p. 135.
Even so, the decision by OKL to support the strategy in Directive 23 was instigated by two considerations, both of which had little to do with wanting to destroy Britain's sea communications in conjunction with the Kriegsmarine. First, the difficulty in estimating the impact of bombing upon war production was becoming apparent, and second, the conclusion that British morale was unlikely to break led OKL to adopt the naval option. The indifference displayed by OKL to Directive 23 was perhaps best demonstrated in operational directives which diluted its effect. They emphasised that the core strategic interest was attacking ports, but insisted on maintaining pressure, or diverting strength, onto industries building aircraft, anti-aircraft guns, and explosives. Other targets would be considered if the primary ones could not be attacked because of weather conditions.
A further line in the directive stressed the need to inflict the heaviest losses possible, but also to intensify the air war in order to create the impression that an amphibious assault on Britain was planned for 1941. However, meteorological conditions over Britain were not favourable for flying and prevented an escalation in air operations. Airfields became waterlogged and the 18 Kampfgruppen (bomber groups) of the Luftwaffe's Kampfgeschwader (bomber wings) were relocated to Germany for rest and re-equipment.
British ports
From the German point of view, March 1941 saw an improvement. The Luftwaffe flew 4,000 sorties that month, including 12 major and three heavy attacks. The electronic war intensified but the Luftwaffe flew major inland missions only on moonlit nights. Ports were easier to find and made better targets. To confuse the British, radio silence was observed until the bombs fell. X- and Y-Gerät beams were placed over false targets and switched only at the last minute. Rapid frequency changes were introduced for X-Gerät, whose wider band of frequencies and greater tactical flexibility ensured it remained effective at a time when British selective jamming was degrading the effectiveness of Y-Gerät.
By now, the imminent threat of invasion had all but passed as the Luftwaffe had failed to gain the prerequisite air superiority. The aerial bombing was now principally aimed at the destruction of industrial targets, but also continued with the objective of breaking the morale of the civilian population.
The attacks were focused against western ports in March. These attacks produced some breaks in morale, with civil leaders fleeing the cities before the offensive reached its height. But the Luftwaffe's effort eased in the last 10 attacks as seven Kampfgruppen moved to Austria in preparation for the Balkans Campaign in Yugoslavia and Greece. The shortage of bombers caused the OKL to improvise. Some 50 Junkers Ju 87 Stuka dive-bombers and Jabos (fighter-bombers) were used, officially classed as Leichte Kampfflugzeuge ("light bombers") and sometimes called Leichte Kesselringe ("Light Kesselrings"). The defences failed to prevent widespread damage but on some occasions did prevent German bombers concentrating on their targets. On occasion, only one-third of German bombs hit their targets.Hooton 2010, pp. 88–89.
thumb|upright=1.5|Liverpool city centre after heavy bombing.
The diversion of heavier bombers to the Balkans meant that the crews and units left behind were asked to fly two or three sorties per night. Bombers were noisy, cold, and vibrated badly. Added to the tension of the missions, which exhausted and drained the crews, tiredness caught up with many and killed them. In one incident on 28/29 April, Peter Stahl of KG 30 was flying his 50th mission. He fell asleep at the controls of his Ju 88 and woke up to discover the entire crew asleep. He roused them, ensured they took oxygen and Dextro-Energen tablets, then completed the mission.Hooton 1997, p. 37.
Regardless, the Luftwaffe could still inflict huge damage. With the German occupation of Western Europe, the British feared an intensification of submarine and air attack on Britain's sea communications, which would have had serious consequences for the future course of the war should the Germans succeed. Liverpool and its port became an important destination for convoys heading through the Western Approaches from North America, bringing supplies and materials, and a considerable rail network distributed them to the rest of the country.Ray 1996, p. 205. Operations against Liverpool were successful: air attacks sank a substantial tonnage of shipping, with more damaged. Minister of Home Security Herbert Morrison was also worried morale was breaking, noting the defeatism expressed by civilians. Other sources point to half of the port's 144 berths being rendered unusable, while cargo-unloading capability was reduced by 75%. Roads and railways were blocked and ships could not leave harbour. On 8 May 1941, 57 ships were destroyed, sunk or damaged. Around 66,000 houses were destroyed, 77,000 people made homeless, and 1,900 people killed and 1,450 seriously hurt on one night.Ray 1996, p. 207. Operations against London up until May 1941 could also have a severe impact on morale. The populace of the port of Hull became 'trekkers', people who underwent a mass exodus from cities before, during, and after attacks. However, the attacks failed to knock out or damage railways or port facilities for long, even in the Port of London, a target of many attacks. The Port of London in particular was an important target, bringing in one-third of overseas trade.Ray 1996, p. 16.
On 13 March, the upper Clyde port of Clydebank near Glasgow was bombed. All but seven of its 12,000 houses were damaged. Many more ports were attacked. Plymouth was attacked five times before the end of the month, while Belfast, Hull, and Cardiff were hit. Cardiff was bombed on three nights, and Portsmouth centre was devastated by five raids. The rate of civilian housing loss averaged 40,000 people dehoused per week in September 1940. In March 1941, two raids on Plymouth and London dehoused 148,000 people.Calder 2003, p. 37. Still, while heavily damaged, British ports continued to support war industry, and supplies from North America continued to pass through them while the Royal Navy continued to operate in Plymouth, Southampton, and Portsmouth.Calder 2003, p. 119. Plymouth in particular, because of its vulnerable position on the south coast and close proximity to German air bases, was subjected to the heaviest attacks. On 10/11 March, 240 bombers dropped 193 tons of high explosives and 46,000 incendiaries. Many houses and commercial centres were heavily damaged, the electrical supply was knocked out, and five oil tanks and two magazines exploded. Nine days later, two waves of 125 and 170 bombers dropped heavy bombs, including 160 tons of high explosive and 32,000 incendiaries. Much of the city centre was destroyed. Damage was inflicted on the port installations, but many bombs fell on the city itself. On 17 April, 346 tons of explosives and 46,000 incendiaries were dropped from 250 bombers led by KG 26. The damage was considerable, and the Germans also used aerial mines. Over 2,000 AAA shells were fired, destroying two Ju 88s.Ray 1996, pp. 215, 217. By the end of the air campaign over Britain, only eight percent of the German effort against British ports was made using mines.Neitzel 2003, p. 453.
In the north, substantial efforts were made against Newcastle-upon-Tyne and Sunderland, which were large ports on the English east coast. On 9 April 1941 Luftflotte 2 dropped 150 tons of high explosives and 50,000 incendiaries from 120 bombers in a five-hour attack. Sewer, rail, docklands, and electric installations were damaged. In Sunderland on 25 April, Luftflotte 2 sent 60 bombers which dropped 80 tons of high explosive and 9,000 incendiaries. Much damage was done. A further attack on the Clyde, this time at Greenock, took place on 6 and 7 May. However, as with the attacks in the south, the Germans failed to prevent maritime movements or cripple industry in the regions.Ray 1996, p. 225.
thumb|Firefighters at work amongst burning buildings, during the large raid of 10/11 May
The last major attack on London was on 10/11 May 1941, during which the Luftwaffe flew 571 sorties and dropped 800 tonnes of bombs. This caused more than 2,000 fires; 1,436 people were killed and 1,792 seriously injured, which affected morale badly. Another raid was carried out on 11/12 May 1941. Westminster Abbey and the Law Courts were damaged, while the Chamber of the House of Commons was destroyed. One-third of London's streets were impassable, and all but one of the railway station lines were blocked for several weeks. This raid was significant, as 63 German fighters were sent with the bombers, indicating the growing effectiveness of RAF night-fighter defences.
RAF night fighters
German air supremacy at night was also now under threat. British night-fighter operations out over the Channel were proving successful.Faber 1977, p. 205. This was not immediately apparent.Mackay 2003, p. 88. The Bristol Blenheim F.1 was undergunned, with just four machine guns which struggled to down the Do 17, Ju 88, or Heinkel He 111.Mackay 2003, pp. 86–87. Moreover, the Blenheim struggled to reach the speed of the German bombers. Since interception also relied on visual sighting, a kill was elusive even under a moonlit sky.
The Boulton Paul Defiant, despite its poor performance during daylight engagements, was a much better night fighter. It was faster, able to catch the bombers, and its configuration of four machine guns in a turret could (much like German night fighters in 1943–1945 with Schräge Musik) engage the unsuspecting German bomber from beneath. Attacks from below offered a larger target than attacking tail-on, a better chance of not being seen by the bomber (and so less chance of evasion), and a greater likelihood of detonating its bomb load. In subsequent months a steady number of German bombers fell to night fighters.Mackay 2003, p. 87.
Improved aircraft designs were in the offing with the Bristol Beaufighter, then under development. It would prove formidable, but its development was slow. The Beaufighter had a high maximum speed, operational ceiling and rate of climb, and its armament of four Hispano cannon and six .303 in Browning machine guns offered a serious threat to German bombers.Mackay 2003, p. 93. On 19 November, John Cunningham of No. 604 Squadron RAF, flying an AI-equipped Beaufighter, shot down a bomber; it was the first air victory for airborne radar.
In November and December 1940, the Luftwaffe flew 9,000 sorties against British targets and RAF night fighters claimed only six shot down. In January 1941, Fighter Command flew 486 sorties against 1,965 made by the Germans. Just three and 12 were claimed by the RAF and AAA defences respectively.Ray 1996, p. 190. In the bad weather of February 1941, Fighter Command flew 568 sorties to counter the Luftwaffe which flew 1,644 individual sorties. Night fighters could claim only four bombers, and lost four themselves.Ray 1996, p. 191.
By April and May 1941, the Luftwaffe was still getting through to their targets, taking no more than one- to two-percent losses on any given mission.Mackay 2003, p. 98. On 19/20 April 1941, in honour of Hitler's 52nd birthday, 712 bombers hit Plymouth with a record 1,000 tons of bombs. Losses were minimal. In the following month, 22 German bombers were lost with 13 confirmed to have been shot down by night fighters. On 3/4 May, nine were shot down in one night. On 10/11 May, London suffered severe damage, but 10 German bombers were downed. In May 1941, RAF night fighters shot down 38 German bombers.Ray 1996, p. 208.
By the end of May, Kesselring's Luftflotte 2 had been withdrawn, leaving Hugo Sperrle's Luftflotte 3 as a token force to maintain the illusion of strategic bombing. Hitler now had his sights set on Operation Barbarossa, the invasion of the Soviet Union in June. The Blitz came to an end.
Aftermath
Between 20 June 1940, when the first German air operations began over Britain, and 31 March 1941, the OKL recorded the loss of 2,265 aircraft over the British Isles, a quarter of them fighters and one third bombers. At least 3,363 Luftwaffe airmen were killed, 2,641 missing and 2,117 wounded. Total losses could have been as high as 600 bombers, just 1.5% of the sorties flown. A significant number of aircraft were wrecked during landings, or downed by bad weather.Dear and Foot 2005, p. 109.
Effectiveness of bombing
The military effectiveness of the bombing varied. The bombs dropped during the Blitz disrupted production and transport, reduced food supplies and shook British morale, and the campaign helped to support the U-boat blockade by sinking and damaging a considerable tonnage of shipping. Yet overall, British production rose steadily throughout this period, although there were significant falls during April 1941, probably influenced by the departure of workers for the Easter holidays, according to the British official history. The official history of war production noted that the greatest impact was upon the supply of components rather than complete equipment.Hooton 2010, p. 89.
In aircraft production, the British were denied the opportunity to reach the planned target of 2,500 aircraft in a month, arguably the greatest achievement of the bombing, as it forced the dispersal of the industry.Hooton 1997, p. 38. In April 1941, when the targets were British ports, rifle production fell by 25%, filled-shell production by 4.6%, and small-arms production by 4.5% overall. The strategic impact on industrial cities was varied; most took 10 to 15 days to recover from heavy raids, although Belfast and Liverpool took longer. War industries in Birmingham took some three months to recover fully from the attacks, and the exhausted population took three weeks to overcome the effects of a raid.
The air offensive against the RAF and British industry failed to have the desired effect. More might have been achieved had the OKL exploited their enemy's weak spot, the vulnerability of British sea communications. The Allies did so later when Bomber Command attacked rail communications and the United States Army Air Forces targeted oil, but that would have required an economic-industrial analysis of which the Luftwaffe was incapable. The OKL instead sought clusters of targets that suited the latest policy (which changed frequently), and disputes within the leadership were about tactics rather than strategy.Hooton 2010, p. 90. Though militarily ineffective, the Blitz caused enormous damage to Britain's infrastructure and housing stock. It cost around 41,000 lives, and may have injured another 139,000.
RAF evaluation
The relieved British began to assess the impact of the Blitz in August 1941, and the RAF Air Staff used the German experience to improve Bomber Command's offensives. They concluded bombers should strike a single target each night and use more incendiaries because they had a greater impact on production than high explosives. They also noted regional production was severely disrupted when city centres were devastated through the loss of administrative offices, utilities and transport. They believed the Luftwaffe had failed in precision attack, and concluded the German example of area attack using incendiaries was the way forward for operations over Germany.
Some writers claim the Air Staff ignored a critical lesson, however: British morale did not break. Targeting German morale, as Bomber Command would do, was not sufficient to induce a collapse. Aviation strategists dispute that morale was ever a major consideration for Bomber Command. Throughout 1933–39 none of the 16 Western Air Plans drafted mentioned morale as a target. The first three directives in 1940 did not mention civilian populations or morale in any way. Morale was not mentioned until the ninth wartime directive on 21 September 1940.Hall 1998, p. 118. The 10th directive in October 1940 mentioned morale by name. However, industrial cities were only to be targeted if weather denied strikes on Bomber Command's main concern, oil.Hall 1998, p. 119.
AOC Bomber Command Arthur Harris did see German morale as a major objective.Hall 1998, p. 120. However, he did not believe that a collapse of morale could occur without the destruction of the German economy. The primary goal of Bomber Command's offensives was to destroy the German industrial base (economic warfare) and, in doing so, reduce morale. In late 1943, just before the Battle of Berlin, he declared that the power of Bomber Command would enable it to achieve "a state of devastation in which surrender is inevitable".Hall 1998, p. 137.
A summary of Harris' strategic intentions was clear:
From 1943 to the end of the war, he [Harris] and other proponents of the area offensive represented it [the bomber offensive] less as an attack on morale than as an assault on the housing, utilities, communications, and other services that supported the war production effort.
In comparison to the Allied bombing campaign against Germany, casualties due to the Blitz were relatively low. For instance, the bombing of Hamburg alone inflicted about 42,000 civilian casualties.
Popular imagery and propaganda
A popular image arose of the British people in the Second World War: a collection of people locked in national solidarity. This image entered the historiography of the Second World War in the 1980s and 1990s, especially after the publication of Angus Calder's book The Myth of the Blitz (1991). It was evoked by both the right and left political factions in Britain during the Falklands War, when it was embedded in a nostalgic narrative in which the Second World War represented aggressive British patriotism successfully defending democracy.Summerfield and Peniston-Bird 2007, p. 3.Field 2002, p. 12. This imagery of people in the Blitz was and is powerfully portrayed in film, radio, newspapers and magazines.Summerfield and Peniston-Bird 2007, p. 4. At the time it was a useful propaganda tool for home and foreign consumption.Calder 2003, pp. 17–18. Historians' critical response to this construction focused on what were seen as over-emphasised claims of righteous nationalism and national unity. In The Myth of the Blitz, Calder exposed some of the counter-evidence of anti-social and divisive behaviours. What he saw as the myth, serene national unity, became "historical truth". In particular, class division was most evident.
thumb|left|Women salvaging possessions from their bombed house, including plants and a clock.
Raids during the Blitz produced the greatest divisions and morale effects in the working-class areas. Lack of sleep, insufficient shelters and inefficiency of warning systems were causes. The loss of sleep was a particular factor, with many not bothering to attend inconvenient shelters. The Communist Party made political capital out of these difficulties.Calder 2003, pp. 125–126.
In the wake of the Coventry Blitz, there was widespread agitation from the Communist Party over the need for bomb-proof shelters. Many Londoners, in particular, took to using the Underground railway system, without authority, for shelter and sleeping through the night there until the following morning. So worried was the Government over the sudden campaign of leaflets and posters distributed by the Communist Party in Coventry and London that the police were sent in to seize their production facilities. The Government, up until November 1940, was opposed to the centralised organisation of shelter. Home Secretary Sir John Anderson was replaced by Morrison soon afterwards, in the wake of a Cabinet reshuffle as the dying Neville Chamberlain resigned. Morrison warned that he could not counter the Communist unrest unless provision of shelters was made. He recognised the right of the public to seize tube stations and authorised plans to improve their condition and expand them by tunnelling. Still, many British citizens who had been members of the Labour Party, itself inert over the issue, turned to the Communist Party. The Communists attempted to blame the damage and casualties of the Coventry raid on the rich factory owners, big business and landowning interests, and called for a negotiated peace. Though they failed to make a large gain in influence, the membership of the Party had doubled by June 1941.Calder 2003, pp. 83–84. The "Communist threat" was deemed important enough for Herbert Morrison to order, with the support of the Cabinet, the stoppage of the Daily Worker and The Week, the Communist newspaper and journal respectively.Calder 2003, p. 88.
The brief success of the Communists also played into the hands of the British Union of Fascists (BUF). Anti-Semitic attitudes became widespread, particularly in London. Rumours that Jewish support was underpinning the Communist surge were frequent. Rumours that Jews were inflating prices, were responsible for the Black Market, were the first to panic under attack (even the cause of the panic), and secured the best shelters via underhanded methods were also widespread. Moreover, there was also racial antagonism between the small Black, Indian and Jewish communities. However, the feared race riots did not transpire despite the mixing of different peoples into confined areas.Field 2002, p. 19.
In other cities, class conflict was more evident. Over a quarter of London's population had left the city by November 1940. Civilians left for more remote areas of the country. Upsurges in population in south Wales and Gloucester indicated where these displaced people went. Other factors, including the dispersal of industry, may also have played a part. However, resentment of rich self-evacuees and hostile treatment of poor ones were signs of persisting class resentment, although these did not appear to threaten social order.Calder 2003, pp. 129–130. The evacuees totalled 1.4 million, including a high proportion from the poorest inner-city families. Reception committees were completely unprepared for the condition of some of the children. Far from displaying the nation's unity in time of war, the scheme backfired, often aggravating class antagonism and bolstering prejudice about the urban poor. Within four months, 88% of evacuated mothers, 86% of small children, and 43% of school children had been returned home. The lack of bombing in the Phoney War contributed significantly to the return of people to the cities, but class conflict was not eased a year later when evacuation operations had to be put into effect again.
Archive audio recordings
In recent years a large number of wartime recordings relating to the Blitz have been made available on audiobooks such as The Blitz, The Home Front and British War Broadcasting. These collections include period interviews with civilians, servicemen, aircrew, politicians and Civil Defence personnel, as well as Blitz actuality recordings, news bulletins and public information broadcasts. Notable interviews include Thomas Alderson, the first recipient of the George Cross, John Cormack, who survived eight days trapped beneath rubble on Clydeside, and Herbert Morrison's famous "Britain shall not burn" appeal for more fireguards in December 1940.Hayward 2007, www.ltmrecordings.com/blitz1notes.html
Use of bombsite rubble
In one 6-month period, 750,000 tons of bombsite rubble from London were transported by railway on 1,700 freight trains to make runways on Bomber Command airfields in East Anglia. Bombsite rubble from Birmingham was used to make runways on US Air Force bases in Kent and Essex in southeast England.Sucking Eggs (largely about wartime rationing in Britain), by Patricia Nicol, page 237, Vintage Books, London, 2010, ISBN 9780099521129
Many sites of bombed buildings, when cleared of rubble, were cultivated to grow vegetables to ease wartime food shortages and were known as "victory gardens".
Tables
Bombing raid statistics
Below is a table by city of the number of major raids (where at least 100 tons of bombs were dropped) and tonnage of bombs dropped during these major raids. Smaller raids are not included in the tonnages.
City | Tonnage of high explosives dropped | Number of major air raids
London | 18,291 | 71
Liverpool/Merseyside | 1,957 | 8
Birmingham | 1,852 | 8
Glasgow/Clydeside | 1,329 | 5
Plymouth | 1,228 | 8
Bristol | 919 | 6
Coventry | 818 | 2
Portsmouth | 687 | 3
Southampton | 647 | 4
Hull | 593 | 3
Manchester | 578 | 3
Belfast | 440 | 2
Sheffield | 355 | 2
Newcastle | 155 | 1
Nottingham | 137 | 1
Cardiff | 115 | 1
Source: John Ray, The Night Blitz, p. 264. ISBN 0-304-35676-X
Sorties flown
The Blitz: Estimated Luftwaffe sorties
Month | Day sorties (losses) | Night sorties (losses) | Luftflotte 2 sorties | Luftflotte 3 sorties | Major attacks | Heavy attacks
October 1940 | 2,300 (79) | 5,900 (23) | 2,400 | 3,500 | 25 | 4
November 1940 | 925 (65) | 6,125 (48) | 1,600 | 4,525 | 23 | 2
December 1940 | 650 (24) | 3,450 (44) | 700 | 2,750 | 11 | 5
January 1941 | 675 (7) | 2,050 (22) | 450 | 1,600 | 7 | 6
February 1941 | 500 (9) | 1,450 (18) | 475 | 975 | – | 2
March 1941 | 800 (8) | 4,275 (46) | 1,625 | 2,650 | 12 | 3
April 1941 | 800 (9) | 5,250 (58) | 1,500 | 3,750 | 16 | 5
May 1941 | 200 (3) | 3,800 (55) | 1,300 | 2,500 | 11 | 3
See also
Baedeker Blitz
Operation Steinbock
V-1 flying bomb
V-2 rocket
Strategic bombing during World War II
Bombing of Wiener Neustadt in World War II
List of Polish cities damaged in World War II
References
Addison, Paul and Jeremy Crang. The Burning Blue: A New History of the Battle of Britain. London: Pimlico, 2000. ISBN 0-7126-6475-0.
Bungay, Stephen. The Most Dangerous Enemy: A History of the Battle of Britain. London: Aurum Press, 2000. ISBN 1-85410-801-8
Calder, Angus. The Myth of the Blitz. Pimlico, London, 2003. ISBN 0-7126-9820-5
Collier, Richard. Eagle Day: The Battle of Britain, 6 August – 15 September 1940. J.M Dent and Sons Ltd. 1980. ISBN 0-460-04370-6
Cooper, Matthew. The German Air Force 1933–1945: An Anatomy of Failure. New York: Jane's. 1981. ISBN 0-531-03733-9
Corum, James. The Luftwaffe: Creating the Operational Air War, 1918–1940. Kansas University Press. 1997. ISBN 978-0-7006-0836-2
de Zeng, Henry L., Doug G. Stankey and Eddie J. Creek. Bomber Units of the Luftwaffe 1933–1945: A Reference Source, Volume 1. Hersham, Surrey, UK: Ian Allen Publishing, 2007. ISBN 978-1-85780-279-5.
de Zeng, Henry L., Doug G. Stankey and Eddie J. Creek. Bomber Units of the Luftwaffe 1933–1945: A Reference Source, Volume 2. Hersham, Surrey, UK: Ian Allen Publishing, 2007. ISBN 978-1-903223-87-1.
Faber, Harold. Luftwaffe: An analysis by former Luftwaffe Generals. Sidgwick and Jackson, London, 1977. ISBN 0-283-98516-X
Field, Geoffrey. 'Nights Underground in Darkest London: The Blitz, 1940–1941', in International Labour and Working-Class History. Issue No. 62, Class and Catastrophe: September 11 and Other Working-Class Disasters. (Autumn, 2002), pp. 11–49. OCLC 437133095
Gaskin, M.J. Blitz: The Story of the 29th December 1940. Faber and Faber, London. 2006. ISBN 0-571-21795-8
Goss, Chris. The Luftwaffe Bombers' Battle of Britain. Crecy Publishing. 2000, ISBN 0-947554-82-3
Hall, Cargill. Case Studies In Strategic Bombardment. Air Force History and Museums Program, 1998. ISBN 0-16-049781-7.
Hill, Maureen. The Blitz. Marks and Spencer, London, 2002. ISBN 1-84273-750-3
Holland, James. The Battle of Britain: Five Months that Changed History. Bantam Press, London, 2007. ISBN 978-0-593-05913-5
Hough, Richard and Denis Richards. The Battle of Britain. Pen & Sword, 2007. ISBN 1-84415-657-5
Isby, David. The Luftwaffe and the War at Sea, 1939–1945. Chatham Publishing, London, 2005. ISBN 1-86176-256-9
James, T.C.G and Cox, Sebastian. The Battle of Britain. Frank Cass, London. 2000. ISBN 0-7146-8149-0
Levine, Joshua. Forgotten Voices of the Blitz and the Battle for Britain, Ebury Press, 2006. ISBN 978-0-09-191003-7
Mackay, Ron. Heinkel He 111 (Crowood Aviation Series). Ramsbury, Marlborough, Wiltshire, UK: Crowood Press, 2003. ISBN 1-86126-576-X.
Mitcham, Samuel W. Retreat to the Reich: The German Defeat in France, 1944. Stackpole, 2007, ISBN 978-0-8117-3384-7
Montgomery-Hyde, H. British Air Policy Between the Wars. Heinemann, London, 1976. SBN 434 47983 7
National Archives. The Rise and Fall of the German Air Force, 1933–1945. 2000. ISBN 978-1-905615-30-8
Neitzel, Sönke. Kriegsmarine and Luftwaffe co-operation in the war against Britain. War in History Journal. 2003, Volume 10: pp. 448–463.
Overy, Richard. "Hitler and Air Strategy". Journal of Contemporary History 15 (3): 405–421. July 1980
Overy, Richard. The Air War, 1939–1945. Potomac Books, Washington, 1980. ISBN 978-1-57488-716-7.
Price, Alfred. Battle of Britain Day: 15 September 1940. Greenhill books. London. 1990. ISBN 978-1-85367-375-7
Price, Alfred. Blitz on Britain 1939–45, Sutton Publishing, 2000. ISBN 0-7509-2356-3
Price, Alfred. Instruments of darkness: the history of electronic warfare, 1939–1945. Greenhill, London, 1977. ISBN 978-1-85367-616-1
Raeder, Erich. Erich Raeder, Grand Admiral. New York: Da Capo Press. United States Naval Institute, 2001. ISBN 0-306-80962-1.
Ramsey, Winston (1988). The Blitz Then and Now, Volume 2, After the Battle; First Editions edition. ISBN 978-0-90091-354-9.
Ray, John. The Battle of Britain: Dowding and the First Victory, 1940. London: Cassell Military Paperbacks, 2009. ISBN 978-1-4072-2131-1
Ray, John. The Night Blitz: 1940–1941. Cassell Military, London. 1996. ISBN 0-304-35676-X
Roberts, Andrew. Chapter 3: Last Hope Island in The Storm of War: A New History of the Second World War. ISBN 978-0-06-122859-9
Sansom, William. The Blitz: Westminster at war. Oxford University Press, 1990. ISBN 978-0-57-127271-6
Shores, Christopher. Duel for the Sky: Ten Crucial Battles of World War II. Grub Street, London 1985. ISBN 978-0-7137-1601-6
Stansky, Peter. The First Day of the Blitz. Yale University Press, 2007. ISBN 978-0-300-12556-6
Summerfield, Penny and Peniston-Bird, Corina. Contesting Home Defence: Men, Women and the Home Guard in the Second World War. Manchester University Press, Manchester, 2007. ISBN 978-0-7190-6202-5
Taylor, John W.R. Boulton Paul Defiant: Combat Aircraft of the World from 1909 to the present. New York: G.P. Putnam's Sons, 1969. ISBN 0-425-03633-2. p. 326
White, Ian. The History of Air Intercept Radar & the British Night fighter 1935–1939. Pen & Sword, 2007, ISBN 978-1-84415-532-3
External links
The Blitz Original reports and pictures from The Times
Archive recordings from The Blitz, 1940–41 (audiobook)
The Blitz: Sorting the Myth from the Reality, BBC History
Exploring 20th century London – The Blitz Objects and photographs from the collections of the Museum of London, London Transport Museum, Jewish Museum and Museum of Croydon.
Liverpool Blitz Experience 24 hours in a city under fire in the Blitz.
First Hand Accounts of the Blitz StoryVault Oral History Project
Forgotten Voices of the Blitz and the Battle for Britain
War and peace and the price of cat fish World War II diary of resident in south-west London.
Oral history interview with Barry Fulford, recalling his childhood during the Blitz from the Veteran's History Project at Central Connecticut State University
Interactive bombing map of London
Interactive bombing map of a North East Town
Interactive bombing map of Buckinghamshire
Childhood Wartime memories, from "Memoro – The Bank of Memories" (Joy Irvin)
Category:Battle of Britain
Category:1940 in London
Category:1941 in London
Category:1940 in military history
Category:1941 in military history
Category:1940 in the United Kingdom
Category:1941 in the United Kingdom
Category:Battles and military actions in London
Category:Airstrikes
Category:United Kingdom home front during World War II
Han dynasty
The Han dynasty was the second imperial dynasty of China (206 BC–220 AD), preceded by the Qin dynasty (221–206 BC) and succeeded by the Three Kingdoms period (220–280 AD). Spanning over four centuries, the Han period is considered a golden age in Chinese history. To this day, China's majority ethnic group refers to itself as the "Han people" and the Chinese script is referred to as "Han characters".
It was founded by the rebel leader Liu Bang, known posthumously as Emperor Gaozu of Han, and briefly interrupted by the Xin dynasty (9–23 AD) of the former regent Wang Mang. This interregnum separates the Han dynasty into two periods: the Western Han or Former Han (206 BC – 9 AD) and the Eastern Han or Later Han (25–220 AD).
The emperor was at the pinnacle of Han society. He presided over the Han government but shared power with both the nobility and appointed ministers who came largely from the scholarly gentry class. The Han Empire was divided into areas directly controlled by the central government using an innovation inherited from the Qin known as commanderies, and a number of semi-autonomous kingdoms. These kingdoms gradually lost all vestiges of their independence, particularly following the Rebellion of the Seven States. From the reign of Emperor Wu onward, the Chinese court officially sponsored Confucianism in education and court politics, synthesized with the cosmology of later scholars such as Dong Zhongshu. This policy endured until the fall of the Qing dynasty in 1911 AD.
The Han dynasty was an age of economic prosperity and saw a significant growth of the money economy first established during the Zhou dynasty (c. 1050–256 BC). The coinage issued by the central government mint in 119 BC remained the standard coinage of China until the Tang dynasty (618–907 AD). The period saw a number of limited institutional innovations. To pay for its military campaigns and the settlement of newly conquered frontier territories, the government nationalized the private salt and iron industries in 117 BC, but these government monopolies were repealed during the Eastern Han period. Science and technology during the Han period saw significant advances, including papermaking, the nautical steering rudder, the use of negative numbers in mathematics, the raised-relief map, the hydraulic-powered armillary sphere for astronomy, and a seismometer employing an inverted pendulum.
The Xiongnu, a nomadic steppe confederation, defeated the Han in 200 BC and forced the Han to submit as a de facto inferior partner, but continued their raids on the Han borders. Emperor Wu of Han (r. 141–87 BC) launched several military campaigns against them. The ultimate Han victory in these wars eventually forced the Xiongnu to accept vassal status as Han tributaries. These campaigns expanded Han sovereignty into the Tarim Basin of Central Asia, divided the Xiongnu into two separate confederations, and helped establish the vast trade network known as the Silk Road, which reached as far as the Mediterranean world. The territories north of Han's borders were quickly overrun by the nomadic Xianbei confederation. Emperor Wu also launched successful military expeditions in the south, annexing Nanyue in 111 BC and Dian in 109 BC, and in the Korean Peninsula where the Xuantu and Lelang Commanderies were established in 108 BC. After 92 AD, the palace eunuchs increasingly involved themselves in court politics, engaging in violent power struggles between the various consort clans of the empresses and empresses dowager, causing the Han's ultimate downfall. Imperial authority was also seriously challenged by large Daoist religious societies which instigated the Yellow Turban Rebellion and the Five Pecks of Rice Rebellion. Following the death of Emperor Ling (r. 168–189 AD), the palace eunuchs suffered wholesale massacre by military officers, allowing members of the aristocracy and military governors to become warlords and divide the empire. When Cao Pi, King of Wei, usurped the throne from Emperor Xian, the Han dynasty ceased to exist.
Etymology
According to the Records of the Grand Historian, after the collapse of the Qin dynasty the hegemon Xiang Yu appointed Liu Bang as prince of the small fief of Hanzhong, named after its location on the Han River (in modern southwest Shaanxi). Following Liu Bang's victory in the Chu–Han Contention, the resulting Han dynasty was named after the Hanzhong fief.
History
Western Han
China's first imperial dynasty was the Qin dynasty (221–206 BC). The Qin unified the Chinese Warring States by conquest, but their empire became unstable after the death of the first emperor Qin Shi Huangdi. Within four years, the dynasty's authority had collapsed in the face of rebellion. Two former rebel leaders, Xiang Yu (d. 202 BC) of Chu and Liu Bang (d. 195 BC) of Han, engaged in a war to decide who would become hegemon of China, which had fissured into 18 kingdoms, each claiming allegiance to either Xiang Yu or Liu Bang. Although Xiang Yu proved to be a capable commander, Liu Bang defeated him at the Battle of Gaixia (202 BC), in modern-day Anhui. Liu Bang assumed the title "emperor" (huangdi) at the urging of his followers and is known posthumously as Emperor Gaozu (r. 202–195 BC). Chang'an was chosen as the new capital of the reunified empire under Han.
At the beginning of the Western Han dynasty, thirteen centrally controlled commanderies—including the capital region—existed in the western third of the empire, while the eastern two-thirds were divided into ten semi-autonomous kingdoms. To placate his prominent commanders from the war with Chu, Emperor Gaozu enfeoffed some of them as kings. By 157 BC, the Han court had replaced all of these kings with royal Liu family members, since the loyalty of non-relatives to the throne was questioned. After several insurrections by Han kings—the largest being the Rebellion of the Seven States in 154 BC—the imperial court enacted a series of reforms beginning in 145 BC limiting the size and power of these kingdoms and dividing their former territories into new centrally controlled commanderies. Kings were no longer able to appoint their own staff; this duty was assumed by the imperial court. Kings became nominal heads of their fiefs and collected a portion of tax revenues as their personal incomes.; . The kingdoms were never entirely abolished and existed throughout the remainder of Western and Eastern Han.
thumb|250px|Han dynasty in 100 BC
thumb|250px|Provinces controlled by Han dynasty in 190 AD
To the north of China proper, the nomadic Xiongnu chieftain Modu Chanyu (r. 209–174 BC) conquered various tribes inhabiting the eastern portion of the Eurasian Steppe. By the end of his reign, he controlled Manchuria, Mongolia, and the Tarim Basin, subjugating over twenty states east of Samarkand.; ; . Emperor Gaozu was troubled about the abundant Han-manufactured iron weapons traded to the Xiongnu along the northern borders, and he established a trade embargo against the group. Although the embargo was in place, the Xiongnu found traders willing to supply their needs. Chinese forces also mounted surprise attacks against Xiongnu who traded at the border markets.Jerry Bentley, Old World Encounters: Cross Cultural Contacts and Exchanges in Pre-Modern Times (New York: Oxford University Press, 1993), 37. In retaliation, the Xiongnu invaded what is now Shanxi province, where they defeated the Han forces at Baideng in 200 BC.; . After negotiations, the heqin agreement in 198 BC nominally held the leaders of the Xiongnu and the Han as equal partners in a royal marriage alliance, but the Han were forced to send large amounts of tribute items such as silk clothes, food, and wine to the Xiongnu.; ; .
thumb|150px|right|A silk banner from Mawangdui, Changsha, Hunan province. It was draped over the coffin of Lady Dai (d. 168 BC), wife of the Marquess Li Cang (利蒼) (d. 186 BC), chancellor for the Kingdom of Changsha.
Despite the tribute and a negotiation between Laoshang Chanyu (r. 174–160 BC) and Emperor Wen (r. 180–157 BC) to reopen border markets, many of the Chanyu's Xiongnu subordinates chose not to obey the treaty and periodically raided Han territories south of the Great Wall for additional goods.; ; . In a court conference assembled by Emperor Wu (r. 141–87 BC) in 135 BC, the majority consensus of the ministers was to retain the heqin agreement. Emperor Wu accepted this, despite continuing Xiongnu raids.; . However, a court conference the following year convinced the majority that a limited engagement at Mayi involving the assassination of the Chanyu would throw the Xiongnu realm into chaos and benefit the Han.; . When this plot failed in 133 BC, Emperor Wu launched a series of massive military invasions into Xiongnu territory. Chinese armies captured one stronghold after another and established agricultural colonies to strengthen their hold. The assault culminated in 119 BC at the Battle of Mobei, where the Han commanders Huo Qubing (d. 117 BC) and Wei Qing (d. 106 BC) forced the Xiongnu court to flee north of the Gobi Desert.; .
After Wu's reign, Han forces continued to prevail against the Xiongnu. The Xiongnu leader Huhanye Chanyu (呼韓邪) (r. 58–31 BC) finally submitted to Han as a tributary vassal in 51 BC. His rival claimant to the throne, Zhizhi Chanyu (r. 56–36 BC), was killed by Chen Tang and Gan Yanshou (甘延壽/甘延寿) at the Battle of Zhizhi, in modern Taraz, Kazakhstan.; .
thumb|left|A gilded bronze oil lamp in the shape of a kneeling female servant, dated 2nd century BC, found in the tomb of Dou Wan, wife of the Han prince Liu Sheng; its sliding shutter allows for adjustments in the direction and brightness of the light while it also traps smoke within the body.; .
In 121 BC, Han forces expelled the Xiongnu from a vast territory spanning the Hexi Corridor to Lop Nur. They repelled a joint Xiongnu-Qiang invasion of this northwestern territory in 111 BC. In that year, the Han court established four new frontier commanderies in this region: Jiuquan, Zhangyi, Dunhuang, and Wuwei.; ; . The majority of people on the frontier were soldiers. On occasion, the court forcibly moved peasant farmers to new frontier settlements, along with government-owned slaves and convicts who performed hard labor. The court also encouraged commoners, such as farmers, merchants, landowners, and hired laborers, to voluntarily migrate to the frontier.
Even before Han's expansion into Central Asia, diplomat Zhang Qian's travels from 139 to 125 BC had established Chinese contacts with many surrounding civilizations. Zhang encountered Dayuan (Fergana), Kangju (Sogdiana), and Daxia (Bactria, formerly the Greco-Bactrian Kingdom); he also gathered information on Shendu (Indus River valley of North India) and Anxi (the Parthian Empire). All of these countries eventually received Han embassies.; ; ; ; . These connections marked the beginning of the Silk Road trade network that extended to the Roman Empire, bringing Han items like silk to Rome and Roman goods such as glasswares to China.; .
From roughly 115 to 60 BC, Han forces fought the Xiongnu over control of the oasis city-states in the Tarim Basin. Han was eventually victorious and established the Protectorate of the Western Regions in 60 BC, which dealt with the region's defense and foreign affairs.; ; ; . The Han also expanded southward. The naval conquest of Nanyue in 111 BC expanded the Han realm into what are now modern Guangdong, Guangxi, and northern Vietnam. Yunnan was brought into the Han realm with the conquest of the Dian Kingdom in 109 BC, followed by parts of the Korean Peninsula with the colonial establishments of Xuantu Commandery and Lelang Commandery in 108 BC.; . In China's first known nationwide census taken in 2 AD, the population was registered as having 57,671,400 individuals in 12,366,470 households.
To pay for his military campaigns and colonial expansion, Emperor Wu nationalized several private industries. He created central government monopolies administered largely by former merchants. These monopolies included salt, iron, and liquor production, as well as bronze-coin currency. The liquor monopoly lasted only from 98 to 81 BC, and the salt and iron monopolies were eventually abolished in early Eastern Han. The issuing of coinage remained a central government monopoly throughout the rest of the Han dynasty.; ; ; ; ; see also . The government monopolies were eventually repealed when a political faction known as the Reformists gained greater influence in the court. The Reformists opposed the Modernist faction that had dominated court politics in Emperor Wu's reign and during the subsequent regency of Huo Guang (d. 68 BC). The Modernists argued for an aggressive and expansionary foreign policy supported by revenues from heavy government intervention in the private economy. The Reformists, however, overturned these policies, favoring a cautious, non-expansionary approach to foreign policy, frugal budget reform, and lower tax-rates imposed on private entrepreneurs.; ; .
Wang Mang's reign and civil war
Wang Zhengjun (71 BC–13 AD) was first empress, then empress dowager, and finally grand empress dowager during the reigns of the Emperors Yuan (r. 49–33 BC), Cheng (r. 33–7 BC), and Ai (r. 7–1 BC), respectively. During this time, a succession of her male relatives held the title of regent.; . Following the death of Ai, Wang Zhengjun's nephew Wang Mang (45 BC–23 AD) was appointed regent as Marshall of State on 16 August under Emperor Ping (r. 1 BC – 6 AD).
When Ping died on 3 February 6 AD, Ruzi Ying (d. 25 AD) was chosen as the heir and Wang Mang was appointed to serve as acting emperor for the child. Wang promised to relinquish his control to Liu Ying once he came of age. Despite this promise, and against protest and revolts from the nobility, Wang Mang claimed on 10 January that the divine Mandate of Heaven called for the end of the Han dynasty and the beginning of his own: the Xin dynasty (9–23 AD).; ; .
Wang Mang initiated a series of major reforms that were ultimately unsuccessful. These reforms included outlawing slavery, nationalizing land to equally distribute between households, and introducing new currencies, a change which debased the value of coinage.; ; ; . Although these reforms provoked considerable opposition, Wang's regime met its ultimate downfall with the massive floods of c. 3 AD and 11 AD. Gradual silt buildup in the Yellow River had raised its water level and overwhelmed the flood control works. The Yellow River split into two new branches: one emptying to the north and the other to the south of the Shandong Peninsula, though Han engineers managed to dam the southern branch by 70 AD.; ; .
The flood dislodged thousands of peasant farmers, many of whom joined roving bandit and rebel groups such as the Red Eyebrows to survive. Wang Mang's armies were incapable of quelling these enlarged rebel groups. Eventually, an insurgent mob forced their way into the Weiyang Palace and killed Wang Mang.; .
thumb|left|A spade-shaped bronze coin issued during Wang Mang's (r. 9–23 AD) reign
The Gengshi Emperor (r. 23–25 AD), a descendant of Emperor Jing (r. 157–141 BC), attempted to restore the Han dynasty and occupied Chang'an as his capital. However, he was overwhelmed by the Red Eyebrow rebels who deposed, assassinated, and replaced him with the puppet monarch Liu Penzi.; . Emperor Gengshi's brother Liu Xiu, known posthumously as Emperor Guangwu (r. 25–57 AD), after distinguishing himself at the Battle of Kunyang in 23 AD, was urged to succeed Gengshi as emperor.; .
Under Guangwu's rule the Han Empire was restored. Guangwu made Luoyang his capital in 25 AD, and by 27 AD his officers Deng Yu and Feng Yi had forced the Red Eyebrows to surrender and executed their leaders for treason.; . From 26 until 36 AD, Emperor Guangwu had to wage war against other regional warlords who claimed the title of emperor; when these warlords were defeated, China reunified under the Han.; .
The period between the foundation of the Han dynasty and Wang Mang's reign is known as the Western Han dynasty () or Former Han dynasty () (206 BC – 9 AD). During this period the capital was at Chang'an (modern Xi'an). From the reign of Guangwu the capital was moved eastward to Luoyang. The era from his reign until the fall of Han is known as the Eastern Han dynasty () or the Later Han dynasty () (25–220 AD).
Eastern Han
The Eastern Han, also known as the Later Han, formally began on 5 August 25, when Liu Xiu became Emperor Guangwu of Han. During the widespread rebellion against Wang Mang, the state of Goguryeo was free to raid Han's Korean commanderies; Han did not reaffirm its control over the region until AD 30. The Trưng Sisters of Vietnam rebelled against Han in AD 40. Their rebellion was crushed by Han general Ma Yuan (d. AD 49) in a campaign from AD 42–43.; . Wang Mang renewed hostilities against the Xiongnu, who were estranged from Han until their leader Bi (比), a rival claimant to the throne against his cousin Punu (蒲奴), submitted to Han as a tributary vassal in AD 50. This created two rival Xiongnu states: the Southern Xiongnu led by Bi, an ally of Han, and the Northern Xiongnu led by Punu, an enemy of Han.; .
During the turbulent reign of Wang Mang, Han lost control over the Tarim Basin, which was conquered by the Northern Xiongnu in AD 63 and used as a base to invade Han's Hexi Corridor in Gansu. Dou Gu (d. 88 AD) defeated the Northern Xiongnu at the Battle of Yiwulu in AD 73, evicting them from Turpan and chasing them as far as Lake Barkol before establishing a garrison at Hami. After the new Protector General of the Western Regions Chen Mu (d. AD 75) was killed by allies of the Xiongnu in Karasahr and Kucha, the garrison at Hami was withdrawn.; . At the Battle of Ikh Bayan in AD 89, Dou Xian (d. AD 92) defeated the Northern Xiongnu chanyu who then retreated into the Altai Mountains.; . After the Northern Xiongnu fled into the Ili River valley in AD 91, the nomadic Xianbei occupied the area from the borders of the Buyeo Kingdom in Manchuria to the Ili River of the Wusun people. The Xianbei reached their apogee under Tanshihuai (檀石槐) (d. AD 180), who consistently defeated Chinese armies. However, Tanshihuai's confederation disintegrated after his death.
Ban Chao (d. AD 102) enlisted the aid of the Kushan Empire, occupying the area of modern India, Pakistan, Afghanistan, and Tajikistan, to subdue Kashgar and its ally Sogdiana. When a request by Kushan ruler Vima Kadphises (r. c. 90–c. 100 AD) for a marriage alliance with the Han was rejected in AD 90, he sent his forces to Wakhan (Afghanistan) to attack Ban Chao. The conflict ended with the Kushans withdrawing because of lack of supplies.; . In AD 91, the office of Protector General of the Western Regions was reinstated when it was bestowed on Ban Chao.
Foreign travelers to Eastern-Han China include Buddhist monks who translated works into Chinese, such as An Shigao from Parthia, and Lokaksema from Kushan-era Gandhara, India.; . In addition to tributary relations with the Kushans, the Han Empire received gifts from the Parthian Empire, from a king in modern Burma, from a ruler in Japan, and initiated an unsuccessful mission to Daqin (Rome) in AD 97 with Gan Ying as emissary.; . A Roman embassy of Emperor Marcus Aurelius (r. 161–180 AD) is recorded in the Weilüe and Hou Hanshu to have reached the court of Emperor Huan of Han (r. AD 146–168) in AD 166, yet Rafe de Crespigny asserts that this was most likely a group of Roman merchants.; . In addition to Roman glasswares and coins found in China,; Roman medallions from the reign of Antoninus Pius and his adopted son Marcus Aurelius have been found at Óc Eo in Vietnam.; This was near the commandery of Rinan (also Jiaozhi) where Chinese sources claim the Romans first landed, as well as embassies from Tianzhu (in northern India) in the years 159 and 161.; Óc Eo is also thought to be the port city "Cattigara" described by Ptolemy in his Geography (c. 150 AD) as lying east of the Golden Chersonese (Malay Peninsula) along the Magnus Sinus (i.e. Gulf of Thailand and South China Sea), where a Greek sailor had visited.; ; ;
left|thumb|200px|The Gansu Flying Horse, depicted in full gallop, bronze sculpture, height 34.5 cm. Wuwei, Gansu, China, AD 25–220
Emperor Zhang's (r. 75–88 AD) reign came to be viewed by later Eastern Han scholars as the high point of the dynastic house. Subsequent reigns were increasingly marked by eunuch intervention in court politics and their involvement in the violent power struggles of the imperial consort clans.; . With the aid of the eunuch Zheng Zhong (d. 107 AD), Emperor He (r. 88–105 AD) had Empress Dowager Dou (d. 97 AD) put under house arrest and her clan stripped of power. This was in revenge for Dou's purging of the clan of his natural mother—Consort Liang—and then concealing her identity from him.; . After Emperor He's death, his wife Empress Deng Sui (d. 121 AD) managed state affairs as the regent empress dowager during a turbulent financial crisis and widespread Qiang rebellion that lasted from 107 to 118 AD.; .
When Empress Dowager Deng died, Emperor An (r. 106–125 AD) was convinced by the accusations of the eunuchs Li Run (李閏) and Jiang Jing (江京) that Deng and her family had planned to depose him. An dismissed Deng's clan members from office, exiled them and forced many to commit suicide.; . After An's death, his wife, Empress Dowager Yan (d. 126 AD) placed the child Marquess of Beixiang on the throne in an attempt to retain power within her family. However, palace eunuch Sun Cheng (d. 132 AD) masterminded a successful overthrow of her regime to enthrone Emperor Shun of Han (r. 125–144 AD). Yan was placed under house arrest, her relatives were either killed or exiled, and her eunuch allies were slaughtered.; . The regent Liang Ji (d. 159 AD), brother of Empress Liang Na (d. 150 AD), had the brother-in-law of Consort Deng Mengnü (later empress) (d. 165 AD) killed after Deng Mengnü resisted Liang Ji's attempts to control her. Afterward, Emperor Huan employed eunuchs to depose Liang Ji, who was then forced to commit suicide.; .
thumb|These rammed earth ruins of a granary in Hecang Fortress (Chinese: 河仓城; Pinyin: Hécāngchéng), located ~11 km (7 miles) northeast of the Western-Han-era Yumen Pass, were built during the Western Han (202 BC – 9 AD) and significantly rebuilt during the Western Jin (280–316 AD).
Students from the Imperial University organized a widespread student protest against the eunuchs of Emperor Huan's court. Huan further alienated the bureaucracy when he initiated grandiose construction projects and hosted thousands of concubines in his harem at a time of economic crisis.; . Palace eunuchs imprisoned the official Li Ying (李膺) and his associates from the Imperial University on a dubious charge of treason. In 167 AD, the Grand Commandant Dou Wu (d. 168 AD) convinced his son-in-law, Emperor Huan, to release them. However the emperor permanently barred Li Ying and his associates from serving in office, marking the beginning of the Partisan Prohibitions.
Following Huan's death, Dou Wu and the Grand Tutor Chen Fan (陳蕃) (d. 168 AD) attempted a coup d'état against the eunuchs Hou Lan (d. 172 AD), Cao Jie (d. 181 AD), and Wang Fu (王甫). When the plot was uncovered, the eunuchs arrested Empress Dowager Dou (d. 172 AD) and Chen Fan. General Zhang Huan (張奐) favored the eunuchs. He and his troops confronted Dou Wu and his retainers at the palace gate where each side shouted accusations of treason against the other. When the retainers gradually deserted Dou Wu, he was forced to commit suicide.
Under Emperor Ling (r. 168–189 AD) the eunuchs had the partisan prohibitions renewed and expanded, while themselves auctioning off top government offices.; . Many affairs of state were entrusted to the eunuchs Zhao Zhong (d. 189 AD) and Zhang Rang (d. 189 AD) while Emperor Ling spent much of his time roleplaying with concubines and participating in military parades.
End of the Han dynasty
The Partisan Prohibitions were repealed during the Yellow Turban Rebellion and Five Pecks of Rice Rebellion in 184 AD, largely because the court did not want to continue to alienate a significant portion of the gentry class who might otherwise join the rebellions. The Yellow Turbans and Five-Pecks-of-Rice adherents belonged to two different hierarchical Daoist religious societies led by faith healers Zhang Jue (d. 184 AD) and Zhang Lu (d. 216 AD), respectively. Zhang Lu's rebellion, in modern northern Sichuan and southern Shaanxi, was not quelled until 215 AD. Zhang Jue's massive rebellion across eight provinces was annihilated by Han forces within a year; however, the following decades saw much smaller recurrent uprisings. Although the Yellow Turbans were defeated, many generals appointed during the crisis never disbanded their assembled militia forces and used these troops to amass power outside of the collapsing imperial authority.
General-in-Chief He Jin (d. 189 AD), half-brother to Empress He (d. 189 AD), plotted with Yuan Shao (d. 202 AD) to overthrow the eunuchs by having several generals march to the outskirts of the capital. There, in a written petition to Empress He, they demanded the eunuchs' execution. After a period of hesitation, Empress He consented. When the eunuchs discovered this, however, they had her brother He Miao (何苗) rescind the order.; Zizhi Tongjian, vol. 59. The eunuchs assassinated He Jin on September 22, 189 AD. Yuan Shao then besieged Luoyang's Northern Palace while his brother Yuan Shu (d. 199 AD) besieged the Southern Palace. On September 25 both palaces were breached and approximately two thousand eunuchs were killed.; . Zhang Rang had previously fled with Emperor Shao (r. 189 AD) and his brother Liu Xie—the future Emperor Xian of Han (r. 189–220 AD). While being pursued by the Yuan brothers, Zhang committed suicide by jumping into the Yellow River.
General Dong Zhuo (d. 192 AD) found the young emperor and his brother wandering in the countryside. He escorted them safely back to the capital and was made Minister of Works, taking control of Luoyang and forcing Yuan Shao to flee. After Dong Zhuo demoted Emperor Shao and promoted his brother Liu Xie as Emperor Xian, Yuan Shao led a coalition of former officials and officers against Dong, who burned Luoyang to the ground and resettled the court at Chang'an in May 191 AD. Dong Zhuo later poisoned Emperor Shao.
Dong was killed by his adopted son Lü Bu (d. 198 AD) in a plot hatched by Wang Yun (d. 192 AD). Emperor Xian fled from Chang'an in 195 AD to the ruins of Luoyang. Xian was persuaded by Cao Cao (155–220 AD), then Governor of Yan Province in modern western Shandong and eastern Henan, to move the capital to Xuchang in 196 AD.; .
Yuan Shao challenged Cao Cao for control over the emperor. Yuan's power was greatly diminished after Cao defeated him at the Battle of Guandu in 200 AD. After Yuan died, Cao killed Yuan Shao's son Yuan Tan (173–205 AD), who had fought with his brothers over the family inheritance.; . His brothers Yuan Shang and Yuan Xi were killed in 207 AD by Gongsun Kang (d. 221 AD), who sent their heads to Cao Cao.
After Cao's defeat at the naval Battle of Red Cliffs in 208 AD, China was divided into three spheres of influence, with Cao Cao dominating the north, Sun Quan (182–252 AD) dominating the south, and Liu Bei (161–223 AD) dominating the west.; . Cao Cao died in March 220 AD. By December his son Cao Pi (187–226 AD) had Emperor Xian relinquish the throne to him and is known posthumously as Emperor Wen of Wei. This formally ended the Han dynasty and initiated an age of conflict between three states: Cao Wei, Eastern Wu, and Shu Han.; .
Society and culture
Social class
In the hierarchical social order, the emperor was at the apex of Han society and government. However, the emperor was often a minor, ruled over by a regent such as the empress dowager or one of her male relatives. Ranked immediately below the emperor were the kings who were of the same Liu family clan.; . The rest of society, including nobles lower than kings and all commoners excluding slaves, belonged to one of twenty ranks (ershi gongcheng 二十公乘).
Each successive rank gave its holder greater pensions and legal privileges. The highest rank, of full marquess, came with a state pension and a territorial fiefdom. Holders of the rank immediately below, that of ordinary marquess, received a pension, but had no territorial rule.; . Officials who served in government belonged to the wider commoner social class and were ranked just below nobles in social prestige. The highest government officials could be enfeoffed as marquesses. By the Eastern Han period, local elites of unattached scholars, teachers, students, and government officials began to identify themselves as members of a larger, nationwide gentry class with shared values and a commitment to mainstream scholarship.; . When the government became noticeably corrupt in mid-to-late Eastern Han, many gentrymen even considered the cultivation of morally grounded personal relationships more important than serving in public office.; .
The farmer, or specifically the small landowner-cultivator, was ranked just below scholars and officials in the social hierarchy. Other agricultural cultivators were of a lower status, such as tenants, wage laborers, and in rare cases slaves.; ; ; . Artisans and craftsmen had a legal and socioeconomic status between that of owner-cultivator farmers and common merchants. State-registered merchants, who were forced by law to wear white-colored clothes and pay high commercial taxes, were considered by the gentry as social parasites with a contemptible status.; . These were often petty shopkeepers of urban marketplaces; merchants such as industrialists and itinerant traders working between a network of cities could avoid registering as merchants and were often wealthier and more powerful than the vast majority of government officials.; . Wealthy landowners, such as nobles and officials, often provided lodging for retainers who provided valuable work or duties, sometimes including fighting bandits or riding into battle. Unlike slaves, retainers could come and go from their master's home as they pleased. Medical physicians, pig breeders, and butchers had a fairly high social status, while occultist diviners, runners, and messengers had low status.; .
Marriage, gender, and kinship
The Han-era family was patrilineal and typically had four to five nuclear family members living in one household. Multiple generations of extended family members did not occupy the same house, unlike families of later dynasties.; . According to Confucian family norms, various family members were treated with different levels of respect and intimacy. For example, there were different accepted time frames for mourning the death of a father versus a paternal uncle. Arranged marriages were normal, with the father's input on his offspring's spouse being considered more important than the mother's.; . Monogamous marriages were also normal, although nobles and high officials were wealthy enough to afford and support concubines as additional lovers.; . Under certain conditions dictated by custom, not law, both men and women were able to divorce their spouses and remarry.; .
Apart from the passing of noble titles or ranks, inheritance practices did not involve primogeniture; each son received an equal share of the family property. Unlike the practice in later dynasties, the father usually sent his adult married sons away with their portions of the family fortune. Daughters received a portion of the family fortune through their marriage dowries, though this was usually much less than the shares of sons. A different distribution of the remainder could be specified in a will, but it is unclear how common this was.
Women were expected to obey the will of their father, then their husband, and then their adult son in old age. However, it is known from contemporary sources that there were many deviations from this rule, especially in regard to mothers over their sons, and empresses who ordered around and openly humiliated their fathers and brothers. Women were exempt from the annual corvée labor duties, but often engaged in a range of income-earning occupations aside from their domestic chores of cooking and cleaning.
The most common occupation for women was weaving clothes for the family, sale at market or for large textile enterprises that employed hundreds of women. Other women helped on their brothers' farms or became singers, dancers, sorceresses, respected medical physicians, and successful merchants who could afford their own silk clothes.; . Some women formed spinning collectives, aggregating the resources of several different families.
Education, literature, and philosophy
thumb|upright|A fragment of the 'Stone Classics' (熹平石經); these stone-carved Five Classics installed during Emperor Ling's reign along the roadside of the Imperial University (right outside Luoyang) were made at the instigation of Cai Yong (132–192 AD), who feared the Classics housed in the imperial library were being interpolated by University Academicians.; ; .
The early Western Han court simultaneously accepted the philosophical teachings of Legalism, Huang-Lao Daoism, and Confucianism in making state decisions and shaping government policy.; . However, the Han court under Emperor Wu gave Confucianism exclusive patronage. He abolished all academic chairs or erudites (bóshì 博士) not dealing with the Confucian Five Classics in 136 BC and encouraged nominees for office to receive a Confucian-based education at the Imperial University that he established in 124 BC.; ; ; . Unlike the original ideology espoused by Confucius, or Kongzi (551–479 BC), Han Confucianism in Emperor Wu's reign was the creation of Dong Zhongshu (179–104 BC). Dong was a scholar and minor official who aggregated the ethical Confucian ideas of ritual, filial piety, and harmonious relationships with five phases and yin-yang cosmologies.; . Much to the interest of the ruler, Dong's synthesis justified the imperial system of government within the natural order of the universe. The Imperial University grew in importance as the student body grew to over 30,000 by the 2nd century AD.; . A Confucian-based education was also made available at commandery-level schools and private schools opened in small towns, where teachers earned respectable incomes from tuition payments.
Some important texts were created and studied by scholars. Philosophical works written by Yang Xiong (53 BC – 18 AD), Huan Tan (43 BC – 28 AD), Wang Chong (27–100 AD), and Wang Fu (78–163 AD) questioned whether human nature was innately good or evil and posed challenges to Dong's universal order. The Records of the Grand Historian by Sima Tan (d. 110 BC) and his son Sima Qian (145–86 BC) established the standard model for all of imperial China's Standard Histories, such as the Book of Han written by Ban Biao (3–54 AD), his son Ban Gu (32–92 AD), and his daughter Ban Zhao (45–116 AD).; . There were dictionaries such as the Shuowen Jiezi by Xu Shen (c. 58 – c. 147 AD) and the Fangyan by Yang Xiong.; . Biographies on important figures were written by various gentrymen. Han dynasty poetry was dominated by the fu genre, which achieved its greatest prominence during the reign of Emperor Wu.; ; ; ; .
Law and order
Han scholars such as Jia Yi (201–169 BC) portrayed the previous Qin dynasty as a brutal regime. However, archaeological evidence from Zhangjiashan and Shuihudi reveal that many of the statutes in the Han law code compiled by Chancellor Xiao He (d. 193 BC) were derived from Qin law.; ; .
Various cases of rape, physical abuse and murder were prosecuted in court. Women, although usually having fewer rights by custom, were allowed to level civil and criminal charges against men.; . While suspects were jailed, convicted criminals were never imprisoned. Instead, punishments were commonly monetary fines, periods of forced hard labor for convicts, and the penalty of death by beheading. Early Han punishments of torturous mutilation were borrowed from Qin law. A series of reforms abolished mutilation punishments, replacing them with progressively less-severe beatings by the bastinado.
Acting as a judge in lawsuits was one of many duties of the county magistrate and Administrators of commanderies. Complex, high-profile or unresolved cases were often deferred to the Minister of Justice in the capital or even the emperor. Each Han county contained several districts, each overseen by a chief of police. Order in the cities was maintained by government officers in the marketplaces and constables in the neighborhoods.; .
Food
The most common staple crops consumed during Han were wheat, barley, foxtail millet, proso millet, rice, and beans. Commonly eaten fruits and vegetables included chestnuts, pears, plums, peaches, melons, apricots, strawberries, red bayberries, jujubes, calabash, bamboo shoots, mustard plant and taro. Domesticated animals that were also eaten included chickens, Mandarin ducks, geese, cows, sheep, pigs, camels and dogs (various types were bred specifically for food, while most were used as pets). Turtles and fish were taken from streams and lakes. Commonly hunted game, such as owl, pheasant, magpie, sika deer, and Chinese bamboo partridge were consumed. Seasonings included sugar, honey, salt and soy sauce. Beer and wine were regularly consumed.; .
Clothing
The types of clothing worn and the materials used during the Han period depended upon social class. Wealthy folk could afford silk robes, skirts, socks, and mittens, coats made of badger or fox fur, duck plumes, and slippers with inlaid leather, pearls, and silk lining. Peasants commonly wore clothes made of hemp, wool, and ferret skins.; ; .
Religion, cosmology, and metaphysics
thumb|right|An Eastern-Han bronze statuette of a mythical chimera (qilin), 1st century AD
Families throughout Han China made ritual sacrifices of animals and food to deities, spirits, and ancestors at temples and shrines, in the belief that these items could be utilized by those in the spiritual realm. It was thought that each person had a two-part soul: the spirit-soul (hun 魂) which journeyed to the afterlife paradise of immortals (xian), and the body-soul (po 魄) which remained in its grave or tomb on earth and was only reunited with the spirit-soul through a ritual ceremony.; . These tombs were commonly adorned with uniquely decorated hollow clay tiles that also functioned as doorjambs to the tomb. Known as tomb tiles, these artifacts feature holes in the top and bottom of the tile, allowing it to pivot. Similar tiles have been found in the Chengdu area of Sichuan province in south-central China.
In addition to his many other roles, the emperor acted as the highest priest in the land who made sacrifices to Heaven, the main deities known as the Five Powers, and the spirits (shen 神) of mountains and rivers. It was believed that the three realms of Heaven, Earth, and Mankind were linked by natural cycles of yin and yang and the five phases.; ; ; . If the emperor did not behave according to proper ritual, ethics, and morals, he could disrupt the fine balance of these cosmological cycles and cause calamities such as earthquakes, floods, droughts, epidemics, and swarms of locusts.; ; .
thumb|left|A rubbing of a Han pictorial stone showing an ancestral worship hall (citang 祠堂)
It was believed that immortality could be achieved if one reached the lands of the Queen Mother of the West or Mount Penglai.; . Han-era Daoists assembled into small groups of hermits who attempted to achieve immortality through breathing exercises, sexual techniques and use of medical elixirs. By the 2nd century AD, Daoists formed large hierarchical religious societies such as the Way of the Five Pecks of Rice. Its followers believed that the sage-philosopher Laozi (fl. 6th century BC) was a holy prophet who would offer salvation and good health if his devout followers would confess their sins, ban the worship of unclean gods who accepted meat sacrifices and chant sections of the Daodejing.
Buddhism first entered China during the Eastern Han and was first mentioned in 65 AD.; . Liu Ying (d. 71 AD), a half-brother to Emperor Ming of Han (r. 57–75 AD), was one of its earliest Chinese adherents, although Chinese Buddhism at this point was heavily associated with Huang-Lao Daoism. China's first known Buddhist temple, the White Horse Temple, was constructed outside the wall of the capital, Luoyang, during Emperor Ming's reign. Important Buddhist canons were translated into Chinese during the 2nd century AD, including the Sutra of Forty-two Chapters, Perfection of Wisdom, Shurangama Sutra, and Pratyutpanna Sutra.; see also .
Government
Central government
In Han government, the emperor was the supreme judge and lawgiver, the commander-in-chief of the armed forces and sole designator of official nominees appointed to the top posts in central and local administrations; those who earned a 600-bushel salary-rank or higher.; . Theoretically, there were no limits to his power. However, state organs with competing interests and institutions such as the court conference (tingyi 廷議)—where ministers were convened to reach majority consensus on an issue—pressured the emperor to accept the advice of his ministers on policy decisions.; . If the emperor rejected a court conference decision, he risked alienating his high ministers. Nevertheless, emperors sometimes did reject the majority opinion reached at court conferences.
Below the emperor were his cabinet members known as the Three Councillors of State (San gong 三公). These were the Chancellor or Minister over the Masses (Chengxiang 丞相 or Da situ 大司徒), the Imperial Counselor or Excellency of Works (Yushi dafu 御史大夫 or Da sikong 大司空), and Grand Commandant or Grand Marshal (Taiwei 太尉 or Da sima 大司馬).; .
The Chancellor, whose title was changed to 'Minister over the Masses' in 8 BC, was chiefly responsible for drafting the government budget. The Chancellor's other duties included managing provincial registers for land and population, leading court conferences, acting as judge in lawsuits and recommending nominees for high office. He could appoint officials below the salary-rank of 600 bushels.; .
The Imperial Counselor's chief duty was to conduct disciplinary procedures for officials. He shared similar duties with the Chancellor, such as receiving annual provincial reports. However, when his title was changed to Minister of Works in 8 BC, his chief duty became oversight of public works projects.; .
thumb|left|A scene of historic paragons of filial piety conversing with one another, Chinese painted artwork on a lacquered basketwork box, excavated from an Eastern-Han tomb of what was the Chinese Lelang Commandery in modern North Korea
The Grand Commandant, whose title was changed to Grand Marshal in 119 BC before reverting to Grand Commandant in 51 AD, was the irregularly posted commander of the military and then regent during the Western Han period. In the Eastern Han era he was chiefly a civil official who shared many of the same censorial powers as the other two Councillors of State.; .
Ranked below the Three Councillors of State were the Nine Ministers (Jiu qing 九卿), who each headed a specialized ministry. The Minister of Ceremonies (Taichang 太常) was the chief official in charge of religious rites, rituals, prayers and the maintenance of ancestral temples and altars.; ; . The Minister of the Household (Guang lu xun 光祿勳) was in charge of the emperor's security within the palace grounds, external imperial parks and wherever the emperor made an outing by chariot.; . The Minister of the Guards (Weiwei 衛尉) was responsible for securing and patrolling the walls, towers, and gates of the imperial palaces.; . The Minister Coachman (Taipu 太僕) was responsible for the maintenance of imperial stables, horses, carriages and coach-houses for the emperor and his palace attendants, as well as the supply of horses for the armed forces.; . The Minister of Justice (Tingwei 廷尉) was the chief official in charge of upholding, administering, and interpreting the law.; . The Minister Herald (Da honglu 大鴻臚) was the chief official in charge of receiving honored guests at the imperial court, such as nobles and foreign ambassadors.; . The Minister of the Imperial Clan (Zongzheng 宗正) oversaw the imperial court's interactions with the empire's nobility and extended imperial family, such as granting fiefs and titles.; . The Minister of Finance (Da sinong 大司農) was the treasurer for the official bureaucracy and the armed forces who handled tax revenues and set standards for units of measurement.; . The Minister Steward (Shaofu 少府) served the emperor exclusively, providing him with entertainment and amusements, proper food and clothing, medicine and physical care, valuables and equipment.; .
Local government
The Han Empire, excluding kingdoms and marquessates, was divided, in descending order of size, into political units of provinces (zhou), commanderies (jun), and counties (xian). A county was divided into several districts, the latter composed of a group of hamlets, each containing about a hundred families.
The heads of provinces, whose official title was changed from Inspector to Governor and vice versa several times during Han, were responsible for inspecting several commandery-level and kingdom-level administrations.; . On the basis of their reports, the officials in these local administrations would be promoted, demoted, dismissed or prosecuted by the imperial court.
A governor could take various actions without permission from the imperial court. The lower-ranked inspector had executive powers only during times of crisis, such as raising militias across the commanderies under his jurisdiction to suppress a rebellion.
A commandery consisted of a group of counties, and was headed by an Administrator. He was the top civil and military leader of the commandery and handled defense, lawsuits, seasonal instructions to farmers and recommendations of nominees for office sent annually to the capital in a quota system first established by Emperor Wu.; ; . The head of a large county of about 10,000 households was called a Prefect, while the heads of smaller counties were called Chiefs, and both could be referred to as Magistrates.; . A Magistrate maintained law and order in his county, registered the populace for taxation, mobilized commoners for annual corvée duties, repaired schools and supervised public works.
Kingdoms and marquessates
Kingdoms—roughly the size of commanderies—were ruled exclusively by the emperor's male relatives as semi-autonomous fiefdoms. Before 157 BC some kingdoms were ruled by non-relatives, granted to them in return for their services to Emperor Gaozu. The administration of each kingdom was very similar to that of the central government.; ; . Although the emperor appointed the Chancellor of each kingdom, kings appointed all the remaining civil officials in their fiefs.; .
However, in 145 BC, after several insurrections by the kings, Emperor Jing removed the kings' rights to appoint officials whose salaries were higher than 400 bushels. The Imperial Counselors and Nine Ministers (excluding the Minister Coachman) of every kingdom were abolished, although the Chancellor was still appointed by the central government.
With these reforms, kings were reduced to being nominal heads of their fiefs, gaining a personal income from only a portion of the taxes collected in their kingdom. Similarly, the officials in the administrative staff of a full marquess's fief were appointed by the central government. A marquess's Chancellor was ranked as the equivalent of a county Prefect. Like a king, the marquess collected a portion of the tax revenues in his fief as personal income.; .
thumb|upright|An Eastern-Han pottery soldier, with a now-faded coating of paint, is missing a weapon.
Military
At the beginning of the Han dynasty, every male commoner aged twenty-three was liable for conscription into the military. The minimum age for the military draft was reduced to twenty after Emperor Zhao's (r. 87–74 BC) reign. Conscripted soldiers underwent one year of training and one year of service as non-professional soldiers. The year of training was served in one of three branches of the armed forces: infantry, cavalry or navy.; . The year of active service was served either on the frontier, in a king's court or under the Minister of the Guards in the capital. A small professional (paid) standing army was stationed near the capital.
During the Eastern Han, conscription could be avoided if one paid a commutable tax. The Eastern Han court favored the recruitment of a volunteer army. The volunteer army comprised the Southern Army (Nanjun 南軍), while the standing army stationed in and near the capital was the Northern Army (Beijun 北軍). Led by Colonels (Xiaowei 校尉), the Northern Army consisted of five regiments, each composed of several thousand soldiers.; . When central authority collapsed after 189 AD, wealthy landowners, members of the aristocracy/nobility, and regional military-governors relied upon their retainers to act as their own personal troops (buqu 部曲).
During times of war, the volunteer army was increased, and a much larger militia was raised across the country to supplement the Northern Army. In these circumstances, a General (Jiangjun 將軍) led a division, which was divided into regiments led by Colonels and sometimes Majors (Sima 司馬). Regiments were divided into companies and led by Captains. Platoons were the smallest units of soldiers.; .
Economy
Variations in currency
thumb|A wuzhu (五銖) coin issued during the reign of Emperor Wu (r. 141–87 BC), 25.5 mm in diameter
The Han dynasty inherited the ban liang coin type from the Qin. In the beginning of the Han, Emperor Gaozu closed the government mint in favor of private minting of coins. This decision was reversed in 186 BC by his widow Grand Empress Dowager Lü Zhi (d. 180 BC), who abolished private minting. In 182 BC, Lü Zhi issued a bronze coin that was much lighter in weight than previous coins. This caused widespread inflation that was not reduced until 175 BC when Emperor Wen allowed private minters to manufacture coins that were precisely 2.6 g (0.09 oz) in weight.
In 144 BC Emperor Jing abolished private minting in favor of central-government and commandery-level minting; he also introduced a new coin. Emperor Wu introduced another in 120 BC, but a year later he abandoned the ban liangs entirely in favor of the wuzhu (五銖) coin, weighing 3.2 g (0.11 oz). The wuzhu became China's standard coin until the Tang dynasty (618–907 AD). Its use was interrupted briefly by several new currencies introduced during Wang Mang's regime until it was reinstated in 40 AD by Emperor Guangwu.; ; .
Since commandery-issued coins were often of inferior quality and lighter weight, the central government closed the commandery mints and monopolized the issue of coinage in 113 BC. Central-government coinage was initially overseen by the Superintendent of Waterways and Parks, a duty transferred to the Minister of Finance during the Eastern Han.
Taxation and property
Aside from the land tax, which landowners paid as a portion of their crop yield, the poll tax and property taxes were paid in coin cash. The annual poll tax was 120 coins for adult men and women and 20 coins for minors, while merchants were required to pay a higher rate of 240 coins. The poll tax stimulated a money economy that necessitated the minting of over 28,000,000,000 coins from 118 BC to 5 AD, an average of about 220,000,000 coins a year.
The widespread circulation of coin cash allowed successful merchants to invest money in land, empowering the very social class the government attempted to suppress through heavy commercial and property taxes. Emperor Wu even enacted laws which banned registered merchants from owning land, yet powerful merchants were able to avoid registration and own large tracts of land.; .
The small landowner-cultivators formed the majority of the Han tax base; this revenue was threatened during the latter half of Eastern Han when many peasants fell into debt and were forced to work as farming tenants for wealthy landlords.; ; . The Han government enacted reforms in order to keep small landowner-cultivators out of debt and on their own farms. These reforms included reducing taxes, temporary remissions of taxes, granting loans and providing landless peasants temporary lodging and work in agricultural colonies until they could recover from their debts.; .
In 168 BC, the land tax rate was reduced from one-fifteenth of a farming household's crop yield to one-thirtieth, and later to one-hundredth of the crop yield during the last decades of the dynasty. The consequent loss of government revenue was compensated for by increasing property taxes.
The labor tax took the form of conscripted labor for one month per year, which was imposed upon male commoners aged fifteen to fifty-six. This could be avoided in Eastern Han with a commutable tax, since hired labor became more popular.; .
Private manufacture and government monopolies
thumb|A Han-dynasty iron Ji (halberd) and iron dagger
In the early Western Han, a wealthy salt or iron industrialist, whether a semi-autonomous king or wealthy merchant, could boast funds that rivaled the imperial treasury and amass a peasant workforce of over a thousand. This kept many peasants away from their farms and denied the government a significant portion of its land tax revenue.; . To eliminate the influence of such private entrepreneurs, Emperor Wu nationalized the salt and iron industries in 117 BC and allowed many of the former industrialists to become officials administering the monopolies.; ; . By Eastern Han times, the central government monopolies were repealed in favor of production by commandery and county administrations, as well as private businessmen.; .
Liquor was another profitable private industry nationalized by the central government in 98 BC. However, this was repealed in 81 BC and a property tax rate of two coins for every 0.2 L (0.05 gallons) was levied for those who traded it privately.; . By 110 BC Emperor Wu also interfered with the profitable trade in grain when he eliminated speculation by selling government-stored grain at a lower price than demanded by merchants. Apart from Emperor Ming's creation of a short-lived Office for Price Adjustment and Stabilization, which was abolished in 68 AD, central-government price control regulations were largely absent during the Eastern Han.
Science, technology, and engineering
thumb|The ruins of a Han-dynasty watchtower made of rammed earth at Dunhuang, Gansu province, the eastern edge of the Silk Road
The Han dynasty was a unique period in the development of premodern Chinese science and technology, comparable to the level of scientific and technological growth during the Song dynasty (960–1279).
Writing materials
In the 1st millennium BC, typical ancient Chinese writing materials were bronzewares, animal bones, and bamboo slips or wooden boards. By the beginning of the Han dynasty, the chief writing materials were clay tablets, silk cloth, and rolled scrolls made from bamboo strips sewn together with hempen string; these were passed through drilled holes and secured with clay stamps.; ; .
The oldest known Chinese piece of hard, hempen wrapping paper dates to the 2nd century BC. The standard papermaking process was invented by Cai Lun (50–121 AD) in 105 AD.; ; . The oldest known surviving piece of paper with writing on it was found in the ruins of a Han watchtower that had been abandoned in 110 AD, in Inner Mongolia.
Metallurgy and agriculture
Evidence suggests that blast furnaces, which convert raw iron ore into pig iron that can be remelted in a cupola furnace to produce cast iron by means of a cold blast and hot blast, were operational in China by the late Spring and Autumn period (722–481 BC). The bloomery was nonexistent in ancient China; however, the Han-era Chinese produced wrought iron by injecting excess oxygen into a furnace and causing decarburization. Cast iron and pig iron could be converted into wrought iron and steel using a fining process.
thumb|upright|left|A pair of Eastern-Han iron scissors
The Han-era Chinese used bronze and iron to make a range of weapons, culinary tools, carpenters' tools and domestic wares. A significant product of these improved iron-smelting techniques was the manufacture of new agricultural tools. The three-legged iron seed drill, invented by the 2nd century BC, enabled farmers to plant crops carefully in rows instead of casting seeds out by hand. The heavy moldboard iron plow, also invented during the Han dynasty, required only one man to control it and two oxen to pull it. It had three plowshares, a seed box for the drills, and a tool that turned down the soil, and could sow roughly 45,730 m2 (11.3 acres) of land in a single day.
To protect crops from wind and drought, the Grain Intendant Zhao Guo (趙過) created the alternating fields system (daitianfa 代田法) during Emperor Wu's reign. This system switched the positions of furrows and ridges between growing seasons. Once experiments with this system yielded successful results, the government officially sponsored it and encouraged peasants to use it. Han farmers also used the pit field system (aotian 凹田) for growing crops, which involved heavily fertilized pits that did not require plows or oxen and could be placed on sloping terrain.; . In southern and small parts of central Han-era China, paddy fields were chiefly used to grow rice, while farmers along the Huai River used transplantation methods of rice production.
Structural engineering
thumb|A stone-carved pillar-gate, or que (闕), 6 m (20 ft) in total height, located at the tomb of Gao Yi in Ya'an, Sichuan province, Eastern Han dynasty
Timber was the chief building material during the Han dynasty; it was used to build palace halls, multi-story residential towers and halls and single-story houses. Because wood decays rapidly, the only remaining evidence of Han wooden architecture is a collection of scattered ceramic roof tiles.; . The oldest surviving wooden halls in China date to the Tang dynasty (618–907 AD). Architectural historian Robert L. Thorp points out the scarcity of Han-era archaeological remains, and claims that often unreliable Han-era literary and artistic sources are used by historians for clues about lost Han architecture.
Though Han wooden structures decayed, some Han-dynasty ruins made of brick, stone, and rammed earth remain intact. This includes stone pillar-gates, brick tomb chambers, rammed-earth city walls, rammed-earth and brick beacon towers, rammed-earth sections of the Great Wall, rammed-earth platforms where elevated halls once stood, and two rammed-earth castles in Gansu.; ; ; see also ; see . The ruins of rammed-earth walls that once surrounded the capitals Chang'an and Luoyang still stand, along with their drainage systems of brick arches, ditches, and ceramic water pipes. Monumental stone pillar-gates, twenty-nine of which survive from the Han period, formed entrances of walled enclosures at shrine and tomb sites.; . These pillars feature artistic imitations of wooden and ceramic building components such as roof tiles, eaves, and balustrades.; .
The courtyard house is the most common type of home portrayed in Han artwork. Ceramic architectural models of buildings, like houses and towers, were found in Han tombs, perhaps to provide lodging for the dead in the afterlife. These provide valuable clues about lost wooden architecture. The artistic designs found on ceramic roof tiles of tower models are in some cases exact matches to Han roof tiles found at archaeological sites.
thumb|left|An Eastern-Han vaulted tomb chamber at Luoyang made of small bricks
Over ten Han-era underground tombs have been found, many of them featuring archways, vaulted chambers, and domed roofs. Underground vaults and domes did not require buttress supports since they were held in place by earthen pits. The use of brick vaults and domes in aboveground Han structures is unknown.
From Han literary sources, it is known that wooden-trestle beam bridges, arch bridges, simple suspension bridges, and floating pontoon bridges existed in Han China. However, there are only two known references to arch bridges in Han literature, and only a single Han relief sculpture in Sichuan depicts an arch bridge.
Underground mine shafts, some reaching considerable depths, were created for the extraction of metal ores. Borehole drilling and derricks were used to lift brine to iron pans where it was distilled into salt. The distillation furnaces were heated by natural gas funneled to the surface through bamboo pipelines. Dangerous amounts of additional gas were siphoned off via carburetor chambers and exhaust pipes.
Mechanical and hydraulic engineering
thumb|A Han-dynasty pottery model of two men operating a winnowing machine with a crank handle and a tilt hammer used to pound grain.
Chinese scholars and officials traditionally considered scientific and engineering pursuits to be the domain of artisans and craftsmen (gongren 工人), far beneath the ideal Confucian literary gentleman. Accordingly, evidence of Han-era mechanical engineering comes largely from the choice observational writings of sometimes disinterested Confucian scholars. Professional artisan-engineers (jiang 匠) did not leave behind detailed records of their work.; see also . Han scholars, who often had little or no expertise in mechanical engineering, sometimes provided insufficient information on the various technologies they described. Nevertheless, some Han literary sources provide crucial information. For example, in 15 BC the philosopher Yang Xiong described the invention of the belt drive for a quilling machine, which was of great importance to early textile manufacturing. The inventions of the artisan-engineer Ding Huan (丁緩) are mentioned in the Miscellaneous Notes on the Western Capital. Around 180 AD, Ding created a manually operated rotary fan used for air conditioning within palace buildings. Ding also used gimbals as pivotal supports for one of his incense burners and invented the world's first known zoetrope lamp.
Modern archaeology has led to the discovery of Han artwork portraying inventions which were otherwise absent in Han literary sources. As observed in Han miniature tomb models, but not in literary sources, the crank handle was used to operate the fans of winnowing machines that separated grain from chaff. The odometer cart, invented during Han, measured journey lengths, using mechanical figures banging drums and gongs to indicate each distance traveled. This invention is depicted in Han artwork by the 2nd century AD, yet detailed written descriptions were not offered until the 3rd century AD. Modern archaeologists have also unearthed specimens of devices used during the Han dynasty, for example a pair of sliding metal calipers used by craftsmen for making minute measurements. These calipers contain inscriptions of the exact day and year they were manufactured. These tools are not mentioned in any Han literary sources.
thumb|upright|left|A modern replica of Zhang Heng's seismometer
The waterwheel appeared in Chinese records during the Han. As mentioned by Huan Tan in about 20 AD, waterwheels were used to turn gears that lifted iron trip hammers used in pounding, threshing and polishing grain. However, there is insufficient evidence for the watermill in China until about the 5th century. The Nanyang Commandery Administrator Du Shi (d. 38 AD) created a waterwheel-powered reciprocator that worked the bellows for the smelting of iron. Waterwheels were also used to power chain pumps that lifted water to raised irrigation ditches. The chain pump was first mentioned in China by the philosopher Wang Chong in his 1st-century-AD Balanced Discourse.
The armillary sphere, a three-dimensional representation of the movements in the celestial sphere, was invented in Han China by the 1st century BC. Using a water clock, waterwheel and a series of gears, the Court Astronomer Zhang Heng (78–139 AD) was able to mechanically rotate his metal-ringed armillary sphere.; ; ; . To address the problem of slowed timekeeping in the pressure head of the inflow water clock, Zhang was the first in China to install an additional tank between the reservoir and inflow vessel.; . Zhang also invented a device he termed an "earthquake weathervane" (houfeng didong yi 候風地動儀), which the British scientist Joseph Needham described as "the ancestor of all seismographs".Cited in . This device was able to detect the exact cardinal or ordinal direction of earthquakes from hundreds of kilometers away.; ; . It employed an inverted pendulum that, when disturbed by ground tremors, would trigger a set of gears that dropped a metal ball from one of eight dragon mouths (representing all eight directions) into a metal toad's mouth. The account of this device in the Book of the Later Han (Hou Han shu 後漢書) describes how, on one occasion, one of the metal balls was triggered without any of the observers feeling a disturbance. Several days later, a messenger arrived bearing news that an earthquake had struck in Longxi Commandery (in modern Gansu Province), the direction the device had indicated, which forced the officials at court to admit the efficacy of Zhang's device.
Mathematics
Three Han mathematical treatises still exist. These are the Book on Numbers and Computation (Suan shu shu 算數書), the Arithmetical Classic of the Gnomon and the Circular Paths of Heaven (Zhoubi Suanjing 周髀算經) and the Nine Chapters on the Mathematical Art (Jiu zhang suan shu 九章算術). Han-era mathematical achievements include solving problems with right-angle triangles, square roots, cube roots, and matrix methods, finding more accurate approximations for pi, providing mathematical proof of the Pythagorean theorem, use of the decimal fraction, Gaussian elimination to solve linear equations, and continued fractions to find the roots of equations.
One of the Han's greatest mathematical advancements was the world's first use of negative numbers. Negative numbers first appeared in the Nine Chapters on the Mathematical Art as black counting rods, where positive numbers were represented by red counting rods. Negative numbers were later used by the Greek mathematician Diophantus (c. 275 AD) and in the Bakhshali manuscript of Gandhara, South Asia (c. 7th century AD), but were not widely accepted in Europe until the 16th century AD.
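The "Gaussian elimination" credited to the Nine Chapters is its fangcheng ("rectangular array") procedure, carried out on a counting board with red rods for positive and black rods for negative quantities. The following is a minimal sketch of that elimination-and-back-substitution idea in modern form; the 3x3 system solved here is an illustrative example, not a problem quoted from the Han text.

```python
from fractions import Fraction

def fangcheng_solve(a, b):
    """Solve A.x = b by column elimination and back-substitution, the procedure
    the Nine Chapters applies to rectangular arrays of counting rods.
    Exact rational arithmetic stands in for the signed (red/black) rods."""
    n = len(b)
    m = [[Fraction(v) for v in row] + [Fraction(c)] for row, c in zip(a, b)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if m[r][col] != 0)
        m[col], m[pivot] = m[pivot], m[col]          # bring a usable pivot up
        for r in range(col + 1, n):                  # eliminate entries below it
            f = m[r][col] / m[col][col]
            m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    x = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):                   # back-substitute
        s = sum(m[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (m[r][n] - s) / m[r][r]
    return x

# Illustrative system with a negative ("black rod") coefficient; solution is (1, 2, 3).
print(fangcheng_solve([[2, 1, -1], [1, -1, 2], [3, 2, 1]], [1, 5, 10]))
```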
thumb|left|A Han-dynasty era mold for making bronze gear wheels (Shanghai Museum)
The Han applied mathematics to diverse disciplines. In musical tuning, Jing Fang (78–37 BC) realized that 53 perfect fifths approximate 31 octaves while creating a musical scale of 60 tones, calculating the difference as 177147⁄176776 (the same value of 53 equal temperament later discovered by the German mathematician Nicholas Mercator [1620–1687], i.e. 3^53/2^84).
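The near-coincidence Jing Fang exploited can be checked directly: fifty-three just fifths exceed thirty-one octaves by the small ratio 3^53/2^84 (about 3.6 cents), the quantity Mercator later associated with 53-tone equal temperament. A quick numerical check, included here only as an illustration:

```python
from fractions import Fraction

fifty_three_fifths = Fraction(3, 2) ** 53       # stacking 53 just perfect fifths
thirty_one_octaves = Fraction(2, 1) ** 31       # compared against 31 octaves
gap = fifty_three_fifths / thirty_one_octaves   # reduces to 3**53 / 2**84

print(gap == Fraction(3**53, 2**84))            # True
print(float(gap))                               # ~1.00209
print(float(Fraction(177147, 176776)))          # ~1.00210, the value cited above
```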
Astronomy
Mathematics was essential in drafting the astronomical calendar, a lunisolar calendar that used the Sun and Moon as time-markers throughout the year. The ancient Sifen calendar (古四分曆), which measured the tropical year at 365 1/4 days, was replaced in 104 BC with the Taichu calendar (太初曆), which measured the tropical year at 365 385/1539 days and the lunar month at 29 43/81 days. However, Emperor Zhang later reinstated the Sifen calendar.
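The calendar fractions quoted above are easiest to compare as decimals; a small check (the modern mean values given in the comments are approximate):

```python
from fractions import Fraction

sifen_year   = 365 + Fraction(1, 4)        # Sifen calendar: tropical year
taichu_year  = 365 + Fraction(385, 1539)   # Taichu calendar: tropical year
taichu_month = 29 + Fraction(43, 81)       # Taichu calendar: lunar (synodic) month

print(float(sifen_year))    # 365.25
print(float(taichu_year))   # ~365.2502
print(float(taichu_month))  # ~29.5309
# Modern mean values for comparison: ~365.2422 days (tropical year) and ~29.5306 days (synodic month).
```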
Han Chinese astronomers made star catalogues and detailed records of comets that appeared in the night sky, including recording the 12 BC appearance of the comet now known as Halley's comet.
Han-era astronomers adopted a geocentric model of the universe, theorizing that it was shaped like a sphere surrounding the earth in the center.; ; . They assumed that the Sun, Moon, and planets were spherical and not disc-shaped. They also thought that the illumination of the Moon and planets was caused by sunlight, that lunar eclipses occurred when the Earth obstructed sunlight falling onto the Moon, and that a solar eclipse occurred when the Moon obstructed sunlight from reaching the Earth. Although others disagreed with his model, Wang Chong accurately described the water cycle of the evaporation of water into clouds.
Cartography, ships, and vehicles
thumb|left|An early Western-Han silk map found in tomb 3 of Mawangdui, depicting the Kingdom of Changsha and Kingdom of Nanyue in southern China (note: the south direction is oriented at the top).
thumb|right|An Eastern-Han pottery ship model with a steering rudder at the stern and anchor at the bow
Evidence found in Chinese literature, together with archaeological evidence, shows that cartography existed in China before the Han. Some of the earliest Han maps discovered were ink-penned silk maps found amongst the Mawangdui Silk Texts in a 2nd-century-BC tomb. The general Ma Yuan created the world's first known raised-relief map, made from rice, in the 1st century AD. This date could be revised if the tomb of Qin Shi Huang is excavated and the account in the Records of the Grand Historian concerning a model map of the empire proves to be true.
Although the use of the graduated scale and grid reference for maps was not thoroughly described until the published work of Pei Xiu (224–271 AD), there is evidence that in the early 2nd century AD, cartographer Zhang Heng was the first to use scales and grids for maps.
The Han-era Chinese sailed in a variety of ships differing from those of previous eras, such as the tower ship. The junk design was developed and realized during Han. Junks featured a square-ended bow and stern, a flat-bottomed hull or carvel-shaped hull with no keel or sternpost, and solid transverse bulkheads in the place of structural ribs found in Western vessels.; . Moreover, Han ships were the first in the world to be steered using a rudder at the stern, in contrast to the simpler steering oar used for riverine transport, allowing them to sail on the high seas.; ; ; ; ; .
Although ox-carts and chariots were previously used in China, the wheelbarrow was first used in Han China in the 1st century BC.; . Han artwork of horse-drawn chariots shows that the Warring-States-Era heavy wooden yoke placed around a horse's chest was replaced by the softer breast strap. Later, during the Northern Wei (386–534 AD), the fully developed horse collar was invented.
Medicine
thumb|The physical exercise chart; a painting on silk depicting the practice of Qigong Taiji; unearthed in 1973 in Hunan Province, China, from the 2nd-century BC Western Han burial site of Mawangdui, Tomb Number 3.
Han-era medical physicians believed that the human body was subject to the same forces of nature that governed the greater universe, namely the cosmological cycles of yin and yang and the five phases. Each organ of the body was associated with a particular phase. Illness was viewed as a sign that qi or "vital energy" channels leading to a certain organ had been disrupted. Thus, Han-era physicians prescribed medicine that was believed to counteract this imbalance.; ; . For example, since the wood phase was believed to promote the fire phase, medicinal ingredients associated with the wood phase could be used to heal an organ associated with the fire phase. Besides dieting, Han physicians also prescribed moxibustion, acupuncture, and calisthenics as methods of maintaining one's health.; ; ; . When surgery was performed by the physician Hua Tuo (d. 208 AD), he used anesthesia to numb his patients' pain and prescribed a rubbing ointment that allegedly sped the process of healing surgical wounds. Whereas the physician Zhang Zhongjing (c. 150–c. 219 AD) is known to have written the Shanghan lun ("Dissertation on Typhoid Fever"), it is thought that both he and Hua Tuo collaborated in compiling the Shennong Ben Cao Jing medical text.
See also
List of emperors of the Han dynasty
Han Emperors family tree
Battle of Jushi
Campaign against Dong Zhuo
Early Imperial China
Four Commanderies of Han on the northern Korean Peninsula
First Chinese domination (History of Vietnam)
Mawangdui
Shuanggudui
Southward expansion of the Han dynasty
Ten Attendants
Sino-Roman relations
References
Citations
Sources
(an abridgement of Joseph Needham's work)
External links
Han dynasty by Minnesota State University
Han dynasty art with video commentary, Minneapolis Institute of Arts
Early Imperial China: A Working Collection of Resources
"Han Culture," Hanyangling Museum Website
Category:200s BC establishments
Category:206 BC
Category:220 disestablishments
Category:3rd-century BC establishments in China
Category:Former countries in Chinese history
Light-emitting diode
thumb|Parts of an LED. Although unlabeled, the flat bottom surfaces of the anvil and post embedded inside the epoxy act as anchors, to prevent the conductors from being forcefully pulled out via mechanical strain or vibration.
thumb|alt=Modern LED retrofit with E27 screw in base|A bulb-shaped modern retrofit LED lamp with aluminium heat sink, a light diffusing dome and E27 screw base, using a built-in power supply working on mains voltage
A light-emitting diode (LED) is a two-lead semiconductor light source. It is a p–n junction diode that emits light when activated. When a suitable voltage is applied to the leads, electrons are able to recombine with electron holes within the device, releasing energy in the form of photons. This effect is called electroluminescence, and the color of the light (corresponding to the energy of the photon) is determined by the energy band gap of the semiconductor.
An LED is often small in area (less than 1 mm2) and integrated optical components may be used to shape its radiation pattern.
Appearing as practical electronic components in 1962, the earliest LEDs emitted low-intensity infrared light.
Infrared LEDs are still frequently used as transmitting elements in remote-control circuits, such as those in remote controls for a wide variety of consumer electronics.
The first visible-light LEDs were also of low intensity and limited to red. Modern LEDs are available across the visible, ultraviolet, and infrared wavelengths, with very high brightness.
Early LEDs were often used as indicator lamps for electronic devices, replacing small incandescent bulbs. They were soon packaged into numeric readouts in the form of seven-segment displays and were commonly seen in digital clocks.
Recent developments in LEDs permit them to be used in environmental and task lighting. LEDs have many advantages over incandescent light sources, including lower energy consumption, longer lifetime, improved physical robustness, smaller size, and faster switching. Light-emitting diodes are now used in applications as diverse as aviation lighting, automotive headlamps, advertising, general lighting, traffic signals, camera flashes, and lighted wallpaper. LEDs powerful enough for room lighting remain somewhat more expensive, and require more precise current and heat management, than compact fluorescent lamp sources of comparable output. They are, however, significantly more energy efficient and, arguably, have fewer environmental concerns linked to their disposal.
LEDs have allowed new displays and sensors to be developed, while their high switching rates are also used in advanced communications technology.
History
Discoveries and early devices
thumb|Green electroluminescence from a point contact on a crystal of SiC recreates Round's original experiment from 1907.
Electroluminescence as a phenomenon was discovered in 1907 by the British experimenter H. J. Round of Marconi Labs, using a crystal of silicon carbide and a cat's-whisker detector.
Russian inventor Oleg Losev reported creation of the first LED in 1927. His research was distributed in Soviet, German and British scientific journals, but no practical use was made of the discovery for several decades. Kurt Lehovec, Carl Accardo, and Edward Jamgochian explained these first light-emitting diodes in 1951 using an apparatus employing SiC crystals with a current source of battery or pulse generator and with a comparison to a variant, pure, crystal in 1953.
Rubin BraunsteinRubin Braunstein. physics.ucla.edu of the Radio Corporation of America reported on infrared emission from gallium arsenide (GaAs) and other semiconductor alloys in 1955. Braunstein observed infrared emission generated by simple diode structures using gallium antimonide (GaSb), GaAs, indium phosphide (InP), and silicon-germanium (SiGe) alloys at room temperature and at 77 Kelvin.
In 1957, Braunstein further demonstrated that the rudimentary devices could be used for non-radio communication across a short distance. As noted by Kroemer Braunstein "…had set up a simple optical communications link: Music emerging from a record player was used via suitable electronics to modulate the forward current of a GaAs diode. The emitted light was detected by a PbS diode some distance away. This signal was fed into an audio amplifier and played back by a loudspeaker. Intercepting the beam stopped the music. We had a great deal of fun playing with this setup." This setup presaged the use of LEDs for optical communication applications.
thumb|A Texas Instruments SNX-100 GaAs LED contained in a TO-18 transistor metal case.
In September 1961, while working at Texas Instruments in Dallas, Texas, James R. Biard and Gary Pittman discovered near-infrared (900 nm) light emission from a tunnel diode they had constructed on a GaAs substrate. By October 1961, they had demonstrated efficient light emission and signal coupling between a GaAs p-n junction light emitter and an electrically-isolated semiconductor photodetector.W. T. Matzen, Ed., "Semiconductor Single-Crystal Circuit Development," Texas Instruments Inc., Contract No. AF33(616)-6600, Rept. No ASD-TDR-63-281; March, 1963. On August 8, 1962, Biard and Pittman filed a patent titled "Semiconductor Radiant Diode" based on their findings, which described a zinc diffused p–n junction LED with a spaced cathode contact to allow for efficient emission of infrared light under forward bias. After establishing the priority of their work based on engineering notebooks predating submissions from G.E. Labs, RCA Research Labs, IBM Research Labs, Bell Labs, and Lincoln Lab at MIT, the U.S. patent office issued the two inventors the patent for the GaAs infrared (IR) light-emitting diode (U.S. Patent US3293513), the first practical LED. Immediately after filing the patent, Texas Instruments (TI) began a project to manufacture infrared diodes. In October 1962, TI announced the first commercial LED product (the SNX-100), which employed a pure GaAs crystal to emit a 890 nm light output. In October 1963, TI announced the first commercial hemispherical LED, the SNX-110.
The first visible-spectrum (red) LED was developed in 1962 by Nick Holonyak, Jr. while working at General Electric. Holonyak first reported his LED in the journal Applied Physics Letters on December 1, 1962.
M. George Craford, a former graduate student of Holonyak, invented the first yellow LED and improved the brightness of red and red-orange LEDs by a factor of ten in 1972. In 1976, T. P. Pearsall created the first high-brightness, high-efficiency LEDs for optical fiber telecommunications by inventing new semiconductor materials specifically adapted to optical fiber transmission wavelengths.
Initial commercial development
The first commercial LEDs were commonly used as replacements for incandescent and neon indicator lamps, and in seven-segment displays, first in expensive equipment such as laboratory and electronics test equipment, then later in such appliances as TVs, radios, telephones, calculators, as well as watches (see list of signal uses).
Until 1968, visible and infrared LEDs were extremely costly, in the order of US$200 per unit, and so had little practical use.
The Monsanto Company was the first organization to mass-produce visible LEDs, using gallium arsenide phosphide (GaAsP) in 1968 to produce red LEDs suitable for indicators. Hewlett Packard (HP) introduced LEDs in 1968, initially using GaAsP supplied by Monsanto. These red LEDs were bright enough only for use as indicators, as the light output was not enough to illuminate an area. Readouts in calculators were so small that plastic lenses were built over each digit to make them legible. Later, other colors became widely available and appeared in appliances and equipment. In the 1970s commercially successful LED devices at less than five cents each were produced by Fairchild Optoelectronics. These devices employed compound semiconductor chips fabricated with the planar process invented by Dr. Jean Hoerni at Fairchild Semiconductor.Patent number: 3025589 Retrieved May 17, 2013 The combination of planar processing for chip fabrication and innovative packaging methods enabled the team at Fairchild led by optoelectronics pioneer Thomas Brandt to achieve the needed cost reductions. These methods continue to be used by LED producers.
thumb|LED display of a TI-30 scientific calculator (ca. 1978), which uses plastic lenses to increase the visible digit size
Most LEDs were made in the very common 5 mm T1¾ and 3 mm T1 packages, but with rising power output, it has grown increasingly necessary to shed excess heat to maintain reliability,LED Thermal Management. Lunaraccents.com. Retrieved on March 16, 2012. so more complex packages have been adapted for efficient heat dissipation. Packages for state-of-the-art high-power LEDs bear little resemblance to early LEDs.
Blue LED
Blue LEDs were first developed by Herbert Paul Maruska at RCA in 1972 using gallium nitride (GaN) on a sapphire substrate. SiC types were first commercially sold in the United States by Cree in 1989. However, neither of these initial blue LEDs was very bright.
The first high-brightness blue LED was demonstrated by Shuji Nakamura of Nichia Corporation in 1994 and was based on InGaN. In parallel, Isamu Akasaki and Hiroshi Amano in Nagoya were working on developing the important GaN nucleation on sapphire substrates and the demonstration of p-type doping of GaN. Nakamura, Akasaki, and Amano were awarded the 2014 Nobel prize in physics for their work. In 1995, Alberto Barbieri at the Cardiff University Laboratory (GB) investigated the efficiency and reliability of high-brightness LEDs and demonstrated a "transparent contact" LED using indium tin oxide (ITO) on (AlGaInP/GaAs).
In 2001 and 2002, processes for growing gallium nitride (GaN) LEDs on silicon were successfully demonstrated. In January 2012, Osram demonstrated high-power InGaN LEDs grown on silicon substrates commercially.. www.osram.de, January 12, 2012
White LEDs and the Illumination breakthrough
The attainment of high efficiency in blue LEDs was quickly followed by the development of the first white LED. In this device a cerium-doped yttrium aluminium garnet (known as "YAG") phosphor coating on the emitter absorbs some of the blue emission and produces yellow light through fluorescence. The combination of that yellow with the remaining blue light appears white to the eye. However, using different phosphors (fluorescent materials) it also became possible to instead produce green and red light through fluorescence. The resulting mixture of red, green and blue is not only perceived by humans as white light but is superior for illumination in terms of color rendering, since one cannot appreciate the color of red or green objects illuminated only by the yellow (and remaining blue) wavelengths from the YAG phosphor.
thumb|320px|Illustration of Haitz's law, showing improvement in light output per LED over time, with a logarithmic scale on the vertical axis
The first white LEDs were expensive and inefficient. However, the light output of LEDs has increased exponentially, with a doubling occurring approximately every 36 months since the 1960s (similar to Moore's law). This trend is generally attributed to the parallel development of other semiconductor technologies and advances in optics and materials science and has been called Haitz's law after Dr. Roland Haitz.
The light output and efficiency of blue and near-ultraviolet LEDs rose as the cost of reliable devices fell: this led to the use of (relatively) high-power white-light LEDs for the purpose of illumination which are replacing incandescent and fluorescent lighting.
Experimental white LEDs have been demonstrated to produce over 300 lumens per watt of electricity; some can last up to 100,000 hours.Press Release, Official Nobel Prize website, 7 October 2014 Compared to incandescent bulbs, this is not only a huge increase in electrical efficiency but – over time – a similar or lower cost per bulb.https://www.eia.gov/todayinenergy/detail.cfm?id=15471 2016-06-11
Working principle
thumb|300px|The inner workings of an LED, showing circuit (top) and band diagram (bottom)
A P-N junction can convert absorbed light energy into a proportional electric current. The same process is reversed here (i.e. the P-N junction emits light when electrical energy is applied to it). This phenomenon is generally called electroluminescence, which can be defined as the emission of light from a semiconductor under the influence of an electric field. The charge carriers recombine in a forward-biased P-N junction as the electrons cross from the N-region and recombine with the holes in the P-region. Free electrons are in the conduction band of energy levels, while holes are in the valence energy band. Thus the energy level of the holes is lower than the energy level of the electrons. When an electron and a hole recombine, part of this energy difference must be given up, and it is released in the form of heat and light.
In silicon and germanium diodes, the electrons dissipate this energy as heat, but in gallium arsenide phosphide (GaAsP) and gallium phosphide (GaP) semiconductors they dissipate it by emitting photons. If the semiconductor is translucent, the junction becomes a source of light, making it a light-emitting diode; when the junction is reverse biased, however, the LED produces no light and may be damaged if the reverse voltage is too large.
Technology
thumb|300px|I-V diagram for a diode. An LED will begin to emit light when more than 2 or 3 volts is applied to it. The reverse bias region uses a different vertical scale from the forward bias region, in order to show that the leakage current is nearly constant with voltage until breakdown occurs. In forward bias, the current is small but increases exponentially with voltage.
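The exponential forward characteristic noted in the caption follows the ordinary Shockley diode relation. A minimal sketch; the saturation current and ideality factor below are arbitrary placeholders chosen so that the curve turns on near 2 V, not parameters of any particular LED:

```python
import math

def diode_current(v, i_s=2e-18, n=2.0, t=300.0):
    """Shockley diode equation: I = I_s * (exp(V / (n*V_T)) - 1),
    with thermal voltage V_T = k*T/q (~25.9 mV at 300 K)."""
    k_over_q = 8.617e-5                      # Boltzmann constant / electron charge, in V/K
    v_t = k_over_q * t
    return i_s * math.expm1(v / (n * v_t))

# Current grows roughly tenfold for every ~0.12 V of extra forward voltage here.
for v in (1.6, 1.7, 1.8, 1.9, 2.0):
    print(f"{v:.1f} V -> {diode_current(v):.2e} A")
```

Real devices depart from this ideal exponential at higher currents because of series resistance and internal heating.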
Physics
The LED consists of a chip of semiconducting material doped with impurities to create a p-n junction. As in other diodes, current flows easily from the p-side, or anode, to the n-side, or cathode, but not in the reverse direction. Charge-carriers—electrons and holes—flow into the junction from electrodes with different voltages. When an electron meets a hole, it falls into a lower energy level and releases energy in the form of a photon.
The wavelength of the light emitted, and thus its color, depends on the band gap energy of the materials forming the p-n junction. In silicon or germanium diodes, the electrons and holes usually recombine by a non-radiative transition, which produces no optical emission, because these are indirect band gap materials. The materials used for the LED have a direct band gap with energies corresponding to near-infrared, visible, or near-ultraviolet light.
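The link between band gap and color is the photon-energy relation λ = hc/E. A small sketch; the band-gap figures below are rough, typical room-temperature values given only for illustration:

```python
H_C_EV_NM = 1239.84  # Planck constant times speed of light, expressed in eV*nm

def peak_wavelength_nm(band_gap_ev):
    """Approximate peak emission wavelength for a direct-gap emitter."""
    return H_C_EV_NM / band_gap_ev

for material, eg in [("GaAs (infrared)", 1.42),
                     ("AlGaInP (red)", 1.9),
                     ("InGaN (blue)", 2.7)]:
    print(f"{material}: band gap {eg} eV -> ~{peak_wavelength_nm(eg):.0f} nm")
```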
LED development began with infrared and red devices made with gallium arsenide. Advances in materials science have enabled making devices with ever-shorter wavelengths, emitting light in a variety of colors.
LEDs are usually built on an n-type substrate, with an electrode attached to the p-type layer deposited on its surface. P-type substrates, while less common, occur as well. Many commercial LEDs, especially GaN/InGaN, also use sapphire substrate.
Most materials used for LED production have very high refractive indices. This means that much of the light will be reflected back into the material at the material/air surface interface. Thus, light extraction in LEDs is an important aspect of LED production, subject to much research and development.
Refractive index
thumb|300px|Idealized example of light emission cones in a semiconductor, for a single point-source emission zone. The left illustration is for a fully translucent wafer, while the right illustration shows the half-cones formed when the bottom layer is fully opaque. The light is actually emitted equally in all directions from the point-source, so the areas between the cones show the large amount of trapped light energy that is wasted as heat.
thumb|300px|
The light emission cones of a real LED wafer are far more complex than a single point-source light emission. The light emission zone is typically a two-dimensional plane between the wafers. Every atom across this plane has an individual set of emission cones.
Drawing the billions of overlapping cones is impossible, so this is a simplified diagram showing the extents of all the emission cones combined. The larger side cones are clipped to show the interior features and reduce image complexity; they would extend to the opposite edges of the two-dimensional emission plane.
Bare uncoated semiconductors such as silicon exhibit a very high refractive index relative to open air, which prevents passage of photons arriving at sharp angles relative to the air-contacting surface of the semiconductor due to total internal reflection. This property affects both the light-emission efficiency of LEDs as well as the light-absorption efficiency of photovoltaic cells. The refractive index of silicon is 3.96 (at 590 nm), while air is 1.0002926.Refraction — Snell's Law. Interactagram.com. Retrieved on March 16, 2012.
In general, a flat-surface uncoated LED semiconductor chip will emit light only perpendicular to the semiconductor's surface, and a few degrees to the side, in a cone shape referred to as the light cone, cone of light,Lipták, Bela G. (2005) Instrument Engineers' Handbook: Process control and optimization, CRC Press, ISBN 0-8493-1081-4 p. 537, "cone of light" in context of optical fibers or the escape cone.Mueller, Gerd (2000) Electroluminescence I, Academic Press, ISBN 0-12-752173-9, p. 67, "escape cone of light" from semiconductor, illustrations of light cones on p. 69 The maximum angle of incidence is referred to as the critical angle. When this angle is exceeded, photons no longer escape the semiconductor but are instead reflected internally inside the semiconductor crystal as if it were a mirror.
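The critical angle, and the share of isotropically emitted light that falls inside one escape cone, follow from Snell's law. A sketch using the silicon and air indices quoted above; the second example (a chip of index 3.5 inside an encapsulant of index 1.5) is an illustrative assumption, and Fresnel losses inside the cone are ignored:

```python
import math

def escape_cone(n_inside, n_outside=1.0003):
    """Return the critical angle (degrees) and the solid-angle fraction of an
    isotropic point source that lies within one escape cone."""
    theta_c = math.asin(n_outside / n_inside)
    fraction = 0.5 * (1.0 - math.cos(theta_c))   # 2*pi*(1 - cos(theta_c)) / (4*pi)
    return math.degrees(theta_c), fraction

print(escape_cone(3.96))       # bare silicon into air: ~14.6 degrees, ~1.6% per cone
print(escape_cone(3.5, 1.5))   # high-index chip into an epoxy dome: a noticeably wider cone
```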
Internal reflections can escape through other crystalline faces if the incidence angle is low enough and the crystal is sufficiently transparent to not re-absorb the photon emission. But for a simple square LED with 90-degree angled surfaces on all sides, the faces all act as equal angle mirrors. In this case, most of the light can not escape and is lost as waste heat in the crystal.
A convoluted chip surface with angled facets similar to a jewel or fresnel lens can increase light output by allowing light to be emitted perpendicular to the chip surface while far to the sides of the photon emission point.
The ideal shape of a semiconductor with maximum light output would be a microsphere with the photon emission occurring at the exact center, with electrodes penetrating to the center to contact at the emission point. All light rays emanating from the center would be perpendicular to the entire surface of the sphere, resulting in no internal reflections. A hemispherical semiconductor would also work, with the flat back-surface serving as a mirror to back-scattered photons.Dakin, John and Brown, Robert G. W. (eds.) Handbook of optoelectronics, Volume 2, Taylor & Francis, 2006 ISBN 0-7503-0646-7 p. 356, "Die shaping is a step towards the ideal solution, that of a point light source at the center of a spherical semiconductor die."
Transition coatings
After the doping of the wafer, it is cut apart into individual dies. Each die is commonly called a chip.
Many LED semiconductor chips are encapsulated or potted in clear or colored molded plastic shells. The plastic shell has three purposes:
Mounting the semiconductor chip in devices is easier to accomplish.
The tiny fragile electrical wiring is physically supported and protected from damage.
The plastic acts as a refractive intermediary between the relatively high-index semiconductor and low-index open air.Schubert, E. Fred (2006) Light-emitting diodes, Cambridge University Press, ISBN 0-521-86538-7 p. 97, "Epoxy Encapsulants", "The light extraction efficiency can be enhanced by using dome-shaped encapsulants with a large refractive index."
The third feature helps to boost the light emission from the semiconductor by acting as a diffusing lens, allowing light to be emitted at a much higher angle of incidence from the light cone than the bare chip is able to emit alone.
Efficiency and operational parameters
Typical indicator LEDs are designed to operate with no more than 30–60 milliwatts (mW) of electrical power. Around 1999, Philips Lumileds introduced power LEDs capable of continuous use at one watt. These LEDs used much larger semiconductor die sizes to handle the large power inputs. Also, the semiconductor dies were mounted onto metal slugs to allow for heat removal from the LED die.
One of the key advantages of LED-based lighting sources is high luminous efficacy. White LEDs quickly matched and overtook the efficacy of standard incandescent lighting systems. In 2002, Lumileds made five-watt LEDs available with luminous efficacy of 18–22 lumens per watt (lm/W). For comparison, a conventional incandescent light bulb of 60–100 watts emits around 15 lm/W, and standard fluorescent lights emit up to 100 lm/W.
Philips achieved the following efficacies for each color. The efficiency values show the physics – light power out per electrical power in. The lumen-per-watt efficacy value includes characteristics of the human eye and is derived using the luminosity function.
Color        Wavelength range (nm)   Typical efficiency coefficient   Typical efficacy (lm/W)
Red          620 < λ < 645           0.39                             72
Red-orange   610 < λ < 620           0.29                             98
Green        520 < λ < 550           0.15                             93
Cyan         490 < λ < 520           0.26                             75
Blue         460 < λ < 490           0.35                             37
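The two right-hand columns are related through the luminous efficacy of radiation: multiplying the efficiency coefficient by 683 lm/W and by the eye's photopic sensitivity V(λ) at the emission wavelength roughly reproduces the red and green figures. A sketch; the wavelengths and V(λ) values below are rounded interpolations chosen for illustration, not data from the table's source:

```python
# Approximate photopic luminosity function values V(lambda), rounded.
V = {627: 0.27, 535: 0.91, 475: 0.13}
MAX_LUMINOUS_EFFICACY = 683.0   # lm per optical watt at 555 nm

def luminous_efficacy(wall_plug_efficiency, wavelength_nm):
    """Electrical luminous efficacy (lm/W) = efficiency * 683 lm/W * V(lambda)."""
    return wall_plug_efficiency * MAX_LUMINOUS_EFFICACY * V[wavelength_nm]

print(round(luminous_efficacy(0.39, 627)))  # red: ~72 lm/W, matching the table
print(round(luminous_efficacy(0.15, 535)))  # green: ~93 lm/W, matching the table
```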
In September 2003, a new type of blue LED was demonstrated by Cree that consumes 24 mW at 20 milliamperes (mA). This produced a commercially packaged white light giving 65 lm/W at 20 mA, becoming the brightest white LED commercially available at the time, and more than four times as efficient as standard incandescents. In 2006, they demonstrated a prototype with a record white LED luminous efficacy of 131 lm/W at 20 mA. Nichia Corporation has developed a white LED with luminous efficacy of 150 lm/W at a forward current of 20 mA. Cree's XLamp XM-L LEDs, commercially available in 2011, produce 100 lm/W at their full power of 10 W, and up to 160 lm/W at around 2 W input power. In 2012, Cree announced a white LED giving 254 lm/W,"Cree Sets New Record for White LED Efficiency", Tech-On, April 23, 2012. and 303 lm/W in March 2014."Cree First to Break 300 Lumens-Per-Watt Barrier", Cree news
Practical general lighting needs high-power LEDs, of one watt or more. Typical operating currents for such devices begin at 350 mA.
These efficiencies are for the light-emitting diode only, held at low temperature in a lab. Since LEDs installed in real fixtures operate at higher temperature and with driver losses, real-world efficiencies are much lower. United States Department of Energy (DOE) testing of commercial LED lamps designed to replace incandescent lamps or CFLs showed that average efficacy was still about 46 lm/W in 2009 (tested performance ranged from 17 lm/W to 79 lm/W).
Efficiency droop
Efficiency droop is the decrease in luminous efficiency of LEDs as the electric current increases above tens of milliamperes.
This effect was initially theorized to be related to elevated temperatures. Scientists showed the opposite to be true: although the life of an LED is shortened, the efficiency droop is actually less severe at elevated temperatures.Identifying the Causes of LED Efficiency Droop, By Steven Keeping, Digi-Key Corporation Tech Zone The mechanism causing efficiency droop was identified in 2007 as Auger recombination, a finding initially met with a mixed reaction. In 2013, a study confirmed Auger recombination as the cause of efficiency droop.
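Droop is commonly analyzed with the so-called ABC recombination model, in which defect-related (Shockley-Read-Hall), radiative, and Auger recombination scale with carrier density n as n, n squared and n cubed respectively, so internal quantum efficiency falls once the cubic Auger term dominates. A sketch; the A, B and C coefficients are generic order-of-magnitude placeholders, not measured values for any device:

```python
def internal_quantum_efficiency(n, a=1e7, b=1e-10, c=1e-29):
    """ABC model: IQE = B*n^2 / (A*n + B*n^2 + C*n^3), with n in cm^-3.
    A (1/s), B (cm^3/s) and C (cm^6/s) are placeholder coefficients."""
    radiative = b * n ** 2
    total = a * n + radiative + c * n ** 3
    return radiative / total

for n in (1e17, 1e18, 1e19):
    print(f"n = {n:.0e} cm^-3 -> IQE ~ {internal_quantum_efficiency(n):.2f}")
```

With these placeholder coefficients the efficiency peaks at an intermediate carrier density and then falls again, which is the droop behaviour the studies above set out to explain.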
In addition to being less efficient, operating LEDs at higher electric currents creates higher heat levels which compromise the lifetime of the LED. Because of this increased heating at higher currents, high-brightness LEDs have an industry standard of operating at only 350 mA, which is a compromise between light output, efficiency, and longevity.Stevenson, Richard (August 2009) The LED’s Dark Secret: Solid-state lighting won't supplant the lightbulb until it can overcome the mysterious malady known as droop. IEEE SpectrumThe LED's dark secret. EnergyDaily. Retrieved on March 16, 2012.Smart Lighting: New LED Drops The 'Droop'. Sciencedaily.com (January 13, 2009). Retrieved on March 16, 2012.
Possible solutions
Instead of increasing current levels, luminance is usually increased by combining multiple LEDs in one bulb. Solving the problem of efficiency droop would mean that household LED light bulbs would need fewer LEDs, which would significantly reduce costs.
Researchers at the U.S. Naval Research Laboratory have found a way to lessen the efficiency droop. They found that the droop arises from non-radiative Auger recombination of the injected carriers. They created quantum wells with a soft confinement potential to lessen the non-radiative Auger processes.A Roadmap to Efficient Green-Blue-Ultraviolet Light-Emitting Diodes, U.S. Naval Research Laboratory, 19 February 2014, Donna McKinney
Researchers at Taiwan National Central University and Epistar Corp are developing a way to lessen the efficiency droop by using ceramic aluminium nitride (AlN) substrates, which are more thermally conductive than the commercially used sapphire. The higher thermal conductivity reduces self-heating effects.Enabling high-voltage InGaN LED operation with ceramic substrate, Semiconductor Today, 11 February 2014, Mike Cooke
Lifetime and failure
Solid-state devices such as LEDs are subject to very limited wear and tear if operated at low currents and at low temperatures. Typical lifetimes quoted are 25,000 to 100,000 hours, but heat and current settings can extend or shorten this time significantly., US Department of Energy
The most common symptom of LED (and diode laser) failure is the gradual lowering of light output and loss of efficiency. Sudden failures, although rare, can also occur. Early red LEDs were notable for their short service life. With the development of high-power LEDs, the devices are subjected to higher junction temperatures and higher current densities than traditional devices. This causes stress on the material and may cause early light-output degradation. To quantitatively classify useful lifetime in a standardized manner it has been suggested to use L70 or L50, which are the runtimes (typically given in thousands of hours) at which a given LED reaches 70% and 50% of initial light output, respectively.
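L70 and L50 figures are usually projections rather than direct measurements: a lumen-maintenance curve is fitted (often to an exponential decay, as in IES TM-21-style extrapolation) and extended to the 70% or 50% point. A minimal sketch of the idea; the decay rate is an arbitrary placeholder, and real projection methods cap how far the fit may be extrapolated:

```python
import math

def hours_to_level(decay_per_1000h, level=0.7):
    """Hours until relative output falls to `level`, assuming exponential
    lumen maintenance L(t) = exp(-k*t)."""
    k = decay_per_1000h / 1000.0
    return math.log(1.0 / level) / k

# Placeholder: 0.4% relative light loss per 1,000 hours of operation.
print(round(hours_to_level(0.004)))        # L70: ~89,000 hours
print(round(hours_to_level(0.004, 0.5)))   # L50: ~173,000 hours
```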
Whereas in most previous sources of light (incandescent lamps, discharge lamps, and those that burn combustible fuel, e.g. candles and oil lamps) the light results from heat, LEDs only operate if they are kept cool enough. The manufacturer commonly specifies a maximum junction temperature of 125 or 150 °C, and lower temperatures are advisable in the interests of long life. At these temperatures, relatively little heat is lost by radiation, which means that the light beam generated by an LED is cool.
The waste heat in a high-power LED (which as of 2015 can be less than half the power that it consumes) is conveyed by conduction through the substrate and package of the LED to a heat sink, which gives up the heat to the ambient air by convection. Careful thermal design is, therefore, essential, taking into account the thermal resistances of the LED’s package, the heat sink and the interface between the two. Medium-power LEDs are often designed to be soldered directly to a printed circuit board that contains a thermally conductive metal layer. High-power LEDs are packaged in large-area ceramic packages designed to be attached to a metal heat sink, the interface being a material with high thermal conductivity (thermal grease, phase-change material, thermally conductive pad or thermal adhesive).
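The thermal chain described above reduces to a simple series thermal-resistance calculation: the junction temperature is the ambient temperature plus the dissipated heat multiplied by the sum of the junction-to-case, interface, and heat-sink-to-air thermal resistances. The power and resistance values in the sketch below are illustrative assumptions, not data for a specific part.

```python
# Estimate LED junction temperature from a series thermal-resistance chain.
# All power and resistance values below are illustrative assumptions.
P_electrical = 10.0   # W, electrical input power
efficiency = 0.40     # fraction converted to light (the rest becomes heat)
P_heat = P_electrical * (1.0 - efficiency)

R_junction_case = 2.5   # K/W, LED package (junction to case)
R_interface = 0.5       # K/W, thermal grease / pad between package and heat sink
R_sink_air = 8.0        # K/W, heat sink to ambient air

T_ambient = 35.0        # deg C

T_junction = T_ambient + P_heat * (R_junction_case + R_interface + R_sink_air)
print(f"Heat to dissipate: {P_heat:.1f} W")
print(f"Estimated junction temperature: {T_junction:.0f} deg C")
```

With these assumed numbers the junction sits near 100 °C; a larger heat sink or a better interface material lowers it directly, which is why careful thermal design matters for lifetime.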
If an LED-based lamp is installed in an unventilated luminaire, or a luminaire is located in an environment that does not have free air circulation, the LED is likely to overheat, resulting in reduced life or early catastrophic failure. Thermal design is often based on an ambient temperature of . LEDs used in outdoor applications, such as traffic signals or in-pavement signal lights, and in climates where the temperature within the light fixture gets very high, could experience reduced output or even failure.Conway, K. M. and J. D. Bullough. 1999. Will LEDs transform traffic signals as they did exit signs? Proceedings of the Illuminating Engineering Society of North America Annual Conference (pp. 1–9), New Orleans, Louisiana, August 9–11. New York, NY: Illuminating Engineering Society of North America.
Since LED efficacy is higher at low temperatures, LED technology is well suited for supermarket freezer lighting.Narendran, N., J. Brons, and J. Taylor. 2006. Energy-efficient Alternative for Commercial Refrigeration. Project report prepared for the New York State Energy Research and Development Authority.ASSIST. 2008. Recommendations for Testing and Evaluating Luminaires for Refrigerated and Freezer Display Cases. Vol. 5, Issue 1. Troy, N.Y.: Lighting Research Center.Narendran, N. 2006. Field Test DELTA Snapshots: LED Lighting In Freezer Cases. Troy, N.Y.: Lighting Research Center. Because LEDs produce less waste heat than incandescent lamps, their use in freezers can save on refrigeration costs as well. However, they may be more susceptible to frost and snow buildup than incandescent lamps, so some LED lighting systems have been designed with an added heating circuit. Additionally, research has developed heat sink technologies that will transfer heat produced within the junction to appropriate areas of the light fixture.Gu, Y., A. Baker, and N. Narendran. 2007. Investigation of thermal management technique in blue LED airport taxiway fixtures. Seventh International Conference on Solid State Lighting, Proceedings of SPIE 6669: 66690U.
Colors and materials
Conventional LEDs are made from a variety of inorganic semiconductor materials. The following table shows the available colors with wavelength range, voltage drop, and material:
Color (wavelength range, typical voltage drop ΔV): semiconductor material
Infrared (λ > 760 nm, ΔV < 1.63 V): gallium arsenide (GaAs), aluminium gallium arsenide (AlGaAs)
Red (610 nm < λ < 760 nm, 1.63 V < ΔV < 2.03 V): aluminium gallium arsenide (AlGaAs), gallium arsenide phosphide (GaAsP), aluminium gallium indium phosphide (AlGaInP), gallium(III) phosphide (GaP)
Orange (590 nm < λ < 610 nm, 2.03 V < ΔV < 2.10 V): gallium arsenide phosphide (GaAsP), aluminium gallium indium phosphide (AlGaInP), gallium(III) phosphide (GaP)
Yellow (570 nm < λ < 590 nm, 2.10 V < ΔV < 2.18 V): gallium arsenide phosphide (GaAsP), aluminium gallium indium phosphide (AlGaInP), gallium(III) phosphide (GaP)
Green (500 nm < λ < 570 nm, 1.9 V < ΔV < 4.0 V)OSRAM: green LED. osram-os.com. Retrieved on March 16, 2012.: traditional green: gallium(III) phosphide (GaP), aluminium gallium indium phosphide (AlGaInP), aluminium gallium phosphide (AlGaP); pure green: indium gallium nitride (InGaN) / gallium(III) nitride (GaN)
Blue (450 nm < λ < 500 nm, 2.48 V < ΔV < 3.7 V): zinc selenide (ZnSe), indium gallium nitride (InGaN), silicon carbide (SiC) as substrate, silicon (Si) as substrate (under development)
Violet (400 nm < λ < 450 nm, 2.76 V < ΔV < 4.0 V): indium gallium nitride (InGaN)
Purple (multiple types, 2.48 V < ΔV < 3.7 V): dual blue/red LEDs, blue with red phosphor, or white with purple plastic
Ultraviolet (λ < 400 nm, 3 V < ΔV < 4.1 V): indium gallium nitride (InGaN) (385–400 nm), diamond (235 nm), boron nitride (215 nm), aluminium nitride (AlN) (210 nm), aluminium gallium nitride (AlGaN), aluminium gallium indium nitride (AlGaInN), down to 210 nm
Pink (multiple types, ΔV ≈ 3.3 V)How to Wire/Connect LEDs. Llamma.com. Retrieved on March 16, 2012.: blue with one or two phosphor layers; yellow with red, orange or pink phosphor added afterwards; white with pink plastic; or white phosphors with pink pigment or dye over top.LED types by Color, Brightness, and Chemistry. Donklipstein.com. Retrieved on March 16, 2012.
White (broad spectrum, 2.8 V < ΔV < 4.2 V): cool/pure white: blue/UV diode with yellow phosphor; warm white: blue diode with orange phosphor
Blue and ultraviolet
thumb|upright|Blue LEDs
The first blue-violet LED, using magnesium-doped gallium nitride, was made at Stanford University in 1972 by Herb Maruska and Wally Rhines, doctoral students in materials science and engineering."Nobel Shocker: RCA Had the First Blue LED in 1972". IEEE Spectrum. October 9, 2014"Oregon tech CEO says Nobel Prize in Physics overlooks the actual inventors". The Oregonian. October 16, 2014 At the time Maruska was on leave from RCA Laboratories, where he collaborated with Jacques Pankove on related work. In 1971, the year after Maruska left for Stanford, his RCA colleagues Pankove and Ed Miller demonstrated the first blue electroluminescence from zinc-doped gallium nitride, though the subsequent device they built, the first actual gallium nitride light-emitting diode, emitted green light.Schubert, E. Fred Light-emitting diodes 2nd ed., Cambridge University Press, 2006 ISBN 0-521-86538-7 pp. 16–17Maruska, H. (2005). "A Brief History of GaN Blue Light-Emitting Diodes". LIGHTimes Online – LED Industry News. In 1974 the U.S. Patent Office awarded Maruska, Rhines and Stanford professor David Stevenson a patent for their 1972 work (U.S. Patent US3819974 A); magnesium doping of gallium nitride remains the basis for all commercial blue LEDs and laser diodes. These early-1970s devices had too little light output to be of practical use, and research into gallium nitride devices slowed. In August 1989, Cree introduced the first commercially available blue LED, based on the indirect-bandgap semiconductor silicon carbide (SiC).Major Business and Product Milestones. Cree.com. Retrieved on March 16, 2012. SiC LEDs had very low efficiency, no more than about 0.03%, but did emit in the blue portion of the visible light spectrum.
In the late 1980s, key breakthroughs in GaN epitaxial growth and p-type doping ushered in the modern era of GaN-based optoelectronic devices. Building on this foundation, Theodore Moustakas at Boston University patented a method for producing high-brightness blue LEDs using a new two-step process.Moustakas, Theodore D. "Highly insulating monocrystalline gallium nitride thin films " Issue date: Mar 18, 1991 Two years later, in 1993, high-brightness blue LEDs were demonstrated by Shuji Nakamura of Nichia Corporation using a gallium nitride growth process similar to Moustakas's.Iwasa, Naruhito; Mukai, Takashi and Nakamura, Shuji "Light-emitting gallium nitride-based compound semiconductor device" Issue date: November 26, 1996 Both Moustakas and Nakamura were issued separate patents, which confused the issue of who was the original inventor (partly because although Moustakas invented his process first, Nakamura filed first). This development revolutionized LED lighting, making high-power blue light sources practical and leading to technologies such as Blu-ray and the bright high-resolution screens of modern tablets and phones.
Nakamura was awarded the 2006 Millennium Technology Prize for his invention.2006 Millennium technology prize awarded to UCSB's Shuji Nakamura. Ia.ucsb.edu (June 15, 2006). Retrieved on March 16, 2012.
Nakamura, Hiroshi Amano and Isamu Akasaki were awarded the Nobel Prize in Physics in 2014 for the invention of the blue LED. In 2015, a US court ruled that three companies (i.e. the litigants who had not previously settled out of court) that had licensed Nakamura's patents for production in the United States had infringed Moustakas's prior patent, and ordered them to pay licensing fees of not less than 13 million USD.
By the late 1990s, blue LEDs became widely available. They have an active region consisting of one or more InGaN quantum wells sandwiched between thicker layers of GaN, called cladding layers. By varying the relative In/Ga fraction in the InGaN quantum wells, the light emission can in theory be varied from violet to amber. Aluminium gallium nitride (AlGaN) of varying Al/Ga fraction can be used to manufacture the cladding and quantum well layers for ultraviolet LEDs, but these devices have not yet reached the level of efficiency and technological maturity of InGaN/GaN blue/green devices. If un-alloyed GaN is used in this case to form the active quantum well layers, the device will emit near-ultraviolet light with a peak wavelength centred around 365 nm. Green LEDs manufactured from the InGaN/GaN system are far more efficient and brighter than green LEDs produced with non-nitride material systems, but practical devices still exhibit efficiency too low for high-brightness applications.
With nitrides containing aluminium, most often AlGaN and AlGaInN, even shorter wavelengths are achievable. Ultraviolet LEDs in a range of wavelengths are becoming available on the market. Near-UV emitters at wavelengths around 375–395 nm are already cheap and often encountered, for example, as black light lamp replacements for inspection of anti-counterfeiting UV watermarks in some documents and paper currencies. Shorter-wavelength diodes, while substantially more expensive, are commercially available for wavelengths down to 240 nm. As the photosensitivity of microorganisms approximately matches the absorption spectrum of DNA, with a peak at about 260 nm, UV LEDs emitting at 250–270 nm are expected to appear in prospective disinfection and sterilization devices. Recent research has shown that commercially available UVA LEDs (365 nm) are already effective disinfection and sterilization devices.
UV-C wavelengths were obtained in laboratories using aluminium nitride (210 nm), boron nitride (215 nm) and diamond (235 nm).
RGB
thumb|RGB-SMD-LED
RGB LEDs consist of one red, one green, and one blue LED. By independently adjusting each of the three, RGB LEDs are capable of producing a wide color gamut. Unlike dedicated-color LEDs, however, their mixed output is not a single pure wavelength. Moreover, commercially available modules are often not optimized for smooth color mixing.
White
There are two primary ways of producing white light-emitting diodes (WLEDs), LEDs that generate high-intensity white light. One is to use individual LEDs that emit the three primary colors (red, green, and blue) and then mix all the colors to form white light. The other is to use a phosphor material to convert monochromatic light from a blue or UV LED to broad-spectrum white light, much in the same way a fluorescent light bulb works. Note that the 'whiteness' of the light produced is essentially engineered to suit the human eye; depending on the situation, it may not always be appropriate to think of it as white light.
There are three main methods of mixing colors to produce white light from an LED:
blue LED + green LED + red LED (color mixing; can be used as backlighting for displays, extremely poor for illumination due to gaps in spectrum)
near-UV or UV LED + RGB phosphor (an LED producing light with a wavelength shorter than blue's is used to excite an RGB phosphor)
blue LED + yellow phosphor (two complementary colors combine to form white light; more efficient than first two methods and more commonly used)
Because of metamerism, it is possible to have quite different spectra that appear white. However, the appearance of objects illuminated by that light may vary as the spectrum varies. This is the issue of colour rendition, quite separate from colour temperature: a strongly orange or cyan object could appear with the wrong colour and much darker if the LED or phosphor does not emit the wavelengths it reflects. The best-colour-rendition CFLs and LEDs use a mix of phosphors, resulting in lower efficiency but better quality of light. Although halogen lamps have a more orange colour temperature, they remain the best easily available artificial light source in terms of colour rendition.
RGB systems
thumb|380px|Combined spectral curves for blue, yellow-green, and high-brightness red solid-state semiconductor LEDs. FWHM spectral bandwidth is approximately 24–27 nm for all three colors.
thumb|RGB LED
White light can be formed by mixing differently colored lights; the most common method is to use red, green, and blue (RGB). Hence such devices are called multi-color white LEDs (sometimes referred to as RGB LEDs). Because they need electronic circuits to control the blending and diffusion of the different colors, and because the individual color LEDs typically have slightly different emission patterns (leading to variation of the color depending on direction) even when made as a single unit, these are seldom used to produce white lighting. Nonetheless, this method has many applications because of the flexibility of mixing different colors, and in principle, this mechanism also has higher quantum efficiency in producing white light.
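To give a concrete sense of what the control electronics have to do, the sketch below solves for the relative luminances of hypothetical red, green and blue emitters so that their mixture lands on a chosen white point in CIE 1931 chromaticity space. The chromaticity coordinates used are plausible but assumed values, not data for specific devices.

```python
import numpy as np

def xy_to_XYZ(x, y, Y=1.0):
    """Convert CIE 1931 chromaticity (x, y) plus luminance Y to tristimulus XYZ."""
    return np.array([Y * x / y, Y, Y * (1.0 - x - y) / y])

# Assumed chromaticities for the three emitters (illustrative values).
primaries = {
    "red":   (0.69, 0.31),
    "green": (0.17, 0.70),
    "blue":  (0.14, 0.05),
}
target_white = (0.3128, 0.3290)   # approximately D65

# Columns hold the XYZ of each primary at unit luminance; solve M @ w = XYZ_white.
M = np.column_stack([xy_to_XYZ(x, y) for x, y in primaries.values()])
weights = np.linalg.solve(M, xy_to_XYZ(*target_white))

for name, w in zip(primaries, weights):
    print(f"{name:5s}: relative luminance {w:.3f}")
```

In a real fixture the drive currents then have to be mapped to these luminances and remapped as the emitters age or heat up, which is why such lamps need active control electronics.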
There are several types of multi-color white LEDs: di-, tri-, and tetrachromatic white LEDs. Key factors that differ among these approaches include color stability, color rendering capability, and luminous efficacy. Often, higher efficacy means lower color rendering, presenting a trade-off between the two. For example, dichromatic white LEDs have the best luminous efficacy (120 lm/W) but the lowest color rendering capability. Tetrachromatic white LEDs, conversely, have excellent color rendering capability but often poor luminous efficacy. Trichromatic white LEDs are in between, having both good luminous efficacy (>70 lm/W) and fair color rendering capability.
One of the challenges is the development of more efficient green LEDs. The theoretical maximum for green LEDs is 683 lumens per watt but as of 2010 few green LEDs exceed even 100 lumens per watt. The blue and red LEDs get closer to their theoretical limits.
Multi-color LEDs offer not merely another means to form white light but a new means to form light of different colors. Most perceivable colors can be formed by mixing different amounts of three primary colors. This allows precise dynamic color control. As more effort is devoted to investigating this method, multi-color LEDs should have a profound influence on the fundamental method we use to produce and control light color. However, before this type of LED can play a role in the market, several technical problems must be solved. These include that this type of LED's emission power decays exponentially with rising temperature, resulting in a substantial change in color stability. Such problems inhibit, and may preclude, industrial use. Thus, many new package designs aimed at solving this problem have been proposed, and their results are now being reproduced by researchers and scientists. However, multi-colour LEDs without phosphors can never provide good-quality lighting because each LED is a narrow-band source (see graph). LEDs without phosphor, while a poorer solution for general lighting, are the best solution for displays, either as LCD backlights or as direct LED-based pixels.
Correlated color temperature (CCT) dimming for LED technology is regarded as a difficult task, since binning, age, and temperature drift effects of LEDs change the actual color output. Feedback loop systems are used, for example with color sensors, to actively monitor and control the color output of multiple color-mixing LEDs.
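A minimal sketch of such a feedback loop is shown below: a proportional controller nudges the mixing ratio of two channels until a hypothetical color sensor reports the target correlated color temperature. The sensor model, gain and channel behaviour are invented for illustration; a real controller would work from measured chromaticity and calibrated channel data.

```python
# Toy proportional feedback loop holding a CCT setpoint with two channels
# (warm and cool white). The "sensor" below is a stand-in linear model,
# purely for illustration; real hardware would supply the measurement.

WARM_CCT, COOL_CCT = 2700.0, 6500.0   # assumed channel color temperatures, K

def measured_cct(cool_fraction):
    """Hypothetical sensor reading: linear blend of the two channels."""
    return WARM_CCT + cool_fraction * (COOL_CCT - WARM_CCT)

def regulate(target_cct, gain=0.0002, steps=20):
    cool_fraction = 0.5                # start with a 50/50 mix
    for _ in range(steps):
        error = target_cct - measured_cct(cool_fraction)
        cool_fraction += gain * error  # proportional correction
        cool_fraction = min(max(cool_fraction, 0.0), 1.0)
    return cool_fraction

frac = regulate(4000.0)
print(f"Cool-channel fraction: {frac:.3f}, sensor reads {measured_cct(frac):.0f} K")
```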
Phosphor-based LEDs
thumb|right|350px|Spectrum of a white LED showing blue light directly emitted by the GaN-based LED (peak at about 465 nm) and the more broadband Stokes-shifted light emitted by the Ce3+:YAG phosphor, which emits at roughly 500–700 nm
This method involves coating LEDs of one color (mostly blue LEDs made of InGaN) with phosphors of different colors to form white light; the resultant LEDs are called phosphor-based or phosphor-converted white LEDs (pcLEDs). A fraction of the blue light undergoes the Stokes shift being transformed from shorter wavelengths to longer. Depending on the color of the original LED, phosphors of different colors can be employed. If several phosphor layers of distinct colors are applied, the emitted spectrum is broadened, effectively raising the color rendering index (CRI) value of a given LED.
Phosphor-based LED efficiency losses are due to the heat loss from the Stokes shift and also other phosphor-related degradation issues. Their luminous efficacies compared to normal LEDs depend on the spectral distribution of the resultant light output and the original wavelength of the LED itself. For example, the luminous efficacy of a typical YAG yellow phosphor based white LED ranges from 3 to 5 times the luminous efficacy of the original blue LED because of the human eye's greater sensitivity to yellow than to blue (as modeled in the luminosity function). Due to the simplicity of manufacturing, the phosphor method is still the most popular method for making high-intensity white LEDs. The design and production of a light source or light fixture using a monochrome emitter with phosphor conversion is simpler and cheaper than a complex RGB system, and the majority of high-intensity white LEDs presently on the market are manufactured using phosphor light conversion.
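The efficacy gain from converting blue light to yellow can be seen by weighting each spectrum with the eye's photopic sensitivity curve V(λ): luminous flux is 683 lm per optical watt times the integral of V(λ) against the spectral power distribution. The sketch below uses a crude Gaussian stand-in for the CIE photopic curve and idealised narrow emission bands, so the numbers only indicate the trend, not real device values.

```python
import math

def photopic_V(lam_nm):
    """Crude Gaussian approximation of the CIE photopic sensitivity curve (peak 555 nm)."""
    return math.exp(-0.5 * ((lam_nm - 555.0) / 42.0) ** 2)

def luminous_efficacy_of_radiation(peak_nm, width_nm=15.0):
    """lm per optical watt for an idealised Gaussian emission band (illustrative only)."""
    lams = [peak_nm + d for d in range(-100, 101)]
    spd = [math.exp(-0.5 * ((l - peak_nm) / width_nm) ** 2) for l in lams]
    weighted = sum(s * photopic_V(l) for s, l in zip(spd, lams))
    return 683.0 * weighted / sum(spd)

for name, peak in [("blue LED (465 nm)", 465), ("yellow phosphor (570 nm)", 570)]:
    print(f"{name}: ~{luminous_efficacy_of_radiation(peak):.0f} lm per optical watt")
```

The yellow band lands near the peak of the eye's sensitivity and the blue band far from it, which is the reason the converted output carries several times more lumens per optical watt.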
Among the challenges being faced to improve the efficiency of LED-based white light sources is the development of more efficient phosphors. As of 2010, the most efficient yellow phosphor is still the YAG phosphor, with less than 10% Stokes shift loss. Losses attributable to internal optical losses due to re-absorption in the LED chip and in the LED packaging itself account typically for another 10% to 30% of efficiency loss. Currently, in the area of phosphor LED development, much effort is being spent on optimizing these devices to higher light output and higher operation temperatures. For instance, the efficiency can be raised by adapting better package design or by using a more suitable type of phosphor. Conformal coating process is frequently used to address the issue of varying phosphor thickness.
Some phosphor-based white LEDs encapsulate InGaN blue LEDs inside phosphor-coated epoxy. Alternatively, the LED might be paired with a remote phosphor, a preformed polycarbonate piece coated with the phosphor material. Remote phosphors provide more diffuse light, which is desirable for many applications. Remote phosphor designs are also more tolerant of variations in the LED emissions spectrum. A common yellow phosphor material is cerium-doped yttrium aluminium garnet (Ce3+:YAG).
White LEDs can also be made by coating near-ultraviolet (NUV) LEDs with a mixture of high-efficiency europium-based phosphors that emit red and blue, plus copper and aluminium-doped zinc sulfide (ZnS:Cu, Al) that emits green. This is a method analogous to the way fluorescent lamps work. This method is less efficient than blue LEDs with YAG:Ce phosphor, as the Stokes shift is larger, so more energy is converted to heat, but yields light with better spectral characteristics, which render color better. Due to the higher radiative output of the ultraviolet LEDs than of the blue ones, both methods offer comparable brightness. A concern is that UV light may leak from a malfunctioning light source and cause harm to human eyes or skin.
Other white LEDs
Another method used to produce experimental white-light LEDs employed no phosphors at all; it was based on homoepitaxially grown zinc selenide (ZnSe) on a ZnSe substrate that simultaneously emitted blue light from its active region and yellow light from the substrate.
A new style of wafers composed of gallium-nitride-on-silicon (GaN-on-Si) is being used to produce white LEDs using 200-mm silicon wafers. This avoids the typical costly sapphire substrate in relatively small 100- or 150-mm wafer sizes.Next-Generation GaN-on-Si White LEDs Suppress Costs, Electronic Design, 19 November 2013 The sapphire apparatus must be coupled with a mirror-like collector to reflect light that would otherwise be wasted. It is predicted that by 2020, 40% of all GaN LEDs will be made with GaN-on-Si. Manufacturing large sapphire material is difficult, while large silicon material is cheaper and more abundant. For LED companies, shifting from sapphire to silicon should require only a minimal investment.GaN-on-Silicon LEDs Forecast to Increase Market Share to 40 Percent by 2020, iSuppli, 4 December 2013
Organic light-emitting diodes (OLEDs)
thumb|Demonstration of a flexible OLED device
thumb|Orange light-emitting diode
In an organic light-emitting diode (OLED), the electroluminescent material comprising the emissive layer of the diode is an organic compound. The organic material is electrically conductive due to the delocalization of pi electrons caused by conjugation over all or part of the molecule, and the material therefore functions as an organic semiconductor. The organic materials can be small organic molecules in a crystalline phase, or polymers.
The potential advantages of OLEDs include thin, low-cost displays with a low driving voltage, wide viewing angle, and high contrast and color gamut. Polymer LEDs have the added benefit of printable and flexible displays. OLEDs have been used to make visual displays for portable electronic devices such as cellphones, digital cameras, and MP3 players while possible future uses include lighting and televisions.
Quantum dot LEDs
Quantum dots (QD) are semiconductor nanocrystals whose optical properties allow their emission color to be tuned from the visible into the infrared spectrum.Quantum-dot LED may be screen of choice for future electronics Massachusetts Institute of Technology News Office, December 18, 2002 This allows quantum dot LEDs to create almost any color on the CIE diagram. This provides more color options and better color rendering than white LEDs since the emission spectrum is much narrower, characteristic of quantum confined states.
There are two types of schemes for QD excitation. One uses photo excitation with a primary light source LED (typically blue or UV LEDs are used). The other is direct electrical excitation first demonstrated by Alivisatos et al.
One example of the photo-excitation scheme is a method developed by Michael Bowers, at Vanderbilt University in Nashville, involving coating a blue LED with quantum dots that glow white in response to the blue light from the LED. This method emits a warm, yellowish-white light similar to that made by incandescent light bulbs. Quantum dots are also being considered for use in white light-emitting diodes in liquid crystal display (LCD) televisions.Nanoco Signs Agreement with Major Japanese Electronics Company, September 23, 2009
In February 2011, scientists at PlasmaChem GmbH synthesized quantum dots for LED applications and built a light converter based on them, which could efficiently convert light from blue to any other color for many hundreds of hours.Nanotechnologie Aktuell, pp. 98–99, v. 4, 2011, ISSN 1866-4997 Such QDs can be used to emit visible or near-infrared light of any wavelength when excited by light of a shorter wavelength.
The structure of QD-LEDs used for the electrical-excitation scheme is similar to the basic design of OLEDs. A layer of quantum dots is sandwiched between layers of electron-transporting and hole-transporting materials. An applied electric field causes electrons and holes to move into the quantum dot layer and recombine, forming an exciton that excites a QD. This scheme is commonly studied for quantum dot displays. The tunability of emission wavelengths and narrow bandwidth are also beneficial as excitation sources for fluorescence imaging. Fluorescence near-field scanning optical microscopy (NSOM) utilizing an integrated QD-LED has been demonstrated.
In February 2008, a luminous efficacy of 300 lumens of visible light per watt of radiation (not per electrical watt) and warm-light emission was achieved by using nanocrystals.
Types
thumb|center|750px|LEDs are produced in a variety of shapes and sizes. The color of the plastic lens is often the same as the actual color of light emitted, but not always. For instance, purple plastic is often used for infrared LEDs, and most blue devices have colorless housings. Modern high-power LEDs such as those used for lighting and backlighting are generally found in surface-mount technology (SMT) packages (not shown).
The main types of LEDs are miniature, high-power devices and custom designs such as alphanumeric or multi-color.What is the difference between 3528 LEDs and 5050 LEDs |SMD 5050 SMD 3528. Flexfireleds.com. Retrieved on March 16, 2012.
Miniature
thumb|Photo of miniature surface-mount LEDs in the most common sizes. They can be much smaller than the traditional 5 mm lamp-type LED shown in the upper left corner.
thumb|Very small (1.6x1.6x0.35 mm) red, green, and blue surface mount miniature LED package with gold wire bonding details.
These are mostly single-die LEDs used as indicators, and they come in various sizes from 2 mm to 8 mm, in through-hole and surface-mount packages. They usually do not use a separate heat sink.LED-design. Elektor.com. Retrieved on March 16, 2012. Typical current ratings range from around 1 mA to above 20 mA. The small size sets a natural upper boundary on power consumption due to heat caused by the high current density and the need for a heat sink. They are often daisy-chained, as in LED tapes.
Common package shapes include round, with a domed or flat top, rectangular with a flat top (as used in bar-graph displays), and triangular or square with a flat top.
The encapsulation may also be clear or tinted to improve contrast and viewing angle.
Researchers at the University of Washington have invented the thinnest LED. It is made of two-dimensional (2-D) flexible materials. It is three atoms thick, 10 to 20 times thinner than three-dimensional (3-D) LEDs and 10,000 times thinner than a human hair. These 2-D LEDs may make it possible to create smaller, more energy-efficient lighting, optical communication, and nanolasers.Researchers build thinnest-known LED, The State Column, 10 March 2014, Aaron Sims
There are three main categories of miniature single die LEDs:
Low-current
Typically rated for 2 mA at around 2 V (approximately 4 mW consumption)
Standard
20 mA LEDs (ranging from approximately 40 mW to 90 mW) at around:
1.9 to 2.1 V for red, orange, yellow, and traditional green
3.0 to 3.4 V for pure green and blue
2.9 to 4.2 V for violet, pink, purple and white
Ultra-high-output
20 mA at approximately 2 or 4–5 V, designed for viewing in direct sunlight
5 V and 12 V LEDs are ordinary miniature LEDs that incorporate a suitable series resistor for direct connection to a 5 V or 12 V supply.
High-power
thumb|High-power light-emitting diodes attached to an LED star base (Luxeon, Lumileds)
High-power LEDs (HP-LEDs) or high-output LEDs (HO-LEDs) can be driven at currents from hundreds of mA to more than an ampere, compared with the tens of mA for other LEDs. Some can emit over a thousand lumens. LED power densities up to 300 W/cm2 have been achieved.Poensgen, Tobias (January 22, 2013) InfiniLED MicroLEDs achieve Ultra-High Light Intensity. infiniled.com Since overheating is destructive, the HP-LEDs must be mounted on a heat sink to allow for heat dissipation. If the heat from an HP-LED is not removed, the device will fail in seconds. One HP-LED can often replace an incandescent bulb in a flashlight, or be set in an array to form a powerful LED lamp.
Some well-known HP-LEDs in this category are the Nichia 19 series, Lumileds Rebel LED, Osram Opto Semiconductors Golden Dragon, and Cree X-lamp. As of September 2009, some HP-LEDs manufactured by Cree exceed 105 lm/W.
Examples of Haitz's law, which predicts an exponential rise in the light output and efficacy of LEDs over time, are the Cree XP-G series LED, which achieved 105 lm/W in 2009, and the Nichia 19 series, with a typical efficacy of 140 lm/W, released in 2010.High Power Point Source White Led NVSx219A. Nichia.co.jp, November 2, 2010.
AC driven
LEDs have been developed by Seoul Semiconductor that can operate on AC power without a DC converter. For each half-cycle, part of the LED emits light and part is dark, and this is reversed during the next half-cycle. The efficacy of this type of HP-LED is typically 40 lm/W. A large number of LED elements in series may be able to operate directly from line voltage. In 2009, Seoul Semiconductor released a high-DC-voltage LED, named 'Acrich MJT', capable of being driven from AC power with a simple controlling circuit. The low power dissipation of these LEDs affords them more flexibility than the original AC LED design.
Application-specific variations
Flashing
Flashing LEDs are used as attention seeking indicators without requiring external electronics. Flashing LEDs resemble standard LEDs but they contain an integrated multivibrator circuit that causes the LED to flash with a typical period of one second. In diffused lens LEDs, this circuit is visible as a small black dot. Most flashing LEDs emit light of one color, but more sophisticated devices can flash between multiple colors and even fade through a color sequence using RGB color mixing.
Bi-color
Bi-color LEDs contain two different LED emitters in one case. There are two types of these. One type consists of two dies connected to the same two leads antiparallel to each other. Current flow in one direction emits one color, and current in the opposite direction emits the other color. The other type consists of two dies with separate leads for both dies and another lead for common anode or cathode so that they can be controlled independently. The most common bi-color combination is red/traditional green; however, other available combinations include amber/traditional green, red/pure green, red/blue, and blue/pure green.
Tri-color
Tri-color LEDs contain three different LED emitters in one case. Each emitter is connected to a separate lead so they can be controlled independently. A four-lead arrangement is typical with one common lead (anode or cathode) and an additional lead for each color.
RGB
RGB LEDs are tri-color LEDs with red, green, and blue emitters, in general using a four-wire connection with one common lead (anode or cathode). These LEDs can have either common positive or common negative leads. Others, however, have only two leads (positive and negative) and have a built-in tiny electronic control unit.
Decorative-multicolor
Decorative-multicolor LEDs incorporate several emitters of different colors supplied by only two lead-out wires. Colors are switched internally by varying the supply voltage.
Alphanumeric
Alphanumeric LEDs are available in seven-segment, starburst, and dot-matrix format. Seven-segment displays handle all numbers and a limited set of letters. Starburst displays can display all letters. Dot-matrix displays typically use 5x7 pixels per character. Seven-segment LED displays were in widespread use in the 1970s and 1980s, but rising use of liquid crystal displays, with their lower power needs and greater display flexibility, has reduced the popularity of numeric and alphanumeric LED displays.
Digital-RGB
Digital-RGB LEDs are RGB LEDs that contain their own "smart" control electronics. In addition to power and ground, these provide connections for data-in, data-out, and sometimes a clock or strobe signal. These are connected in a daisy chain, with the data input of the first LED sourced by a microprocessor, which can control the brightness and color of each LED independently of the others. They are used where a combination of maximum control and minimum visible electronics is needed, such as Christmas light strings and LED matrices. Some even have refresh rates in the kHz range, allowing for basic video applications.
Filament
An LED filament consists of multiple LED chips connected in series on a common longitudinal substrate that forms a thin rod reminiscent of a traditional incandescent filament. These are being used as a low-cost decorative alternative for traditional light bulbs that are being phased out in many countries. The filaments require a rather high voltage to light to nominal brightness, allowing them to work efficiently and simply with mains voltages. Often a simple rectifier and capacitive current limiting are employed to create a low-cost replacement for a traditional light bulb without the complexity of creating a low voltage, high current converter which is required by single die LEDs. Usually, they are packaged in a sealed enclosure with a shape similar to lamps they were designed to replace (e.g. a bulb) and filled with inert nitrogen or carbon dioxide gas to remove heat efficiently.
Considerations for use
Power sources
thumb|upright|Simple LED circuit with resistor for current limiting
The current–voltage characteristic of an LED is similar to that of other diodes, in that the current depends exponentially on the voltage (see Shockley diode equation). This means that a small change in voltage can cause a large change in current. If the applied voltage exceeds the LED's forward voltage drop by a small amount, the current rating may be exceeded by a large amount, potentially damaging or destroying the LED. The typical solution is to use constant-current power supplies to keep the current below the LED's maximum current rating. Since most common power sources (batteries, mains) are constant-voltage sources, most LED fixtures must include a power converter, or at least a current-limiting resistor.
However, the high resistance of three-volt coin cells combined with the high differential resistance of nitride-based LEDs makes it possible to power such an LED from such a coin cell without an external resistor.
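The exponential current–voltage relation and the usual resistor fix can both be put in numbers. The sketch below evaluates the Shockley diode equation for a hypothetical red indicator LED to show how sharply current rises with a small over-voltage, then sizes a series resistor for a chosen operating current; the saturation current, ideality factor, forward voltage and supply voltage used here are illustrative assumptions, not datasheet values.

```python
import math

# Shockley diode equation with illustrative parameters (not from a real datasheet).
I_S = 2e-18      # saturation current, A (assumed)
N = 2.0          # ideality factor (assumed)
V_T = 0.02585    # thermal voltage at ~300 K, V

def diode_current(v):
    return I_S * (math.exp(v / (N * V_T)) - 1.0)

for v in (1.8, 1.9, 2.0):   # a 100 mV step changes the current several-fold
    print(f"V = {v:.1f} V  ->  I = {diode_current(v)*1000:.1f} mA")

# Series resistor for a constant-voltage supply: R = (V_supply - V_forward) / I_target.
V_supply, V_forward, I_target = 5.0, 1.9, 0.020   # V, V, A (assumed values)
R = (V_supply - V_forward) / I_target
print(f"Series resistor: {R:.0f} ohms, dissipating {I_target**2 * R * 1000:.0f} mW")
```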
Electrical polarity
As with all diodes, current flows easily from p-type to n-type material.
However, no current flows and no light is emitted if a small voltage is applied in the reverse direction. If the reverse voltage grows large enough to exceed the breakdown voltage, a large current flows and the LED may be damaged. If the reverse current is sufficiently limited to avoid damage, the reverse-conducting LED is a useful noise diode.
Safety and health
The vast majority of devices containing LEDs are "safe under all conditions of normal use", and so are classified as "Class 1 LED product"/"LED Klasse 1". At present, only a few LEDs—extremely bright LEDs that also have a tightly focused viewing angle of 8° or less—could, in theory, cause temporary blindness, and so are classified as "Class 2"."Visible LED Device Classifications". Datasheetarchive.com. Retrieved on March 16, 2012.
The opinion of the French Agency for Food, Environmental and Occupational Health & Safety (ANSES) of 2010, on the health issues concerning LEDs, suggested banning public use of lamps which were in the moderate Risk Group 2, especially those with a high blue component in places frequented by children.Opinion of the French Agency for Food, Environmental and Occupational Health & Safety, ANSES Opinion, October 19, 2010.
In general, laser safety regulations—and the "Class 1", "Class 2", etc. system—also apply to LEDs."Eye Safety and LED (Light Emitting Diode) diffusion": "The relevant standard for LED lighting is EN 60825-1:2001 (Safety of laser products) ... The standard states that throughout the standard "light emitting diodes (LED) are included whenever the word "laser" is used."
While LEDs have the advantage over fluorescent lamps that they do not contain mercury, they may contain other hazardous metals such as lead and arsenic. Regarding the toxicity of LEDs when treated as waste, a study published in 2011 stated: "According to federal standards, LEDs are not hazardous except for low-intensity red LEDs, which leached Pb [lead] at levels exceeding regulatory limits (186 mg/L; regulatory limit: 5). However, according to California regulations, excessive levels of copper (up to 3892 mg/kg; limit: 2500), lead (up to 8103 mg/kg; limit: 1000), nickel (up to 4797 mg/kg; limit: 2000), or silver (up to 721 mg/kg; limit: 500) render all except low-intensity yellow LEDs hazardous."
Advantages
Efficiency: LEDs emit more lumens per watt than incandescent light bulbs. The efficiency of LED lighting fixtures is not affected by shape and size, unlike fluorescent light bulbs or tubes.
Color: LEDs can emit light of an intended color without using any color filters as traditional lighting methods need. This is more efficient and can lower initial costs.
Size: LEDs can be very small (smaller than 2 mm2) and are easily attached to printed circuit boards.
Warmup time: LEDs light up very quickly. A typical red indicator LED will achieve full brightness in under a microsecond. LEDs used in communications devices can have even faster response times.
Cycling: LEDs are ideal for uses subject to frequent on-off cycling, unlike incandescent and fluorescent lamps that fail faster when cycled often, or high-intensity discharge lamps (HID lamps) that require a long time before restarting.
Dimming: LEDs can very easily be dimmed either by pulse-width modulation or by lowering the forward current. This pulse-width modulation is why LED lights, particularly headlights on cars, can appear to be flashing or flickering when viewed on camera or by some people. This is a type of stroboscopic effect. A simple duty-cycle sketch follows this list.
Cool light: In contrast to most light sources, LEDs radiate very little heat in the form of IR that can cause damage to sensitive objects or fabrics. Wasted energy is dispersed as heat through the base of the LED.
Slow failure: LEDs mostly fail by dimming over time, rather than the abrupt failure of incandescent bulbs.
Lifetime: LEDs can have a relatively long useful life. One report estimates 35,000 to 50,000 hours of useful life, though time to complete failure may be longer.Lifetime of White LEDs. US Department of Energy. (PDF) . Retrieved on March 16, 2012. Fluorescent tubes typically are rated at about 10,000 to 15,000 hours, depending partly on the conditions of use, and incandescent light bulbs at 1,000 to 2,000 hours. Several DOE demonstrations have shown that reduced maintenance costs from this extended lifetime, rather than energy savings, is the primary factor in determining the payback period for an LED product.
Shock resistance: LEDs, being solid-state components, are difficult to damage with external shock, unlike fluorescent and incandescent bulbs, which are fragile.
Focus: The solid package of the LED can be designed to focus its light. Incandescent and fluorescent sources often require an external reflector to collect light and direct it in a usable manner. For larger LED packages total internal reflection (TIR) lenses are often used to the same effect. However, when large quantities of light are needed many light sources are usually deployed, which are difficult to focus or collimate towards the same target.
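As mentioned in the dimming entry above, brightness under pulse-width modulation is set by the duty cycle rather than the drive current. The short sketch below computes the average current and the on/off timing for an assumed PWM frequency; all values are illustrative, and whether flicker is visible depends on the frequency and the observer.

```python
# Illustrative PWM dimming arithmetic (assumed values, not from a datasheet).
peak_current_mA = 350.0   # current while the LED is on
pwm_frequency_Hz = 500.0  # switching frequency
duty_cycle = 0.25         # fraction of each period the LED is on

average_current_mA = peak_current_mA * duty_cycle
period_ms = 1000.0 / pwm_frequency_Hz
on_time_ms = period_ms * duty_cycle

print(f"Average current: {average_current_mA:.0f} mA (~{duty_cycle:.0%} of full output)")
print(f"Period: {period_ms:.1f} ms, of which the LED is on for {on_time_ms:.2f} ms")
```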
Disadvantages
Initial price: LEDs are currently slightly more expensive (price per lumen) on an initial capital cost basis than other lighting technologies; at least one manufacturer claims to have reached $1 per kilolumen. The additional expense partially stems from the relatively low lumen output and the drive circuitry and power supplies needed.
Temperature dependence: LED performance largely depends on the ambient temperature of the operating environment – or thermal management properties. Overdriving an LED in high ambient temperatures may result in overheating the LED package, eventually leading to device failure. An adequate heat sink is needed to maintain long life. This is especially important in automotive, medical, and military uses where devices must operate over a wide range of temperatures, which require low failure rates. Toshiba has produced LEDs with an operating temperature range of −40 to 100 °C, which suits the LEDs for both indoor and outdoor use in applications such as lamps, ceiling lighting, street lights, and floodlights.
Voltage sensitivity: LEDs must be supplied with a voltage above their threshold voltage and a current below their rating. Current and lifetime change greatly with a small change in applied voltage. They thus require a current-regulated supply (usually just a series resistor for indicator LEDs).The Led Museum. The Led Museum. Retrieved on March 16, 2012.
Color rendition: Most cool-white LEDs have spectra that differ significantly from a black body radiator like the sun or an incandescent light. The spike at 460 nm and dip at 500 nm can cause the color of objects to be perceived differently under cool-white LED illumination than sunlight or incandescent sources, due to metamerism, red surfaces being rendered particularly poorly by typical phosphor-based cool-white LEDs.
Area light source: Single LEDs do not approximate a point source of light giving a spherical light distribution; instead they have a Lambertian distribution. So LEDs are difficult to apply in uses needing a spherical light field; however, different fields of light can be manipulated by the application of different optics or "lenses". LEDs cannot provide divergence below a few degrees. In contrast, lasers can emit beams with divergences of 0.2 degrees or less.
Electrical polarity: Unlike incandescent light bulbs, which illuminate regardless of the electrical polarity, LEDs will only light with correct electrical polarity. To automatically match source polarity to LED devices, rectifiers can be used.
Blue hazard: There is a concern that blue LEDs and cool-white LEDs are now capable of exceeding safe limits of the so-called blue-light hazard as defined in eye safety specifications such as ANSI/IESNA RP-27.1–05: Recommended Practice for Photobiological Safety for Lamp and Lamp Systems.
Light pollution: Because white LEDs, especially those with high color temperature, emit much more short wavelength light than conventional outdoor light sources such as high-pressure sodium vapor lamps, the increased blue and green sensitivity of scotopic vision means that white LEDs used in outdoor lighting cause substantially more sky glow. The American Medical Association has warned about the use of high-blue-content white LEDs in street lighting, due to their greater impact on human health and the environment compared with low-blue-content light sources (e.g. high-pressure sodium, PC amber LEDs, and low-CCT LEDs).
Efficiency droop: The efficiency of LEDs decreases as the electric current increases. Heating also increases with higher currents which compromises the lifetime of the LED. These effects put practical limits on the current through an LED in high power applications.
Impact on insects: LEDs are much more attractive to insects than sodium-vapor lights, so much so that there has been speculative concern about the possibility of disruption to food webs.
Use in winter conditions: Since they do not give off much heat in comparison to incandescent lights, LED lights used for traffic control can have snow obscuring them, leading to accidents.
Applications
LED uses fall into four major categories:
Visual signals where light goes more or less directly from the source to the human eye, to convey a message or meaning
Illumination where light is reflected from objects to give visual response of these objects
Measuring and interacting with processes involving no human visionEuropean Photonics Industry Consortium (EPIC). This includes use in data communications over fiber optics as well as "broadcast" data or signaling.
Narrow band light sensors where LEDs operate in a reverse-bias mode and respond to incident light, instead of emitting lightForrest M. Mims III. "An Inexpensive and Accurate Student Sun Photometer with Light-Emitting Diodes as Spectrally Selective Detectors".(1998 ?)"Water Vapor Measurements with LED Detectors". cs.drexel.edu (2002).Dziekan, Mike (February 6, 2009) "Using Light-Emitting Diodes as Sensors". soamsci.or. Ben-ezra, Moshe; Wang, Jiaping; Wilburn, Bennett; Li, Xiaoyang and Ma, Le. "An LED-only BRDF Measurement Device"
Indicators and signs
The low energy consumption, low maintenance, and small size of LEDs have led to their use as status indicators and displays on a variety of equipment and installations. Large-area LED displays are used as stadium displays, dynamic decorative displays, and dynamic message signs on freeways. Thin, lightweight message displays are used at airports and railway stations, and as destination displays for trains, buses, trams, and ferries.
thumb|upright|Red and green LED traffic signals
One-color light is well suited for traffic lights and signals, exit signs, emergency vehicle lighting, ships' navigation lights or lanterns (chromaticity and luminance standards being set under the Convention on the International Regulations for Preventing Collisions at Sea 1972, Annex I and the CIE) and LED-based Christmas lights. In cold climates, LED traffic lights may remain snow-covered.LED advantages outweigh potential snow hazards in traffic signals, LEDs magazine January 7, 2010 Red or yellow LEDs are used in indicator and alphanumeric displays in environments where night vision must be retained: aircraft cockpits, submarine and ship bridges, astronomy observatories, and in the field, e.g. night-time animal watching and military field use.
thumb|Automotive applications for LEDs continue to grow.
Because of their long life, fast switching times, and ability to be seen in broad daylight due to their high output and focus, LEDs have been used for some time in the high-mounted brake lights of cars, trucks, and buses and in turn signals, and many vehicles now use LEDs for their entire rear light clusters. Their use in brake lights improves safety, because they light fully up to 0.5 seconds faster than an incandescent bulb, giving drivers behind more time to react. In a dual-intensity circuit (rear markers and brakes), if the LEDs are not pulsed at a fast enough frequency, they can create a phantom array, where ghost images of the LED appear if the eyes quickly scan across the array. White LED headlamps are starting to be used. Using LEDs has styling advantages because LEDs can form much thinner lights than incandescent lamps with parabolic reflectors.
Due to the relative cheapness of low output LEDs, they are also used in many temporary uses such as glowsticks, throwies, and the photonic textile Lumalive. Artists have also used LEDs for LED art.
Weather and all-hazards radio receivers with Specific Area Message Encoding (SAME) have three LEDs: red for warnings, orange for watches, and yellow for advisories and statements whenever issued.
Lighting
With the development of high-efficiency and high-power LEDs, it has become possible to use LEDs in lighting and illumination. To encourage the shift to LED lamps and other high-efficiency lighting, the US Department of Energy has created the L Prize competition. The Philips Lighting North America LED bulb won the first competition on August 3, 2011, after successfully completing 18 months of intensive field, lab, and product testing."L-Prize U.S. Department of Energy", L-Prize Website, August 3, 2011
LEDs are used as street lights and in other architectural lighting. Their mechanical robustness and long lifetime make them well suited to automotive lighting on cars and motorcycles and to bicycle lights. LED light emission may be efficiently controlled by using nonimaging optics principles.
LED street lights are employed on poles and in parking garages. In 2007, the Italian village of Torraca was the first place to convert its entire illumination system to LEDs.LED There Be Light, Scientific American, March 18, 2009
LEDs are used in aviation lighting. Airbus has used LED lighting in its Airbus A320 Enhanced since 2007, and Boeing uses LED lighting in the 787. LEDs are also being used now in airport and heliport lighting. LED airport fixtures currently include medium-intensity runway lights, runway centerline lights, taxiway centerline and edge lights, guidance signs, and obstruction lighting.
LEDs are also used as a light source for DLP projectors, and to backlight LCD televisions (referred to as LED TVs) and laptop displays. RGB LEDs raise the color gamut by as much as 45%. Screens for TV and computer displays can be made thinner using LEDs for backlighting.
The lack of IR or heat radiation makes LEDs ideal for stage lights, where banks of RGB LEDs can easily change color and reduce the heating associated with traditional stage lighting, as well as for medical lighting where IR radiation can be harmful. For energy conservation, the lower heat output of LEDs also means air conditioning (cooling) systems have less heat to remove.
LEDs are small, durable and need little power, so they are used in handheld devices such as flashlights. LED strobe lights or camera flashes operate at a safe, low voltage, instead of the 250+ volts commonly found in xenon flashlamp-based lighting. This is especially useful in cameras on mobile phones, where space is at a premium and bulky voltage-raising circuitry is undesirable.
LEDs are used for infrared illumination in night vision uses including security cameras. A ring of LEDs around a video camera, aimed forward into a retroreflective background, allows chroma keying in video productions.
thumb|LED to be used for miners, to increase visibility inside mines
LEDs are used in mining operations, as cap lamps to provide light for miners. Research has been done to improve LEDs for mining, to reduce glare and to increase illumination, reducing risk of injury to the miners.
LEDs are now used commonly in all market areas from commercial to home use: standard lighting, AV, stage, theatrical, architectural, and public installations, and wherever artificial light is used.
LEDs are increasingly finding uses in medical and educational applications, for example as mood enhancement, and new technologies such as AmBX, exploiting LED versatility. NASA has even sponsored research for the use of LEDs to promote health for astronauts.
Data communication and other signalling
Light can be used to transmit data and analog signals. For example, white lighting LEDs can be used in systems that assist people in navigating enclosed spaces while searching for particular rooms or objects.
Assistive listening devices in many theaters and similar spaces use arrays of infrared LEDs to send sound to listeners' receivers. Light-emitting diodes (as well as semiconductor lasers) are used to send data over many types of fiber optic cable, from digital audio over TOSLINK cables to the very high bandwidth fiber links that form the Internet backbone. For some time, computers were commonly equipped with IrDA interfaces, which allowed them to send and receive data to nearby machines via infrared.
Because LEDs can cycle on and off millions of times per second, very high data bandwidth can be achieved.
Sustainable lighting
Efficient lighting is needed for sustainable architecture. In 2009, US Department of Energy testing results on LED lamps showed an average efficacy of 35 lm/W, below that of typical CFLs, and as low as 9 lm/W, worse than standard incandescent bulbs. A typical 13-watt LED lamp emitted 450 to 650 lumens, which is equivalent to a standard 40-watt incandescent bulb.
However, as of 2011, there are LED bulbs available as efficient as 150 lm/W and even inexpensive low-end models typically exceed 50 lm/W, so that a 6-watt LED could achieve the same results as a standard 40-watt incandescent bulb. The latter has an expected lifespan of 1,000 hours, whereas an LED can continue to operate with reduced efficiency for more than 50,000 hours.
See the chart below for a comparison of common light types:
Projected lifespan: LED 50,000 hours; CFL 10,000 hours; incandescent 1,200 hours
Watts per bulb (equivalent to 60 W incandescent): LED 10; CFL 14; incandescent 60
Cost per bulb: LED $2.00; CFL $7.00; incandescent $1.25
kWh of electricity used over 50,000 hours: LED 500; CFL 700; incandescent 3,000
Cost of electricity (at $0.10 per kWh): LED $50; CFL $70; incandescent $300
Bulbs needed for 50,000 hours of use: LED 1; CFL 5; incandescent 42
Equivalent 50,000-hour bulb expense: LED $2.00; CFL $35.00; incandescent $52.50
Total cost for 50,000 hours: LED $52.00; CFL $105.00; incandescent $352.50
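The figures above follow from simple arithmetic on the table's own assumptions (bulb prices, wattages, lifespans, and electricity at $0.10 per kWh, as given in the comparison rather than current market data); the sketch below reproduces them.

```python
import math

HOURS = 50_000
PRICE_PER_KWH = 0.10   # $/kWh, as assumed in the comparison above

# (watts, bulb cost in $, rated lifespan in hours) -- values from the comparison.
bulbs = {
    "LED":          (10, 2.00, 50_000),
    "CFL":          (14, 7.00, 10_000),
    "Incandescent": (60, 1.25, 1_200),
}

for name, (watts, bulb_cost, lifespan) in bulbs.items():
    kwh = watts * HOURS / 1000
    electricity = kwh * PRICE_PER_KWH
    bulbs_needed = math.ceil(HOURS / lifespan)
    bulb_expense = bulbs_needed * bulb_cost
    print(f"{name:13s} {kwh:5.0f} kWh  ${electricity:6.2f} electricity  "
          f"{bulbs_needed:2d} bulbs (${bulb_expense:6.2f})  total ${electricity + bulb_expense:7.2f}")
```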
Energy consumption
In the US, one kilowatt-hour (3.6 MJ) of electricity currently causes an average of emission.US DOE EIA: Electricity Emission Factors. Eia.doe.gov. Retrieved on March 16, 2012. Assuming the average light bulb is on for 10 hours a day, a 40-watt bulb will cause of emission per year. The 6-watt LED equivalent will only cause of over the same time span. A building’s carbon footprint from lighting can, therefore, be reduced by 85% by exchanging all incandescent bulbs for new LEDs if a building previously used only incandescent bulbs.
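The 85% figure follows directly from the wattage ratio; the sketch below redoes the arithmetic for a bulb used 10 hours a day, using an assumed grid emission factor (a round illustrative number, not the DOE value elided in the text above).

```python
# Annual energy and emissions for one bulb used 10 hours/day.
HOURS_PER_YEAR = 10 * 365
EMISSION_FACTOR_KG_PER_KWH = 0.6   # assumed illustrative grid average, kg CO2 per kWh

for name, watts in [("40 W incandescent", 40), ("6 W LED equivalent", 6)]:
    kwh = watts * HOURS_PER_YEAR / 1000
    co2 = kwh * EMISSION_FACTOR_KG_PER_KWH
    print(f"{name}: {kwh:.0f} kWh/year, ~{co2:.0f} kg CO2/year")

print(f"Reduction: {1 - 6/40:.0%}")   # 85%, independent of the emission factor chosen
```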
In practice, most buildings that use a lot of lighting use fluorescent lighting, which has 22% luminous efficiency compared with 5% for filaments, so changing to LED lighting would still give a 34% reduction in electrical power use and carbon emissions.
The reduction in carbon emissions depends on the source of electricity. Nuclear power in the United States produced 19.2% of electricity in 2011, so reducing electricity consumption in the U.S. reduces carbon emissions more than in France (75% nuclear electricity) or Norway (almost entirely hydroelectric).
Replacing lights that spend the most time lit results in the most savings, so LED lights in infrequently used locations bring a smaller return on investment.
Light sources for machine vision systems
Machine vision systems often require bright and homogeneous illumination, so features of interest are easier to process.
LEDs are often used for this purpose, and this is likely to remain one of their major uses until the price drops low enough to make signaling and illumination uses more widespread. Barcode scanners are the most common example of machine vision, and many low-cost products use red LEDs instead of lasers. Optical computer mice are another example of LEDs in machine vision: an LED provides an even light source on the surface for the miniature camera within the mouse. LEDs constitute a nearly ideal light source for machine vision systems for several reasons:
The size of the illuminated field is usually comparatively small and machine vision systems are often quite expensive, so the cost of the light source is usually a minor concern. However, it might not be easy to replace a broken light source placed within complex machinery, and here the long service life of LEDs is a benefit.
LED elements tend to be small and can be placed with high density over flat or even-shaped substrates (PCBs etc.) so that bright and homogeneous sources that direct light from tightly controlled directions on inspected parts can be designed. This can often be obtained with small, low-cost lenses and diffusers, helping to achieve high light densities with control over lighting levels and homogeneity. LED sources can be shaped in several configurations (spot lights for reflective illumination; ring lights for coaxial illumination; backlights for contour illumination; linear assemblies; flat, large format panels; dome sources for diffused, omnidirectional illumination).
LEDs can be easily strobed (in the microsecond range and below) and synchronized with imaging. High-power LEDs are available allowing well-lit images even with very short light pulses. This is often used to obtain crisp and sharp "still" images of quickly moving parts.
LEDs come in several different colors and wavelengths, allowing easy use of the best color for each need, where a different color may provide better visibility of features of interest. Having a precisely known spectrum allows tightly matched filters to be used to separate informative bandwidth or to reduce disturbing effects of ambient light. LEDs usually operate at comparatively low working temperatures, simplifying heat management and dissipation. This allows using plastic lenses, filters, and diffusers. Waterproof units can also easily be designed, allowing use in harsh or wet environments (food, beverage, oil industries).
Other applications
thumb|LED costume for stage performers
thumb|LED wallpaper by Meystyle
The light from LEDs can be modulated very quickly, so they are used extensively in optical fiber and free space optics communications. This includes remote controls, such as for TVs, VCRs, and computers, where infrared LEDs are often used. Opto-isolators use an LED combined with a photodiode or phototransistor to provide a signal path with electrical isolation between two circuits. This is especially useful in medical equipment where the signals from a low-voltage sensor circuit (usually battery-powered) in contact with a living organism must be electrically isolated from any possible electrical failure in a recording or monitoring device operating at potentially dangerous voltages. An optoisolator also allows information to be transferred between circuits not sharing a common ground potential.
Many sensor systems rely on light as the signal source. LEDs are often ideal as a light source due to the requirements of the sensors. LEDs are used as motion sensors, for example in optical computer mice. The Nintendo Wii's sensor bar uses infrared LEDs. Pulse oximeters use them for measuring oxygen saturation. Some flatbed scanners use arrays of RGB LEDs rather than the typical cold-cathode fluorescent lamp as the light source. Having independent control of three illuminated colors allows the scanner to calibrate itself for more accurate color balance, and there is no need for warm-up. Further, its sensors only need be monochromatic, since at any one time the page being scanned is only lit by one color of light. Since LEDs can also be used as photodiodes, they can be used for both photo emission and detection. This could be used, for example, in a touchscreen that registers reflected light from a finger or stylus. Many materials and biological systems are sensitive to, or dependent on, light. Grow lights use LEDs to increase photosynthesis in plants, and bacteria and viruses can be removed from water and other substances using UV LEDs for sterilization.
LEDs have also been used as a medium-quality voltage reference in electronic circuits. The forward voltage drop (e.g. about 1.7 V for a normal red LED) can be used instead of a Zener diode in low-voltage regulators. Red LEDs have the flattest I/V curve above the knee. Nitride-based LEDs have a fairly steep I/V curve and are useless for this purpose. Although LED forward voltage is far more current-dependent than a Zener diode, Zener diodes with breakdown voltages below 3 V are not widely available.
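As a minimal sketch of how such an LED reference might be biased, the series resistor follows from the supply voltage, the LED's forward drop and the chosen bias current. The 5 V supply and 10 mA bias current are assumed, illustrative values; only the roughly 1.7 V forward drop comes from the text.

# Illustrative sketch only: sizing the series resistor when a red LED's forward
# drop (~1.7 V, from the text) is used as a crude voltage reference.
# Supply voltage and bias current are assumptions for illustration.
v_supply = 5.0    # volts (assumed)
v_led = 1.7       # volts, typical red LED forward drop
i_bias = 0.010    # amps (10 mA, assumed bias current)

r_series = (v_supply - v_led) / i_bias
print(f"Series resistor: {r_series:.0f} ohms")  # 330 ohms under these assumptions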
The progressive miniaturization of low-voltage lighting technology, such as LEDs and OLEDs, suitable to be incorporated into low-thickness materials has fostered in recent years the experimentation on combining light sources and wall covering surfaces to be applied onto interior walls. The new possibilities offered by these developments have prompted some designers and companies, such as Meystyle, Ingo Maurer, Lomox and Philips, to research and develop proprietary LED wallpaper technologies, some of which are currently available for commercial purchase. Other solutions mainly exist as prototypes or are in the process of being further refined.
See also
History of display technology
Laser diode
LED circuit
LED lamp
LED tattoo
Li-Fi
Light-emitting electrochemical cell
List of LED failure modes
Nixie tube
OLED
Photovoltaics
Seven-segment display
SMD LED Module
Solar lamp
Solid-state lighting
Thermal management of high-power LEDs
UV curing
References
Further reading
External links
Category:Optical diodes
Category:Signage
Category:LED lamps | 18,290 | 2017-01 |
Alsace | Alsace (Alsatian: ’s Elsass; German: Elsass, spelled Elsaß before the 1996 German spelling reform) is a cultural and historical region in eastern France, now located in the administrative region of Grand Est. Alsace is located on France's eastern border and on the west bank of the upper Rhine, adjacent to Germany and Switzerland.
From 1982 until January 2016, Alsace was the smallest of 22 administrative regions in metropolitan France, consisting of the Bas-Rhin and Haut-Rhin departments. Territorial reform passed by the French legislature in 2014 resulted in the merger of the Alsace administrative region with Champagne-Ardenne and Lorraine to form Grand Est.
The predominant historical language of Alsace is Alsatian, a Germanic (mainly Alemannic) dialect also spoken across the Rhine, but today most Alsatians primarily speak French, the official language of France. The political status of Alsace has been heavily influenced by historical decisions, wars, and strategic politics. The economic and cultural capital as well as largest city of Alsace is Strasbourg. The city is the seat of several international organizations and bodies.
Etymology
The name "Alsace" can be traced to the Old High German Ali-saz or Elisaz, meaning "foreign domain". An alternative explanation is from a Germanic Ell-sass, meaning "seated on the Ill",Roland Kaltenbach: Le guide de l’Alsace, La Manufacture 1992, ISBN 2-7377-0308-5, page 36 a river in Alsace.
Alsace-Lorraine
The region, as part of Lorraine, belonged to the Holy Roman Empire; it was gradually annexed by France in the 17th century and formalized as one of the provinces of France. The Calvinist manufacturing republic of Mulhouse, known as Stadtrepublik Mülhausen, joined France and became a part of Alsace after a vote by its citizens on 4 January 1798. Alsace is frequently mentioned with and as part of Lorraine and the former duchy of Lorraine, since it was a vital part of the duchy, and later because German possession of the region as the imperial province of Alsace-Lorraine (1871–1918) was contested in the 19th and 20th centuries; France and Germany exchanged control of Alsace and parts of Lorraine four times in 75 years.
History
In prehistoric times, Alsace was inhabited by nomadic hunters.
Pre-Roman Alsace
By 1500 BC, Celts began to settle in Alsace, clearing and cultivating the land. Alsace is a plain surrounded by the Vosges mountains to the west and the Black Forest mountains to the east. This relief creates Foehn winds which, along with natural irrigation, contribute to the fertility of the soil. In an agricultural world, Alsace has always been a rich region, which explains why it suffered so many invasions and annexations in its history.
Roman Alsace
By 58 BC, the Romans had invaded and established Alsace as a center of viticulture. To protect this highly valued industry, the Romans built fortifications and military camps that evolved into various communities which have been inhabited continuously to the present day. While part of the Roman Empire, Alsace was part of Germania Superior.
Alemannic and Frankish Alsace
With the decline of the Roman Empire, Alsace became the territory of the Germanic Alemanni. The Alemanni were agricultural people, and their Germanic language formed the basis of modern-day dialects spoken along the Upper Rhine (Alsatian, Alemannian, Swabian, Swiss). Clovis and the Franks defeated the Alemanni during the 5th century AD, culminating with the Battle of Tolbiac, and Alsace became part of the Kingdom of Austrasia. Under Clovis' Merovingian successors the inhabitants were Christianized. Alsace remained under Frankish control until the Frankish realm, following the Oaths of Strasbourg of 842, was formally dissolved in 843 at the Treaty of Verdun; the grandsons of Charlemagne divided the realm into three parts. Alsace formed part of the Middle Francia, which was ruled by the eldest grandson Lothar I. Lothar died early in 855 and his realm was divided into three parts. The part known as Lotharingia, or Lorraine, was given to Lothar's son. The rest was shared between Lothar's brothers Charles the Bald (ruler of the West Frankish realm) and Louis the German (ruler of the East Frankish realm). The Kingdom of Lotharingia was short-lived, however, becoming the stem duchy of Lorraine in Eastern Francia after the Treaty of Ribemont in 880. Alsace was united with the other Alemanni east of the Rhine into the stem duchy of Swabia.
Alsace within the Holy Roman Empire
At about this time the surrounding areas experienced recurring fragmentation and reincorporations among a number of feudal secular and ecclesiastical lordships, a common process in the Holy Roman Empire. Alsace experienced great prosperity during the 12th and 13th centuries under Hohenstaufen emperors. Frederick I set up Alsace as a province (a procuratio, not a provincia) to be ruled by ministeriales, a non-noble class of civil servants. The idea was that such men would be more tractable and less likely to alienate the fief from the crown out of their own greed. The province had a single provincial court (Landgericht) and a central administration with its seat at Hagenau. Frederick II designated the Bishop of Strasbourg to administer Alsace, but the authority of the bishop was challenged by Count Rudolf of Habsburg, who received his rights from Frederick II's son Conrad IV. Strasbourg began to grow to become the most populous and commercially important town in the region. In 1262, after a long struggle with the ruling bishops, its citizens gained the status of free imperial city. A stop on the Paris-Vienna-Orient trade route, as well as a port on the Rhine route linking southern Germany and Switzerland to the Netherlands, England and Scandinavia, it became the political and economic center of the region. Cities such as Colmar and Hagenau also began to grow in economic importance and gained a kind of autonomy within the "Decapole" or "Dekapolis", a federation of ten free towns.
As in much of Europe, the prosperity of Alsace came to an end in the 14th century by a series of harsh winters, bad harvests, and the Black Death. These hardships were blamed on Jews, leading to the pogroms of 1336 and 1339. In 1349, Jews of Alsace were accused of poisoning the wells with plague, leading to the massacre of thousands of Jews during the Strasbourg pogrom. Jews were subsequently forbidden to settle in the town. An additional natural disaster was the Rhine rift earthquake of 1356, one of Europe's worst which made ruins of Basel. Prosperity returned to Alsace under Habsburg administration during the Renaissance.
thumb|Petite France, Strasbourg
Holy Roman Empire central power had begun to decline following years of imperial adventures in Italian lands, often ceding hegemony in Western Europe to France, which had long since centralized power. France began an aggressive policy of expanding eastward, first to the rivers Rhône and Meuse, and when those borders were reached, aiming for the Rhine. In 1299, the French proposed a marriage alliance between Philip IV of France's sister Blanche and Albert I of Germany's son Rudolf, with Alsace to be the dowry; however, the deal never came off. In 1307, the town of Belfort was first chartered by the Counts of Montbéliard. During the next century, France was to be militarily shattered by the Hundred Years' War, which prevented for a time any further tendencies in this direction. After the conclusion of the war, France was again free to pursue its desire to reach the Rhine and in 1444 a French army appeared in Lorraine and Alsace. It took up winter quarters, demanded the submission of Metz and Strasbourg and launched an attack on Basel.
In 1469, following the Treaty of Saint-Omer, Upper Alsace was sold by Archduke Sigismund of Austria to Charles the Bold, Duke of Burgundy. Although Charles was the nominal landlord, taxes were paid to Frederick III, Holy Roman Emperor. The latter was able to use this tax and a dynastic marriage to his advantage to gain back full control of Upper Alsace (apart from the free towns, but including Belfort) in 1477 when it became part of the demesne of the Habsburg family, who were also rulers of the empire. The town of Mulhouse joined the Swiss Confederation in 1515, where it was to remain until 1798.
By the time of the Protestant Reformation in the 16th century, Strasbourg was a prosperous community, and its inhabitants accepted Protestantism in 1523. Martin Bucer was a prominent Protestant reformer in the region. His efforts were countered by the Roman Catholic Habsburgs who tried to eradicate heresy in Upper Alsace. As a result, Alsace was transformed into a mosaic of Catholic and Protestant territories. On the other hand, Mömpelgard (Montbéliard) to the southwest of Alsace, belonging to the Counts of Württemberg since 1397, remained a Protestant enclave in France until 1793.
Incorporation into France
This situation prevailed until 1639, when most of Alsace was conquered by France so as to keep it out of the hands of the Spanish Habsburgs, who by secret treaty in 1617 had gained a clear road to their valuable and rebellious possessions in the Spanish Netherlands: the Spanish Road. Beset by enemies and seeking to gain a free hand in Hungary, the Habsburgs sold their Sundgau territory (mostly in Upper Alsace) to France in 1646, which had occupied it, for the sum of 1.2 million Thalers. When hostilities were concluded in 1648 with the Treaty of Westphalia, most of Alsace was recognized as part of France, although some towns remained independent. The treaty stipulations regarding Alsace were complex; although the French king gained sovereignty, existing rights and customs of the inhabitants were largely preserved. France continued to maintain its customs border along the Vosges mountains where it had been, leaving Alsace more economically oriented to neighbouring German-speaking lands. The German language remained in use in local administration, in schools, and at the (Lutheran) University of Strasbourg, which continued to draw students from other German-speaking lands. The 1685 Edict of Fontainebleau, by which the French king ordered the suppression of French Protestantism, was not applied in Alsace. France did endeavour to promote Catholicism; Strasbourg Cathedral, for example, which had been Lutheran from 1524 to 1681, was returned to the Catholic Church. However, compared to the rest of France, Alsace enjoyed a climate of religious tolerance.
The warfare that had partially depopulated the region created opportunities for a stream of immigrants from Switzerland, Germany, Austria, Lorraine, Savoy and other lands that continued until the mid-18th century.
thumb|right|Louis XIV receiving the keys of Strasbourg in 1681
France consolidated its hold with the 1679 Treaties of Nijmegen, which brought most remaining towns under its control. France seized Strasbourg in 1681 in an unprovoked action. These territorial changes were recognised in the 1697 Treaty of Ryswick that ended the War of the Grand Alliance.
French Revolution
thumb|Alsatian sign, 1792:
Freiheit Gleichheit Brüderlichk. od. Tod (Liberty Equality Fraternity or Death)
Tod den Tyranen (Death to Tyrants)
Heil den Völkern (Long live the Peoples)
The year 1789 brought the French Revolution and with it the first division of Alsace into the départements of Haut- and Bas-Rhin. Alsatians played an active role in the French Revolution. On 21 July 1789, after receiving news of the Storming of the Bastille in Paris, a crowd of people stormed the Strasbourg city hall, forcing the city administrators to flee and putting symbolically an end to the feudal system in Alsace. In 1792, Rouget de Lisle composed in Strasbourg the Revolutionary marching song "La Marseillaise" (as Marching song for the Army of the Rhine), which later became the anthem of France. "La Marseillaise" was played for the first time in April of that year in front of the mayor of Strasbourg Philippe-Frédéric de Dietrich. Some of the most famous generals of the French Revolution also came from Alsace, notably Kellermann, the victor of Valmy, Kléber, who led the armies of the French Republic in Vendée and Westermann, who also fought in the Vendée.
At the same time, some Alsatians were in opposition to the Jacobins and sympathetic to the invading forces of Austria and Prussia who sought to crush the nascent revolutionary republic. Many of the residents of the Sundgau made "pilgrimages" to places like Mariastein Abbey, near Basel, in Switzerland, for baptisms and weddings. When the French Revolutionary Army of the Rhine was victorious, tens of thousands fled east before it. When they were later permitted to return (in some cases not until 1799), it was often to find that their lands and homes had been confiscated. These conditions led to emigration by hundreds of families to newly vacant lands in the Russian Empire in 1803–4 and again in 1808. A poignant retelling of this event based on what Goethe had personally witnessed can be found in his long poem Hermann and Dorothea.
In response to the restoration of Napoleon I of France in 1815, Alsace along with other frontier provinces of France was occupied by foreign forces from 1815 to 1818,Veve, Thomas Dwight (1992). The Duke of Wellington and the British army of occupation in France, 1815–1818, pp. 20–21. Greenwood Press, Westport, Connecticut, United States. including over 280,000 soldiers and 90,000 horses in Bas-Rhin alone. This had grave effects on trade and the economy of the region since former overland trade routes were switched to newly opened Mediterranean and Atlantic seaports.
The population grew rapidly, from 800,000 in 1814 to 914,000 in 1830 and 1,067,000 in 1846. The combination of economic and demographic factors led to hunger, housing shortages and a lack of work for young people. Thus, it is not surprising that people left Alsace, not only for Paris – where the Alsatian community grew in numbers, with famous members such as Baron Haussmann – but also for more distant places like Russia and the Austrian Empire, to take advantage of the new opportunities offered there: Austria had conquered lands in Eastern Europe from the Ottoman Empire and offered generous terms to colonists as a way of consolidating its hold on the new territories. Many Alsatians also began to sail to the United States, settling in many areas from 1820 to 1850.Cox.net In 1843 and 1844, sailing ships bringing immigrant families from Alsace arrived at the port of New York. Some settled in Illinois, many to farm or to seek success in commercial ventures: for example, the sailing ships Sully (in May 1843) and Iowa (in June 1844) brought families who set up homes in northern Illinois and northern Indiana. Some Alsatian immigrants were noted for their roles in 19th-century American economic development.Ilgenweb.net Others ventured to Canada to settle in southwestern Ontario, notably Waterloo County.
Jews
By 1790, the Jewish population of Alsace was approximately 22,500, about 3% of the provincial population. They were highly segregated and subject to long-standing anti-Jewish regulations. They maintained their own customs, Yiddish language, and historic traditions within the tightly-knit ghettos; they adhered to Talmudic law enforced by their rabbis. Jews were barred from most cities and instead lived in villages. They concentrated in trade, services, and especially in money lending. They financed about a third of the mortgages in Alsace. Official tolerance grew during the French Revolution, with full emancipation in 1791. However, local antisemitism also increased and Napoleon turned hostile in 1806, imposing a one-year moratorium on all debts owed to Jews. In the 1830-1870 era most Jews moved to the cities, where they integrated and acculturated, as antisemitism sharply declined. By 1831, the state began paying salaries to official rabbis, and in 1846 a special legal oath for Jews was discontinued. Antisemitic local riots occasionally occurred, especially during the Revolution of 1848. The merger of Alsace into Germany in 1871-1918 lessened antisemitic violence.
Between France and Germany
thumb|Traditional costumes of Alsace
The Franco-Prussian War, which started in July 1870, saw France defeated in May 1871 by the Kingdom of Prussia and other German states. The end of the war led to the unification of Germany. Otto von Bismarck annexed Alsace and northern Lorraine to the new German Empire in 1871; as stipulated in the Treaty of Frankfurt, France ceded more than nine-tenths of Alsace and one-fourth of Lorraine, so that de jure the transfer was a cession rather than an annexation. Unlike other member states of the German federation, which had governments of their own, the new Imperial Territory of Alsace-Lorraine was under the sole authority of the Kaiser, administered directly by the imperial government in Berlin. Between 100,000 and 130,000 Alsatians (of a total population of about a million and a half) chose to remain French citizens and leave Reichsland Elsaß-Lothringen, many of them resettling in French Algeria as Pieds-Noirs. Only in 1911 was Alsace-Lorraine granted some measure of autonomy, which was manifested also in a flag and an anthem (Elsässisches Fahnenlied). In 1913, however, the Saverne Affair (French: Incident de Saverne) showed the limits of this new tolerance of the Alsatian identity.
thumb|left|upright|An Alsatian woman in traditional costume, photographed by Adolphe Braun
During the First World War, to avoid fratricidal ground fighting, many Alsatians served as sailors in the Kaiserliche Marine and took part in the naval mutinies that led to the abdication of the Kaiser in November 1918, which left Alsace-Lorraine without a nominal head of state. The sailors returned home and tried to found a republic. While Jacques Peirotes, at that time a deputy in the Landtag of Elsass-Lothringen and newly elected mayor of Strasbourg, proclaimed the fall of the German Empire and the advent of the French Republic, a self-proclaimed government of Alsace-Lorraine declared independence as the "Republic of Alsace-Lorraine". French troops entered Alsace less than two weeks later to quash the workers' strikes and remove the newly established soviets and revolutionaries from power. On the arrival of the French soldiers, many Alsatians and local Prussian/German administrators and bureaucrats cheered the re-establishment of order.Archive video Although U.S. President Woodrow Wilson had insisted that the région was self-ruling by legal status, since its constitution had bound it to the sole authority of the Kaiser and not to the German state, France tolerated no plebiscite of the kind granted by the League of Nations to some eastern German territories at this time, because Alsatians were considered by the French public to be fellow Frenchmen liberated from German rule. Germany ceded the region to France under the Treaty of Versailles.
Policies forbidding the use of German and requiring the use of French were introduced, although election propaganda was allowed to carry a German translation from 1919 to 2008. However, in order not to antagonize the Alsatians, the region was not subjected to some legal changes that had occurred in the rest of France between 1871 and 1919, such as the 1905 French law on the separation of Church and State.
thumb|upright|German stamps of Hindenburg marked with "Elsaß" (1940)
Alsace-Lorraine was occupied by Germany in 1940 during the Second World War. Although Germany never formally annexed Alsace-Lorraine, it was incorporated into the Greater German Reich, which had been restructured into Reichsgaue. Alsace was merged with Baden, and Lorraine with the Saarland, to become part of a planned Westmark. During the war, 130,000 young men from Alsace and Lorraine were inducted into the German army against their will (malgré-nous) and in some cases, the Waffen SS.Stéphane Courtois, Mark Kramer. Livre noir du Communisme: crimes, terreur, répression. Harvard University Press, 1999. p.323. ISBN 0-674-07608-7 Some of the latter were involved in war crimes such as the Oradour-sur-Glane massacre. Most of them perished on the eastern front. The few that could escape fled to Switzerland or joined the resistance. In July 1944, 1500 malgré-nous were released from Soviet captivity and sent to Algiers, where they joined the Free French Forces.
Today the territory is in certain areas subject to laws that are significantly different from those in the rest of France; these provisions are known as the local law.
In more recent years, Alsatian is again being promoted by local, national and European authorities as an element of the region's identity. Alsatian is taught in schools (though it is not mandatory) as one of the regional languages of France. German is also taught as a foreign language in local kindergartens and schools. However, the Constitution of France still requires that French be the only official language of the Republic.
Timeline
Year(s) | Event | Ruled by | Official or common language
5400–4500 BC | Bandkeramiker/Linear Pottery cultures | — | Unknown
2300–750 BC | Bell Beaker cultures | — | Proto-Celtic spoken
750–450 BC | Hallstatt culture, early Iron Age (early Celts) | — | None; Old Celtic spoken
450–58 BC | Celts/Gauls firmly secured in entire Gaul, Alsace; trade with Greece is evident (Vix) | Celts/Gauls | None; Gaulish variety of Celtic widely spoken
58/44 BC–AD 260 | Alsace and Gaul conquered by Caesar, provinciated to Germania Superior | Roman Empire | Latin; Gallic widely spoken
260–274 | Postumus founds breakaway Gallic Empire | Gallic Empire | Latin, Gallic
274–286 | Rome reconquers the Gallic Empire, Alsace | Roman Empire | Latin, Germanic (only in Argentoratum)
286–378 | Diocletian divides the Roman Empire into Western and Eastern sectors | Roman Empire | —
around 300 | Beginning of Germanic migrations to the Roman Empire | Roman Empire | —
378–395 | The Visigoths rebel, precursor to waves of German and Hun invasions | Roman Empire | —
395–436 | Death of Theodosius I, causing a permanent division between Western and Eastern Rome | Western Roman Empire | —
436–486 | Germanic invasions of the Western Roman Empire | Roman Tributary of Gaul | —
486–511 | Lower Alsace conquered by the Franks | Frankish Realm | Old Frankish, Latin
531–614 | Upper Alsace conquered by the Franks | Frankish Realm | —
614–795 | Totality of Alsace to the Frankish Kingdom | Frankish Realm | —
795–814 | Charlemagne begins reign; Charlemagne crowned Emperor of the Romans on 25 December 800 | Frankish Empire | Old Frankish
814 | Death of Charlemagne | Carolingian Empire | Old Frankish, Old High German
847–870 | Treaty of Verdun gives Alsace and Lotharingia to Lothar I | Middle Francia (Carolingian Empire) | Frankish, Old High German
870–889 | Treaty of Mersen gives Alsace to East Francia | East Francia (German Kingdom of the Carolingian Empire) | Frankish, Old High German
889–962 | Carolingian Empire breaks up into five Kingdoms; Magyars and Vikings periodically raid Alsace | Kingdom of Germany | Old High German, Frankish
962–1618 | Otto I crowned Holy Roman Emperor | Holy Roman Empire | Old High German, Modern High German (Alemannic spoken widely)
1618–1674 | Louis XIII annexes portions of Alsace during the Thirty Years' War | Holy Roman Empire | German
1674–1871 | Louis XIV annexes the rest of Alsace during the Franco-Dutch War, establishing full French sovereignty over the region | Kingdom of France | French (Alsatian and German tolerated)
1871–1918 | Franco-Prussian War causes French cession of Alsace to German Empire | German Empire | German
1919–1940 | Treaty of Versailles causes German cession of Alsace to France | France | French
1940–1944 | Nazi Germany conquers Alsace, establishing Gau Baden-Elsaß | Nazi Germany | German
1945–present | French control | France | French
Geography
Climate
Alsace has a semi-continental climate with cold and dry winters and hot summers. There is little precipitation because the Vosges protect it from the west. The city of Colmar has a sunny microclimate; it is the second driest city in France, with an annual precipitation of just 550 mm, making it ideal for vin d'Alsace (Alsatian wine).
Topography
thumb|Topographic map of Alsace
Alsace has an area of 8,283 km2, making it the smallest région of metropolitan France. It is almost four times longer than it is wide, corresponding to a plain between the Rhine in the east and the Vosges mountains in the west.
It includes the départements of Haut-Rhin and Bas-Rhin (known previously as Sundgau and Nordgau). It borders Germany on the north and the east, Switzerland and Franche-Comté on the south, and Lorraine on the west.
Several valleys are also found in the région. Its highest point is the Grand Ballon in Haut-Rhin, which reaches a height of 1426 m.
The Ried, an alluvial plain with wetlands, lies along the Rhine.
Geology
thumb|left|The Grand Ballon, southern face, seen from the valley of the Thur
Alsace is the part of the plain of the Rhine located at the west of the Rhine, on its left bank. It is a rift or graben, from the Oligocene epoch, associated with its horsts: the Vosges and the Black Forest.
The Jura Mountains, formed by slippage of the Mesozoic cover over the Triassic formations (induced by the Alpine uplift), pass through the Belfort area.
Vosges and Jura coal mining basins
Flora
Alsace contains many forests, primarily in the Vosges and in Bas-Rhin (Haguenau Forest).
Governance
Administrative divisions
350px|thumb|Administrative map of Alsace showing départements, arrondissements and communes
The Alsace region is divided into 2 departments, 13 departmental arrondissements, 75 cantons (not shown here), and 904 communes:
Department of Bas-Rhin
(Number of communes in parentheses)
Arrondissement of Haguenau (56)
Arrondissement of Molsheim (69)
Arrondissement of Saverne (128)
Arrondissement of Sélestat-Erstein (101)
Arrondissement of Strasbourg-Campagne (104). Note: the commune of Strasbourg is not inside the arrondissement of Strasbourg-Campagne but is nonetheless the seat of the Strasbourg-Campagne sous-préfecture buildings and administration.
Arrondissement of Strasbourg-Ville (1)
Arrondissement of Wissembourg (68)
Department of Haut-Rhin
(Number of communes in parentheses)
Arrondissement of Altkirch (111)
Arrondissement of Colmar (62)
Arrondissement of Guebwiller (47)
Arrondissement of Mulhouse (73)
Arrondissement of Ribeauvillé (32)
Arrondissement of Thann (52)
Politics
Alsace is one of the most conservative régions of France. It is one of just two régions in metropolitan France where the conservative right won the 2004 région elections and thus controls the Alsace Regional Council. Conservative leader Nicolas Sarkozy got his best score in Alsace (over 65%) in the second round of the French presidential elections of 2007. The president of the Regional Council is Philippe Richert, a member of the Union for a Popular Movement, elected in the 2010 regional election. The frequently changing status of the région throughout history has left its mark on modern day politics in terms of a particular interest in national identity issues.
Alsace is also one of the most pro-EU regions of France. It was one of the few French regions that voted 'yes' to the European Constitution in 2005.
Society
Demographics
Alsace's population increased to 1,868,183 in 2013. It has regularly increased over time, except in wartime, by both natural growth and migration. This growth has even accelerated at the end of the 20th century. INSEE estimates that its population will grow 12.9% to 19.5% between 1999 and 2030.
Immigration
Place of birth of residents of Alsace (at the 1968, 1975, 1982, 1990, 1999 and 2011 censuses), in %:
Census | Born in Alsace | Born in the rest of metropolitan France | Born in overseas France | Born abroad with French citizenship at birth¹ | Immigrants² (of which: Europe / Maghreb³ / rest of Africa / rest of the world)
2011 | 71.3 | 15.6 | 0.4 | 2.2 | 10.5 (4.6 / 2.4 / 1.6 / 1.9)
1999 | 73.6 | 15.4 | 0.4 | 2.1 | 8.5 (4.2 / 1.9 / 1.3 / 1.1)
1990 | 75.9 | 13.4 | 0.3 | 2.4 | 7.9
1982 | 76.8 | 12.5 | 0.3 | 2.6 | 7.8
1975 | 78.3 | 11.6 | 0.2 | 2.6 | 7.3
1968 | 81.7 | 9.8 | 0.1 | 2.8 | 5.6
¹ Persons born abroad of French parents, such as Pieds-Noirs and children of French expatriates.
² An immigrant is, by French definition, a person born in a foreign country who did not have French citizenship at birth. An immigrant may have acquired French citizenship since moving to France but is still listed as an immigrant in French statistics; conversely, persons born in France with foreign citizenship (the children of immigrants) are not listed as immigrants.
³ Morocco, Tunisia, Algeria.
Source: INSEE
Religion
thumb|right|Temple Saint-Étienne (architect : Jean-Baptiste Schacre), the main Calvinist church of Mulhouse.
Most of the Alsatian population is Roman Catholic, but, largely because of the region's German heritage, a significant Protestant community also exists: today, the EPCAAL (a Lutheran church) is France's second largest Protestant church, also forming an administrative union (UEPAL) with the much smaller Calvinist EPRAL. Unlike the rest of France, the local law in Alsace-Moselle still adheres to the Napoleonic Concordat of 1801 and the organic articles, which provide public subsidies to the Roman Catholic, Lutheran, and Calvinist churches, as well as to Jewish synagogues; religion classes in one of these faiths are compulsory in public schools. This divergence in policy from the French majority is due to the region having been part of Imperial Germany when the 1905 law separating the French church and state was instituted (for a more comprehensive history, see: Alsace-Lorraine). Controversy erupts periodically on the appropriateness of this legal disposition, as well as on the exclusion of other religions from this arrangement.
Following the Protestant Reformation, promoted by local reformer Martin Bucer, the principle of cuius regio, eius religio led to a certain amount of religious diversity in the highlands of northern Alsace. Landowners, who as "local lords" had the right to decide which religion was allowed on their land, were eager to entice populations from the more attractive lowlands to settle and develop their property. Many accepted without discrimination Catholics, Lutherans, Calvinists, Jews and Anabaptists. Multiconfessional villages appeared, particularly in the region of Alsace bossue. Alsace became one of the French regions boasting a thriving Jewish community, and the only region with a noticeable Anabaptist population. The schism of the Amish under the lead of Jacob Amman from the Mennonites occurred in 1693 in Sainte-Marie-aux-Mines. The strongly Catholic Louis XIV tried in vain to drive them from Alsace. When Napoleon imposed military conscription without religious exception, most emigrated to the American continent.
In 1707, the simultaneum forced many Reformed and Lutheran church buildings to also allow Catholic services. About 50 such "simultaneous churches" still exist in modern Alsace, though with the Catholic church's general lack of priests they tend to hold Catholic services only occasionally.
Culture
Alsace historically was part of the Holy Roman Empire and the German realm of culture. Since the 17th century, the region has passed between German and French control numerous times, resulting in a cultural blend. Germanic traits remain in the more traditional, rural parts of the culture, such as the cuisine and architecture, whereas modern institutions are totally dominated by French culture.
Symbolism
thumb|Coats of arms of Alsace.
Strasbourg
thumb|Coats of arms of Strasbourg.
Strasbourg's arms show the colours of the shield of the Bishop of Strasbourg (a band of red on a white field, also considered an inversion of the arms of the diocese). They were adopted at the end of a medieval revolt in which the burghers freed themselves from the Bishop's authority, although the Bishop retained power over the surrounding area.
Flags
thumb|Rot-un-Wiss, the historical flag
thumb|The region's flag from 1949 to 2008
There is controversy around the recognition of the Alsatian flag. The authentic historical flag is the Rot-un-Wiss; red and white are commonly found on the coats of arms of Alsatian cities (Strasbourg, Mulhouse, Sélestat...) and of many Swiss cities, especially in the region of Basel. The German region of Hesse uses a flag similar to the Rot-un-Wiss. Because it underlines the Germanic roots of the region, it was replaced in 1949 by a new "Union Jack-like" flag representing the union of the two départements, which has, however, no real historical relevance. It has since been replaced again by a slightly different one, also representing the two départements. With the purpose of "Francizing" the region, Paris has not recognized the Rot-un-Wiss. Some overzealous statesmen have called it a Nazi invention, although its origins date back to the 11th century and the red and white bannerGenealogie-bisval.net of Gérard de Lorraine (also known as Gérard d'Alsace). The Rot-un-Wiss flag is still regarded by most of the population and by the départements' parliaments as the true historical emblem of the region, and it was widely used during protests against the creation of a new "super-region" comprising Champagne-Ardenne, Lorraine and Alsace, notably on Colmar's Statue of Liberty.
Language
thumb|350px|Spatial distribution of dialects in Alsace prior to the expansion of standard French in the 20th century
Although German dialects were spoken in Alsace for most of its history, the dominant language in Alsace today is French.
The traditional language of the région is Alsatian, an Alemannic dialect of Upper German spoken on both sides of the Rhine and closely related to Swiss German. Some Frankish dialects of West Central German are also spoken in "Alsace Bossue" and in the extreme north of Alsace. Neither Alsatian nor the Frankish dialects have any form of official status, as is customary for regional languages in France, although both are now recognized as languages of France and can be chosen as subjects in lycées.
Although Alsace has been part of France multiple times in the past, the region had no direct connection with the French state for several centuries. From the end of the Roman Empire (5th century) to the French annexation (17th century), Alsace was politically part of the Germanic world.
The towns of Alsace were the first to adopt German language as their official language, instead of Latin, during the Lutheran Reform. It was in Strasbourg that German was first used for the liturgy. It was also in Strasbourg that the first German Bible was published in 1466.
From the annexation of Alsace by France in the 17th century and the language policy of the French Revolution up to 1870, knowledge of French in Alsace increased considerably. With the education reforms of the 19th century, the middle classes began to speak and write French well. The French language never really managed, however, to win over the masses, the vast majority of whom continued to speak their German dialects and write in German (which we would now call "standard German").
Between 1870 and 1918, Alsace was annexed by the German Empire in the form of an imperial province or Reichsland, and the mandatory official language, especially in schools, became High German. French lost ground to such an extent that it has been estimated that only 2% of the population spoke French fluently and only 8% had some knowledge of it (Maugue, 1970).
After 1918, French was the only language used in schools, and particularly primary schools. After much argument and discussion and after many temporary measures, a memorandum was issued by Vice-Chancellor Pfister in 1927 and governed education in primary schools until 1939.
During a reannexation by Germany (1940–1945), High German was reinstated as the language of education. The population was forced to speak German and 'French' family names were Germanized. Following the Second World War, the 1927 regulation was not reinstated and the teaching of German in primary schools was suspended by a provisional rectorial decree, which was supposed to enable French to regain lost ground. The teaching of German became a major issue, however, as early as 1946. Following World War II, the French government pursued, in line with its traditional language policy, a campaign to suppress the use of German as part of a wider Francization campaign.
In 1951, Article 10 of the Deixonne Law (Loi Deixonne) on the teaching of local languages and dialects made provision for Breton, Basque, Catalan and old Provençal, but not for Corsican, Dutch (West Flemish) or Alsatian in Alsace and Moselle. However, in a Decree of 18 December 1952, supplemented by an Order of 19 December of the same year, optional teaching of the German language was introduced in elementary schools in Communes where the language of habitual use was the Alsatian dialect.
In 1972, the Inspector General of German, Georges Holderith, obtained authorization to reintroduce German into 33 intermediate classes on an experimental basis. This teaching of German, referred to as the Holderith Reform, was later extended to all pupils in the last two years of elementary school. This reform is still largely the basis of German teaching (but not Alsatian) in elementary schools today.
It was not until 9 June 1982, with the Circulaire sur la langue et la culture régionales en Alsace (Memorandum on regional language and culture in Alsace) issued by the Vice-Chancellor of the Académie Pierre Deyon, that the teaching of German in primary schools in Alsace really began to be given more official status. The Ministerial Memorandum of 21 June 1982, known as the Circulaire Savary, introduced financial support, over three years, for the teaching of regional languages in schools and universities. This memorandum was, however, implemented in a fairly lax manner.
Both Alsatian and Standard German were for a time banned from public life (including street and city names, official administration, and the educational system). Though the ban has long been lifted and street signs today are often bilingual, Alsace-Lorraine is today very French in language and culture. Few young people speak Alsatian today, although there do still exist one or two enclaves in the Sundgau region where some older inhabitants cannot speak French and where Alsatian is still used as the mother tongue. A related Alemannic German survives on the opposite bank of the Rhine, in Baden, and especially in Switzerland. However, while French is the major language of the region, the Alsatian dialect of French is heavily influenced by German and other languages such as Yiddish in phonology and vocabulary.
This situation has spurred a movement to preserve the Alsatian language, which is perceived as endangered, a situation paralleled in other régions of France, such as Brittany or Occitania. Alsatian is now taught in French high schools. Increasingly, French is the only language used at home and at work, whereas a growing number of people have a good knowledge of standard German as a foreign language learned in school.
The constitution of the Fifth Republic states that French alone is the official language of the Republic. However, Alsatian, along with other regional languages, are recognized by the French government in the official list of languages of France. A 1999 INSEE survey counted 548,000 adult speakers of Alsatian in France, making it the second most-spoken regional language in the country (after Occitan). Like all regional languages in France, however, the transmission of Alsatian is on the decline. While 39% of the adult population of Alsace speaks Alsatian, only one in four children speaks it, and only one in ten children uses it regularly.
Although the French government signed the European Charter for Regional or Minority Languages in 1992, it never ratified the treaty and therefore no legal basis exists for any of the regional languages in France. However, visitors to Alsace can see indications of renewed political and cultural interest in the language – in Alsatian signs appearing in car-windows and on hoardings, and in new official bilingual street signs in Strasbourg and Mulhouse.
Architecture
thumb|Colmar's old town
The traditional habitat of the Alsatian lowland, as in other regions of Germany and Northern Europe, consists of houses constructed with walls in timber framing and cob and roofing in flat tiles. This type of construction is abundant in adjacent parts of Germany and can be seen in other areas of France, but its particular abundance in Alsace is due to several reasons:
The proximity to the Vosges where the wood can be found.
During periods of war and bubonic plague, villages were often burned down; so, to prevent the collapse of the upper floors, ground floors were built of stone and the upper floors in half-timbering to prevent the spread of fire.
During most of its history, a great part of Alsace was flooded by the Rhine every year. Half-timbered houses were easy to knock down and move during those times (a day was needed to dismantle and move a house, and another day to rebuild it elsewhere).
However, half-timbering was found to increase the risk of fire, which is why, from the 19th century, it began to be covered with render. In recent times, villagers started to paint the render white in accordance with Beaux-Arts movements. To discourage this, the region's authorities gave financial grants to inhabitants who painted the render in various colours, in order to return to the original style, and many inhabitants accepted (more for financial reasons than out of conviction).
Cuisine
thumb|Flammekueche
Alsatian cuisine, somewhat based on Germanic culinary traditions, is marked by the use of pork in various forms. It is perhaps mostly known for the region's wines and beers. Traditional dishes include baeckeoffe, flammekueche, choucroute, and fleischnacka. Southern Alsace, also called the Sundgau, is characterized by carpe frite (that also exists in Yiddish tradition).
Food
The festivities of the year's end involve the production of a great variety of biscuits and small cakes called bredela as well as pain d'épices (gingerbread cakes) which are baked around Christmas time.
The gastronomic symbol of the région is undoubtedly choucroute, a local variety of Sauerkraut. The word Sauerkraut in Alsatian has the form sûrkrût, the same as in other southwestern German dialects, and means "sour cabbage", as does its Standard German equivalent. The word entered the French language as choucroute. To make it, the cabbage is finely shredded, layered with salt and juniper and left to ferment in wooden barrels. Sauerkraut can be served with poultry, pork, sausage or even fish. Traditionally it is served with Strasbourg sausage or frankfurters, bacon, smoked pork or smoked Morteau or Montbéliard sausages, or a selection of other pork products. Served alongside are often roasted or steamed potatoes or dumplings.
Alsace is also well known for its foie gras made in the region since the 17th century. Additionally, Alsace is known for its fruit juices and mineral waters.
Wines
thumb|right|Riesling Grapes
Alsace is an important wine-producing région. Vins d'Alsace (Alsace wines) are mostly white. Alsace produces some of the world's most noted dry rieslings and is the only région in France to produce mostly varietal wines identified by the names of the grapes used (wine from Burgundy is also mainly varietal, but not normally identified as such), typically from grapes also used in Germany. The most notable example is Gewürztraminer.
Beers
thumb|right|Kronenbourg brewery.
Alsace is also the main beer-producing region of France, thanks primarily to breweries in and near Strasbourg. These include those of Fischer, Karlsbräu, Kronenbourg, and Heineken International. Hops are grown in Kochersberg and in northern Alsace. Schnapps is also traditionally made in Alsace, but it is in decline because home distillers are becoming less common and the consumption of traditional, strong, alcoholic beverages is decreasing.
Trivia
thumb|Alsatian stork
The stork is a main feature of Alsace and was the subject of many legends told to children. The bird practically disappeared around 1970, but re-population efforts are continuing. They are mostly found on roofs of houses, churches and other public buildings in Alsace.
The Easter Bunny was first mentioned in Georg Franck von Franckenau's De ovis paschalibus (About Easter Eggs) in 1682 referring to an Alsace tradition of an Easter Hare bringing Easter Eggs.
Use of the term "Alsatia" in English
"Alsatia", the Latin form of Alsace's name, has long ago entered the English language with the specialized meaning of "a lawless place" or "a place under no jurisdiction" - since Alsace was conceived by English people to be such. It was used into the 20th century as a term for a ramshackle marketplace, "protected by ancient custom and the independence of their patrons". As of 2007, the word is still in use among the English and Australian judiciaries with the meaning of a place where the law cannot reach: "In setting up the Serious Organised Crime Agency, the state has set out to create an Alsatia - a region of executive action free of judicial oversight," Lord Justice Sedley in UMBS v SOCA 2007.
Derived from the above, "Alsatia" was historically a cant term for the area near Whitefriars, London, which was for a long time a sanctuary. It is first known in print in the title of The Squire of Alsatia, a 1688 play written by Thomas Shadwell.
Economy
According to the Institut National de la Statistique et des Études Économiques (INSEE), Alsace had a gross domestic product of 44.3 billion euros in 2002. With a GDP per capita of €24,804, it was the second-ranked région of France, behind only Île-de-France. 68% of its jobs are in services and 25% in industry, making Alsace one of France's most industrialised régions.
Alsace is a région of varied economic activity, including:
viticulture (mostly along the Route des Vins d'Alsace between Marlenheim and Thann)
hop harvesting and brewing (half of French beer is produced in Alsace, especially in the vicinity of Strasbourg, notably in Strasbourg-Cronenbourg, Schiltigheim and Obernai)
forestry development
automobile industry (Mulhouse and Molsheim, home town of Bugatti Automobiles)
life sciences, as part of the trinational BioValley and
tourism
potassium chloride (until the late 20th century) and phosphate mining
Alsace has many international ties and 35% of firms are foreign companies (notably German, Swiss, American, Japanese, and Scandinavian).
Tourism
Having been early and always densely populated, Alsace is famous for its high number of picturesque villages, churches and castles and for the various beauties of its three main towns, in spite of the severe destruction suffered throughout five centuries of wars between France and Germany.
Alsace is furthermore famous for its vineyards (especially along the 170 km of the Route des Vins d'Alsace from Marlenheim to Thann) and the Vosges mountains with their thick and green forests and picturesque lakes.
thumb|Château du Haut-Kœnigsbourg
thumb|The main entrance of the Ouvrage Schoenenbourg from the Maginot Line
Old towns of Strasbourg, Colmar, Sélestat, Guebwiller, Saverne, Obernai, Thann
Smaller cities and villages: Molsheim, Rosheim, Riquewihr, Ribeauvillé, Kaysersberg, Wissembourg, Neuwiller-lès-Saverne, Marmoutier, Rouffach, Soultz-Haut-Rhin, Bergheim, Hunspach, Seebach, Turckheim, Eguisheim, Neuf-Brisach, Ferrette, Niedermorschwihr and the gardens of the blue house in Uttenhoffen
Churches (as main sights in otherwise less remarkable places): Thann, Andlau, Murbach, Ebersmunster, Niederhaslach, Sigolsheim, Lautenbach, Epfig, Altorf, Ottmarsheim, Domfessel, Niederhaslach, Marmoutier and the fortified church at Hunawihr
Château du Haut-Kœnigsbourg
Other castles: Ortenbourg and Ramstein (above Sélestat), Hohlandsbourg, Fleckenstein, Haut-Barr (above Saverne), Saint-Ulrich (above Ribeauvillé), Lichtenberg, Wangenbourg, the three Castles of Eguisheim, Pflixbourg, Wasigenstein, Andlau, Grand Geroldseck, Wasenbourg
Cité de l'Automobile museum in Mulhouse
Cité du train museum in Mulhouse
The EDF museum in Mulhouse
Ungersheim's "écomusée" (open-air museum) and "Bioscope" (leisure park about the environment, closed since September 2012)
Musée historique in Haguenau, largest museum in Bas-Rhin outside Strasbourg
Bibliothèque humaniste in Sélestat, one of the oldest public libraries in the world
Christmas markets in Kaysersberg, Strasbourg, Mulhouse and Colmar
Departmental Centre of the History of Families (CDHF) in Guebwiller
The Maginot Line: Ouvrage Schoenenbourg
Mount Ste Odile
Route des Vins d'Alsace (Alsace Wine Route)
Mémorial d'Alsace-Lorraine in Schirmeck
Natzweiler-Struthof, the only German concentration camp on French territory during WWII
Famous mountains: Massif du Donon, Grand Ballon, Petit Ballon, Ballon d'Alsace, Hohneck, Hartmannswillerkopf
National park: Parc naturel des Vosges du Nord
Regional park: Parc naturel régional des Ballons des Vosges (south of the Vosges)
Transportation
Roads
thumb|right|Ponts Couverts, Strasbourg
Most major car journeys are made on the A35 autoroute, which links Saint-Louis on the Swiss border to Lauterbourg on the German border.
The A4 toll road (towards Paris) begins 20 km northwest of Strasbourg and the A36 toll road towards Lyon, begins 10 km west from Mulhouse.
Spaghetti-junctions (built in the 1970s and 1980s) are prominent in the comprehensive system of motorways in Alsace, especially in the outlying areas of Strasbourg and Mulhouse. These cause a major buildup of traffic and are the main sources of pollution in the towns, notably in Strasbourg where the motorway traffic of the A35 was 170,000 per day in 2002.
At present, plans are being considered for building a new dual carriageway west of Strasbourg, which would reduce the buildup of traffic in that area by picking up north- and southbound vehicles and getting rid of the buildup outside Strasbourg. The line plans to link up the interchange of Hœrdt to the north of Strasbourg with Innenheim in the southwest. The opening is envisaged at the end of 2011, with an average usage of 41,000 vehicles a day. Estimates of the French Works Commissioner, however, raised some doubts over the value of such a project, since it would pick up only about 10% of the traffic of the A35 at Strasbourg. Paradoxically, this reversed the situation of the 1950s: at that time, the French trunk road on the left bank of the Rhine had not yet been built, so traffic would cross into Germany to use the Karlsruhe-Basel Autobahn.
To add to the buildup of traffic, the neighbouring German state of Baden-Württemberg has imposed a tax on heavy-goods vehicles using its Autobahnen. Thus, a proportion of the HGVs travelling from northern Germany to Switzerland or southern Alsace bypasses the A5 on the Alsace-Baden-Württemberg border and uses the untolled French A35 instead.
Trains
thumb|right|Place de l'Homme de Fer Tram Station
TER Alsace is the rail network serving Alsace. Its network is articulated around the city of Strasbourg. It is one of the most developed rail networks in France, financially sustained partly by the French railroad SNCF, and partly by the région Alsace.
Because the Vosges can be crossed only via the Col de Saverne and the Belfort Gap, it has been suggested that Alsace needs to open up and improve its rail links with the rest of France. Developments already under way or planned include:
the TGV Est (Paris – Strasbourg) had its first phase brought into service in June 2007, bringing down the Strasbourg-Paris trip from 4 hours to 2 hours 20 minutes. Work on its second phase, which will further bring this time down to 1 hour 50 minutes, is due to be completed in 2016.
the TGV Rhin-Rhône between Dijon and Mulhouse (opened in 2011)
a tram-train system in Mulhouse (2011)
an interconnection with the German InterCityExpress, as far as Kehl (expected 2016)
However, the abandoned Maurice-Lemaire tunnel towards Saint-Dié-des-Vosges was rebuilt as a toll road.
Waterways
Port traffic of Alsace exceeds 15 million tonnes, of which about three-quarters is centred on Strasbourg, which is the second busiest French fluvial harbour. The enlargement plan of the Rhône–Rhine Canal, intended to link up the Mediterranean Sea and Central Europe (Rhine, Danube, North Sea and Baltic Sea) was abandoned in 1998 for reasons of expense and land erosion, notably in the Doubs valley.
Air traffic
There are two international airports in Alsace:
the international airport of Strasbourg in Entzheim
the international EuroAirport Basel-Mulhouse-Freiburg, which is the seventh largest French airport in terms of traffic
Strasbourg is also two hours away by road from one of the largest European airports, Frankfurt Main, and 2 hours 30 minutes from Charles de Gaulle Airport via the direct TGV service, stopping in Terminal 2.
Cycling network
Alsace is crossed by three EuroVelo routes:
the EuroVelo 5 (Via Francigena from London to Rome/Brindisi),
the EuroVelo 6 (Véloroute des fleuves from Nantes to Budapest (H)) and
the EuroVelo 15 (Véloroute Rhin / Rhine cycle route from Andermatt (CH) to Rotterdam (NL)).
Alsace is the best-equipped region of France for cycling, with 2,000 kilometres of cycle routes. The network is of a very good standard and well signposted. All the towpaths of the canals in Alsace (canal des houillères de la Sarre, canal de la Marne au Rhin, canal de la Bruche, canal du Rhône au Rhin) are tarred.
Famous Alsatians
thumb|upright|Statue of Martin Schongauer by Frédéric Bartholdi in front of the Unterlinden Museum, Colmar
The following is a selection of people born in Alsace who have been particularly influential and/or successful in their respective field.
Arts
Jean Arp
Frédéric Auguste Bartholdi
Théodore Deck
Gustave Doré
Jean-Jacques Henner
Philip James de Loutherbourg
Master of the Drapery Studies
Marcel Marceau
Charles Munch
Claude Rich
Martin Schongauer
Marie Tussaud
Tomi Ungerer
Émile Waldteufel
William Wyler
Business
Schlumberger brothers
Literature
Sebastian Brant
Gottfried von Strassburg
Military
François Christophe de Kellermann
Jean-Baptiste Kléber
Jean Rapp
Nobility
Ludwig I of Bavaria
Religion
Martin Bucer
Wolfgang Capito
Charles de Foucauld
Herrad of Landsberg
Pope Leo IX
Thomas Murner
J. F. Oberlin
Odile of Alsace
Albert Schweitzer
Philip Jacob Spener
Jakob Wimpfeling
Sciences
Hans Bethe
Charles Friedel
Charles Frédéric Gerhardt
Alfred Kastler
Jean-Marie Lehn
Wilhelm Philippe Schimper
Charles Xavier Thomas
Charles-Adolphe Wurtz
Sports
Mehdi Baala
Valérien Ismaël
Sebastien Loeb
Yvan Muller
Thierry Omeyer
Arsène Wenger
Major communities
Original German names are given in brackets where they differ from the French names.
Bischheim
Colmar (Kolmar)
Guebwiller (Gebweiler)
Haguenau (Hagenau)
Illkirch-Graffenstaden (Illkirch-Grafenstaden)
Illzach
Lingolsheim
Mulhouse (Mülhausen)
Saint-Louis (St. Ludwig)
Saverne (Zabern)
Schiltigheim
Sélestat (Schlettstadt)
Strasbourg (Straßburg)
Wittenheim
Sister provinces
There is an accord de coopération internationale between Alsace and the following regions:Les Accords de coopération entre l’Alsace et...
Gyeongsangbuk-do, South Korea
Lower Silesia, Poland
Upper Austria, Austria
Quebec, Canada
Jiangsu, China
Moscow, Russia
Vest, Romania
See also
German place names (Alsace)
History of Jews in Alsace
Musée alsacien (Strasbourg)
Route Romane d'Alsace
Castroville, Texas
Charles Schreiner (Texas rancher)
Footnotes
Bibliography
Assall, Paul. Juden im Elsass. Zürich: Rio Verlag. ISBN 3-907668-00-6.
Das Elsass: Ein literarischer Reisebegleiter. Frankfurt a. M.: Insel Verlag, 2001. ISBN 3-458-34446-2.
Erbe, Michael (Hrsg.) Das Elsass: Historische Landschaft im Wandel der Zeiten. Stuttgart: Kohlhammer, 2002. ISBN 3-17-015771-X.
Faber, Gustav. Elsass. München: Artemis-Cicerone Kunst- und Reiseführer, 1989.
Fischer, Christopher J. Alsace to the Alsatians? Visions and Divisions of Alsatian Regionalism, 1870–1939 (Berghahn Books, 2010).
Gerson, Daniel. Die Kehrseite der Emanzipation in Frankreich: Judenfeindschaft im Elsass 1778 bis 1848. Essen: Klartext, 2006. ISBN 3-89861-408-5.
Haeberlin, Marc. Elsass, meine große Liebe. Orselina, La Tavola 2004. ISBN 3-909909-08-6 – a review of Alsace as a 'land of milk and honey'.
Herden, Ralf Bernd. Straßburg Belagerung 1870. Norderstedt: BoD, 2007, ISBN 978-3-8334-5147-8.
Mehling, Marianne (Hrsg.) Knaurs Kulturführer in Farbe Elsaß. München: Droemer Knaur, 1984.
Putnam, Ruth. Alsace and Lorraine: From Cæsar to Kaiser, 58 B.C.–1871 A.D. New York: 1915.
Schreiber, Hermann. Das Elsaß und seine Geschichte, eine Kulturlandschaft im Spannungsfeld zweier Völker. Augsburg: Weltbild, 1996.
Schwengler, Bernard. Le Syndrome Alsacien: d'Letschte? Strasbourg: Éditions Oberlin, 1989. ISBN 2-85369-096-2.
Ungerer, Tomi. Elsass. Das offene Herz Europas. Straßburg: Édition La Nuée Bleue, 2004. ISBN 2-7165-0618-3.
Ungerer, Tomi, Danièle Brison, and Tony Schneider. Die elsässische Küche. 60 Rezepte aus der Weinstube L'Arsenal. Straßburg: Édition DNA, 1994. ISBN 2-7165-0341-9.
Vogler, Bernard and Hermann Lersch. Das Elsass. Morstadt: Éditions Ouest-France, 2000. ISBN 3-88571-260-1.
External links
Official website of the Alsace regional council
Alsace : at the heart of Europe – Official French website (in English)
Tourism-Alsace.com Info from the Alsace Tourism Board
Rhine Online – life in southern Alsace and neighbouring Basel and Baden-Württemberg
Alsatourisme Tourism in Alsace
Statistics and figures on Alsace on the website of the INSEE
Alsace.net: Directory of Alsatian Websites
"Museums of Alsace"
Churches and chapels of Alsace (pictures only)
Medieval castles of Alsace (pictures only)
"Organs of Alsace"
The Alsatian Library of Mutual Credit
The Alsatian Artists
Category:Geographical, historical and cultural regions of France
Category:NUTS 2 statistical regions of the European Union
Category:Wine regions of France
Category:Germanic countries and territories
United States Army
The United States Army (USA) is the largest branch of the United States Armed Forces and performs land-based military operations. It is one of the seven uniformed services of the United States and is designated as the Army of the United States in the United States Constitution, Article 2, Section 2, Clause 1 and United States Code, Title 10, Subtitle B, Chapter 301, Section 3001. As the largest and senior branch of the U.S. military, the modern U.S. Army has its roots in the Continental Army, which was formed (14 June 1775) to fight the American Revolutionary War (1775–1783)—before the U.S. was established as a country. After the Revolutionary War, the Congress of the Confederation created the United States Army on 3 June 1784, to replace the disbanded Continental Army.Library of Congress, Journals of the Continental Congress, Volume 27 The United States Army considers itself descended from the Continental Army, and dates its institutional inception from the origin of that armed force in 1775. an excerpt from Robert Wright, The Continental Army
As a uniformed military service, the Army is part of the Department of the Army, which is one of the three military departments of the Department of Defense. The U.S. Army is headed by a senior civilian official, the Secretary of the Army (SECARMY), and by a chief military officer, the Chief of Staff of the Army (CSA), who is also a member of the Joint Chiefs of Staff. For fiscal year 2017, the projected end strength was 460,000 soldiers for the Regular Army, 335,000 for the Army National Guard (ARNG) and 195,000 for the United States Army Reserve (USAR), giving a combined-component strength of 990,000 soldiers. As a branch of the armed forces, the mission of the U.S. Army is "to fight and win our Nation's wars, by providing prompt, sustained, land dominance, across the full range of military operations and the spectrum of conflict, in support of combatant commanders." The service participates in conflicts worldwide and is the major ground-based offensive and defensive force of the United States.
Mission
The United States Army serves as the land-based branch of the U.S. Armed Forces. Section 3062 of Title 10 US Code defines the purpose of the army as:DA Pamphlet 10-1 Organization of the United States Army; Figure 1.2 Military Operations.
Preserving the peace and security and providing for the defense of the United States, the Commonwealths and possessions and any areas occupied by the United States
Supporting the national policies
Implementing the national objectives
Overcoming any nations responsible for aggressive acts that imperil the peace and security of the United States
History
Origins
thumb|left|Storming of Redoubt #10 in the Siege of Yorktown during the American Revolutionary War prompted the British government to begin negotiations, resulting in the Treaty of Paris and British recognition of the United States of America.
The Continental Army was created on 14 June 1775 by the Continental CongressCont'l Cong., Formation of the Continental Army, in 2 Journals of the Continental Congress, 1774–1789 89–90 (Library of Cong. eds., 1905). as a unified army for the colonies to fight Great Britain, with George Washington appointed as its commander.Cont'l Cong., Commission for General Washington, in 2 Journals of the Continental Congress, 1774-1789 96-7 (Library of Cong. eds., 1905).Cont'l Cong., Instructions for General Washington, in 2 Journals of the Continental Congress, 1774-1789 100-1 (Library of Cong. eds., 1905).Cont'l Cong., Resolution Changing "United Colonies" to "United States", in 5 Journals of the Continental Congress, 1774-1789 747 (Library of Cong. eds., 1905). The army was initially led by men who had served in the British Army or colonial militias and who brought much of British military heritage with them. As the Revolutionary War progressed, French aid, resources, and military thinking influenced the new army. A number of European soldiers came on their own to help, such as Friedrich Wilhelm von Steuben, who taught Prussian Army tactics and organizational skills.
The army fought numerous pitched battles and in the South in 1780–81 sometimes used the Fabian strategy and hit-and-run tactics, hitting where the British were weakest, to wear down their forces. Washington led victories against the British at Trenton and Princeton, but lost a series of battles in the New York and New Jersey campaign in 1776 and the Philadelphia campaign in 1777. With a decisive victory at Yorktown, and the help of the French, the Continental Army prevailed against the British.
After the war, though, the Continental Army was quickly given land certificates and disbanded, in a reflection of the republican distrust of standing armies. State militias became the new nation's sole ground army, with the exception of a regiment to guard the Western Frontier and one battery of artillery guarding West Point's arsenal. However, because of continuing conflict with Native Americans, it was soon realized that it was necessary to field a trained standing army. The Regular Army was at first very small; after General St. Clair's defeat at the Battle of the Wabash, it was reorganized as the Legion of the United States, established in 1792 and renamed the "United States Army" in 1796.
19th century
Early wars on the Frontier
thumb|left|General Andrew Jackson stands on the parapet of his makeshift defenses as his troops repulse attacking Highlanders during the defense of New Orleans, the final major and most one-sided battle of the War of 1812
The War of 1812, the second and last war between the US and Great Britain, had mixed results. The army did not conquer Canada, but it did destroy Native American resistance to expansion in the Old Northwest, and it validated US independence by stopping two major British invasions in 1814 and 1815. After taking control of Lake Erie in 1813, the US Army seized parts of western Upper Canada, burned York and defeated Tecumseh, which caused his Western Confederacy to collapse. Following US victories in the Canadian province of Upper Canada, British troops, who had dubbed the U.S. Army "Regulars, by God!", were able to capture and burn Washington, which was defended by militia, in 1814. The regular army, however, proved itself professional and capable of defeating the British army during the invasions of Plattsburgh and Baltimore, prompting British agreement on the previously rejected terms of a status quo ante bellum. Two weeks after a treaty was signed (but not ratified), Andrew Jackson defeated the British in the Battle of New Orleans and the Siege of Fort St. Philip, and became a national hero. U.S. troops and sailors captured HMS Cyane, Levant, and Penguin in the final engagements of the war. Per the treaty, both sides returned to the geographical status quo, and both navies kept the warships they had seized during the conflict.
The army's major campaign against the Indians was fought in Florida against the Seminoles. It took a long series of wars (1818–58) to finally defeat the Seminoles and move them to Oklahoma. The usual strategy in Indian wars was to seize control of the Indians' winter food supply, but that was of no use in Florida, where there was no winter. The second strategy was to form alliances with other Indian tribes, but that too was useless, because the Seminoles had destroyed all the other Indians when they entered Florida in the late eighteenth century.Ron Field and Richard Hook, The Seminole Wars 1818–58 (2009)
The U.S. Army fought and won the Mexican–American War (1846–1848), which was a defining event for both countries. The U.S. victory resulted in acquisition of territory that eventually became all or parts of the states of California, Nevada, Utah, Colorado, Arizona, Wyoming and New Mexico.
American Civil War
thumb|The Battle of Gettysburg, the turning point of the American Civil War
The American Civil War was the costliest war for the U.S. in terms of casualties. After most slave states, located in the southern U.S., formed the Confederate States, Confederate troops led by former U.S. Army officers mobilized a very large fraction of Southern white manpower. Forces of the United States (the "Union" or "the North") formed the Union Army, consisting of a small body of regular army units and a large body of volunteer units raised from every state, north and south, except South Carolina.
For the first two years Confederate forces did well in set battles but lost control of the border states.McPherson, James M., ed. The Atlas of the Civil War, (Philadelphia, PA, 2010) The Confederates had the advantage of defending a very large country in an area where disease caused twice as many deaths as combat. The Union pursued a strategy of seizing the coastline, blockading the ports, and taking control of the river systems. By 1863 the Confederacy was being strangled. Its eastern armies fought well, but the western armies were defeated one after another until the Union forces captured New Orleans in 1862 along with the Tennessee River. In the famous Vicksburg Campaign of 1862–63, Ulysses Grant seized the Mississippi River and cut off the Southwest. Grant took command of Union forces in 1864 and after a series of battles with very heavy casualties, he had Lee under siege in Richmond as William T. Sherman captured Atlanta and marched through Georgia and the Carolinas. The Confederate capital was abandoned in April 1865 and Lee subsequently surrendered his army at Appomattox Court House; all other Confederate armies surrendered within a few months.
The war remains the deadliest conflict in American history, resulting in the deaths of 620,000 soldiers. Based on 1860 census figures, 8% of all white males aged 13 to 43 died in the war, including 6.4% in the North and 18% in the South.Maris Vinovskis (1990). "Toward a social history of the American Civil War: exploratory essays". Cambridge University Press. p. 7. ISBN 0-521-39559-3
Later 19th century
Following the Civil War, the U.S. Army had the mission of containing western tribes of Native Americans on their reservations. There were many forts set up, and several campaigns. U.S. Army troops also occupied several Southern states during the Reconstruction Era to protect freedmen.
The key battles of the Spanish–American War of 1898 were fought by the Navy. Using mostly new volunteers, the U.S. Army defeated Spain in land campaigns in Cuba and played the central role in suppressing a rebellion in the Philippines.
20th century
thumb|left|U.S. Army troops assault a German bunker, France, circa 1918
Starting in 1910, the army began acquiring fixed-wing aircraft.Cragg, Dan, ed., The Guide to Military Installations, Stackpole Books, Harrisburg, 1983, p. 272. In 1910, civil war broke out in Mexico, with peasant rebels fighting government soldiers. The army was deployed to American towns near the border to protect lives and property. In 1916, Pancho Villa, a major rebel leader, attacked Columbus, New Mexico, prompting a U.S. intervention in Mexico that lasted until 7 February 1917. U.S. troops clashed with the rebels and with Mexican federal troops until 1918. The United States joined World War I in 1917 on the side of Britain, France, Russia, Italy and other allies. U.S. troops were sent to the Western Front and were involved in the last offensives that ended the war. With the armistice in November 1918, the army once again decreased its forces.
thumb|American soldiers hunt Japanese infiltrators during the Bougainville Campaign
The United States joined World War II in December 1941 after the Japanese attack on Pearl Harbor. On the European front, U.S. Army troops formed a significant portion of the forces that captured North Africa and Sicily, and later fought in Italy. On D-Day, June 6, 1944, and in the subsequent liberation of Europe and defeat of Nazi Germany, millions of U.S. Army troops played a central role. In the Pacific War, U.S. Army soldiers participated alongside the United States Marine Corps in capturing the Pacific Islands from Japanese control. Following the Axis surrenders in May (Germany) and August (Japan) of 1945, army troops were deployed to Japan and Germany to occupy the two defeated nations. Two years after World War II, the Army Air Forces separated from the army to become the United States Air Force in September 1947, after decades of efforts to achieve independence. In 1948, the army was desegregated by order of President Harry S. Truman.
thumb|Men of the 3rd Battalion, 504th Parachute Infantry Regiment, part of the 82nd Airborne Division, advance in a snowstorm behind a tank, January 1945
The end of World War II set the stage for the East–West confrontation known as the Cold War. With the outbreak of the Korean War, concerns over the defense of Western Europe rose. Two corps, V and VII, were reactivated under Seventh United States Army in 1950 and American strength in Europe rose from one division to four. Hundreds of thousands of U.S. troops remained stationed in West Germany, with others in Belgium, the Netherlands and the United Kingdom, until the 1990s in anticipation of a possible Soviet attack.
thumb|left|U.S. Army soldiers look upon an atomic bomb test of Operation Buster-Jangle at the Nevada Test Site during the Korean War
During the Cold War, American troops and their allies fought Communist forces in Korea and Vietnam. The Korean War began in 1950, when North Korea invaded the South; because the Soviets were boycotting the U.N. Security Council at the time, they were unable to veto the U.N. intervention. Under a United Nations umbrella, hundreds of thousands of U.S. troops fought to prevent the takeover of South Korea by North Korea and, later, to invade the northern nation. After repeated advances and retreats by both sides, and the PRC People's Volunteer Army's entry into the war, the Korean Armistice Agreement returned the peninsula to the status quo in 1953.
The Vietnam War is often regarded as a low point for the U.S. Army due to the use of drafted personnel, the unpopularity of the war with the American public, and frustrating restrictions placed on the military by American political leaders. While American forces had been stationed in the Republic of Vietnam since 1959, in intelligence and advisory/training roles, they were not deployed in large numbers until 1965, after the Gulf of Tonkin Incident. American forces effectively established and maintained control of the "traditional" battlefield; however, they struggled to counter the guerrilla hit-and-run tactics of the communist Viet Cong and the North Vietnamese Army. On a tactical level, American soldiers (and the U.S. military as a whole) did not lose a sizable battle.Woodruff, Mark. Unheralded Victory: The Defeat of the Viet Cong and the North Vietnamese Army 1961–1973 (Arlington, VA: Vandamere Press, 1999).
thumb|right|A U.S. Army infantry patrol moves up to assault the last North Vietnamese Army position at Dak To, South Vietnam during Operation Hawthorne
During the 1960s the Department of Defense continued to scrutinize the reserve forces and to question the number of divisions and brigades as well as the redundancy of maintaining two reserve components, the Army National Guard and the Army Reserve.Wilson, John B. (1997). Maneuver and Firepower: The Evolution of Divisions and Separate Brigades. Washington, DC: Center of Military History, Chapter XII, for references see Note 48. In 1967 Secretary of Defense Robert McNamara decided that 15 combat divisions in the Army National Guard were unnecessary and cut the number to 8 divisions (1 mechanized infantry, 2 armored, and 5 infantry), but increased the number of brigades from 7 to 18 (1 airborne, 1 armored, 2 mechanized infantry, and 14 infantry). The loss of the divisions did not sit well with the states. Their objections included the inadequate maneuver element mix for those that remained and the end to the practice of rotating divisional commands among the states that supported them. Under the proposal, the remaining division commanders were to reside in the state of the division base. No reduction, however, in total Army National Guard strength was to take place, which convinced the governors to accept the plan. The states reorganized their forces accordingly between 1 December 1967 and 1 May 1968.
thumb|left|M1 Abrams move out before the Battle of Al Busayyah during the Gulf War
The Total Force Policy was adopted by Chief of Staff of the Army General Creighton Abrams in the aftermath of the Vietnam War and involves treating the three components of the army – the Regular Army, the Army National Guard and the Army Reserve – as a single force.Army National Guard Constitution Believing that no U.S. president should be able to take the United States (and more specifically the U.S. Army) to war without the support of the American people, General Abrams intertwined the structure of the three components of the army in such a way as to make extended operations impossible without the involvement of both the Army National Guard and the Army Reserve.Carafano, James, Total Force Policy and the Abrams Doctrine: Unfulfilled Promise, Uncertain Future, Foreign Policy Research Institute, 3 February 2005.
The 1980s was mostly a decade of reorganization. The army converted to an all-volunteer force with greater emphasis on training and technology. The Goldwater-Nichols Act of 1986 created unified combatant commands bringing the army together with the other four military services under unified, geographically organized command structures. The army also played a role in the invasions of Grenada in 1983 (Operation Urgent Fury) and Panama in 1989 (Operation Just Cause).
thumb|right|219px|U.S. Army soldiers prepare to take La Comandancia in the El Chorrillo neighborhood of Panama City during the United States invasion of Panama
By 1989 Germany was nearing reunification and the Cold War was coming to a close. Army leadership reacted by starting to plan for a reduction in strength. By November 1989 Pentagon briefers were laying out plans to reduce army end strength by 23%, from 750,000 to 580,000.An Army at War: Change in the Midst of Conflict, p. 515, via Google Books A number of incentives such as early retirement were used. In 1990 Iraq invaded its smaller neighbor, Kuwait, and U.S. land forces quickly deployed to assure the protection of Saudi Arabia. In January 1991 Operation Desert Storm commenced; the U.S.-led coalition deployed over 500,000 troops, the bulk of them from U.S. Army formations, to drive out Iraqi forces. The campaign ended in total victory, as Western coalition forces routed the Iraqi Army, organized along Soviet lines, in just one hundred hours.
After Operation Desert Storm, the army did not see major combat operations for the remainder of the 1990s but did participate in a number of peacekeeping activities. In 1990 the Department of Defense issued guidance for "rebalancing" after a review of the Total Force Policy,Section 1101, National Defense Authorization Act for Fiscal Years 1990 and 1991, Department of Defense Interim Report to Congress, September 1990. (See "rebalancing" as used in finance.) but in 2004, Air War College scholars concluded the guidance would reverse the Total Force Policy which is an "essential ingredient to the successful application of military force."Downey, Chris, The Total Force Policy and Effective Force, Air War College, 19 March 2004.
21st century
thumb|Army Rangers from the 1st Ranger Battalion conduct a MOUT exercise at Fort Bragg, North Carolina.
On September 11, 2001, 53 Army civilians (47 employees and six contractors) and 22 soldiers were among the 125 victims killed at the Pentagon when American Airlines Flight 77, commandeered by five al-Qaeda hijackers, slammed into the western side of the building as part of the September 11 attacks. Lieutenant General Timothy Maude was the highest-ranking military official killed at the Pentagon, and the most senior U.S. Army officer killed by foreign action since the death of Lieutenant General Simon B. Buckner, Jr. on June 18, 1945, in the Battle of Okinawa during World War II."9/11 a day of remembrance". The Star Press. Muncie, Indiana.
thumb|left|Army Rangers take part in a raid during operation in Nahr-e Saraj, Afghanistan.
In response to the September 11 attacks, and as part of the Global War on Terror, U.S. and NATO forces invaded Afghanistan in October 2001, displacing the Taliban government. The U.S. Army also led the combined U.S. and allied invasion of Iraq in 2003; it served as the primary source of ground forces, with its ability to sustain both short- and long-term deployment operations. In the following years the mission changed from conflict between regular militaries to counterinsurgency, resulting in the deaths of more than 4,000 U.S. service members (as of March 2008) and injuries to thousands more. By Gilbert Burnham, Shannon Doocy, Elizabeth Dzeng, Riyadh Lafta, and Les Roberts. A supplement to the second Lancet study. An estimated 23,813 insurgents were killed in Iraq between 2003 and 2011.597 killed in 2003, 23,984 killed from 2004 through 2009 (with the exceptions of May 2004 and March 2009), 652 killed in May 2004, 45 killed in March 2009, 676 killed in 2010, 451 killed in 2011 (with the exception of February), thus giving a total of 26,405 dead.
The army's chief modernization plan was the Future Combat Systems (FCS) program. Many systems were canceled, and the remainder were swept into the BCT modernization program. In response to budget sequestration in 2013, the army planned to shrink to a size not seen since the WWII buildup. Projected FY15 expenditure for Army research, development and acquisition fell from the $32 billion projected in 2012 to the $21 billion expected in 2014.Drwiega, Andrew. "Missions Solutions Summit: Army Leaders Warn of Rough Ride Ahead" Rotor&Wing, June 4, 2014. Accessed: June 8, 2014.
Organization
thumb|Organization chartDA Pam 10-1 Organization of the United States Army; Figure 1-1. '"Army Organizations Execute Specific Functions and Assigned Missions"
Army components
The task of organizing the U.S. Army commenced in 1775.Organization of the United States Army: America's Army 1775 – 1995, DA PAM 10–1. Headquarters, Department of the Army, Washington, 14 June 1994. In the first one hundred years of its existence, the United States Army was maintained as a small peacetime force to man permanent forts and perform other non-wartime duties such as engineering and construction works. During times of war, the U.S. Army was augmented by the much larger United States Volunteers which were raised independently by various state governments. States also maintained full-time militias which could also be called into the service of the army.
thumb|left|U.S. general officers, World War II, Europe
By the twentieth century, the U.S. Army had mobilized the U.S. Volunteers on four separate occasions during the major wars of the nineteenth century. During World War I, the "National Army" was organized to fight the conflict, replacing the concept of U.S. Volunteers. It was demobilized at the end of World War I and replaced by the Regular Army, the Organized Reserve Corps, and the state militias. In the 1920s and 1930s, the "career" soldiers were known as the "Regular Army", with the "Enlisted Reserve Corps" and "Officer Reserve Corps" available to fill vacancies when needed.
In 1941, the "Army of the United States" was founded to fight World War II. The Regular Army, Army of the United States, the National Guard, and Officer/Enlisted Reserve Corps (ORC and ERC) existed simultaneously. After World War II, the ORC and ERC were combined into the United States Army Reserve. The Army of the United States was re-established for the Korean War and Vietnam War and was demobilized upon the suspension of the draft.
Currently, the army is divided into the Regular Army, the Army Reserve, and the Army National Guard. The army is also divided into major branches such as Air Defense Artillery, Infantry, Aviation, Signal Corps, Corps of Engineers, and Armor. Before 1903 members of the National Guard were considered state soldiers unless federalized (i.e., activated) by the President. Since the Militia Act of 1903 all National Guard soldiers have held dual status: as National Guardsmen under the authority of the governor of their state or territory and, when activated, as a reserve of the U.S. Army under the authority of the President.
Since the adoption of the total force policy, in the aftermath of the Vietnam War, reserve component soldiers have taken a more active role in U.S. military operations. For example, Reserve and Guard units took part in the Gulf War, peacekeeping in Kosovo, Afghanistan, and the 2003 invasion of Iraq.
Army commands and army service component commands
Headquarters, United States Department of the Army (HQDA):
Army Commands (name | current commander | location of headquarters):
United States Army Forces Command (FORSCOM) | GEN Robert B. Abrams | Fort Bragg, North Carolina
United States Army Materiel Command (AMC) | GEN Gustave F. Perna | Redstone Arsenal, Alabama
United States Army Training and Doctrine Command (TRADOC) | GEN David G. Perkins | Fort Eustis, Virginia
Army Service Component Commands (name | current commander | location of headquarters):
United States Army Africa (USARAF) / Ninth Army / United States Army Southern European Task Force (http://armypubs.army.mil/epubs/pdf/go1204.pdf) | MG Darryl A. Williams | Caserma Ederle, Vicenza, Italy
United States Army Central (ARCENT) / Third Army | LTG James L. Terry | Shaw Air Force Base, South Carolina
United States Army Europe (USAREUR) / Seventh Army (US) | LTG Ben Hodges | Clay Kaserne, Wiesbaden, Germany
United States Army North (ARNORTH) / Fifth Army | LTG Perry L. Wiggins | Joint Base San Antonio, Texas
United States Army Pacific (USARPAC) | GEN Robert B. Brown | Fort Shafter, Hawaii
United States Army South (ARSOUTH) / Sixth Army | MG Clarence K.K. Chinn | Joint Base San Antonio, Texas
Surface Deployment and Distribution Command (SDDC) | MG Susan A. Davidson | Scott AFB, Illinois
United States Army Cyber Command (ARCYBER) (http://www.apd.army.mil/pdffiles/go1402.pdf; GO 2016-11, http://www.apd.army.mil/Search/ePubsSearch/ePubsSearchForm.aspx?x=DAGO) | LTG Edward C. Cardon | Fort Belvoir, Virginia
United States Army Space and Missile Defense Command / United States Army Strategic Command (USASMDC/ARSTRAT) | LTG David Mann | Redstone Arsenal, Alabama
United States Army Special Operations Command (USASOC) | LTG Charles T. Cleveland | Fort Bragg, North Carolina
Operational Force Headquarters (name | current commander | location of headquarters):
Eighth Army (EUSA) (http://www.apd.army.mil/pdffiles/go1202.pdf) | LTG Thomas S. Vandal | Yongsan Garrison, South Korea
Direct reporting units (name | current commander | location of headquarters):
Arlington National Cemetery and Soldiers' and Airmen's Home National Cemetery (http://www.apd.army.mil/pdffiles/go1475.pdf) | Jack E. Lechner | Arlington, Virginia
United States Army Marketing and Engagement Brigade (USAMEB) (http://www.apd.army.mil/Search/ePubsSearch/ePubsSearchDownloadPage.aspx?docID=0902c8518006b6e4) | COL Brian M. Cavanaugh | Fort Knox, Kentucky
Second Army (The Relationship of U. S. Army Cyber Command and Second Army, U.S. Army Cyber Command, last accessed 12 January 2015) | LTG Edward C. Cardon | Fort Belvoir, Virginia
United States Army Acquisition Support Center (USASC) (http://www.apd.army.mil/pdffiles/go0633.pdf) | Craig A. Spisak | Fort Belvoir, Virginia
United States Army Civilian Human Resources Agency (CHRA) (DAGO 2017-03, DESIGNATION OF THE UNITED STATES ARMY CIVILIAN HUMAN RESOURCES AGENCY AND ITS SUBORDINATE ELEMENTS AS DIRECT REPORTING UNIT, apd.army.mil, dated 4 January 2017, last accessed 13 January 2017) | Barbara P. Panther | Washington, D.C.
United States Army Corps of Engineers (USACE) | LTG Todd T. Semonite (Lieutenant General Todd T. Semonite, Biography article, undated. Retrieved 28 June 2016.) | Washington, D.C.
United States Army Criminal Investigation Command (USACIDC) | MG David E. Quantock | Quantico, Virginia
United States Army Human Resources Command (HRC) (DAGO 2017-04, DESIGNATION OF UNITED STATES ARMY HUMAN RESOURCES COMMAND AND ITS SUBORDINATE ELEMENTS AS DIRECT REPORTING UNIT, apd.army.mil, dated 4 January 2017, last accessed 13 January 2017) | MG Thomas C. Seamands | Alexandria, Virginia
United States Army Installation Management Command (IMCOM) | LTG Kenneth R. Dahl | Joint Base San Antonio, Texas
United States Army Intelligence and Security Command (INSCOM) | MG George J. Franz III | Fort Belvoir, Virginia
United States Army Medical Command (MEDCOM) | LTG Nadja West | Joint Base San Antonio, Texas
United States Army Military District of Washington (MDW) | MG Bradley A. Becker | Fort Lesley J. McNair, Washington, D.C.
United States Army Recruiting Command (USAREC) (AR 10-87, ARMY COMMANDS, ARMY SERVICE COMPONENT COMMANDS, AND DIRECT REPORTING UNITS, apd.army.mil, dated 4 September 2007, last accessed 13 January 2017) | MG Jeffrey J. Snow | Fort Knox, Kentucky
United States Army Test and Evaluation Command (ATEC) | MG Peter D. Utley | Alexandria, Virginia
United States Army War College (AWC) (http://www.apd.army.mil/pdffiles/go1390.pdf) | MG William Rapp | Carlisle, Pennsylvania
United States Military Academy (USMA) | LTG Robert L. Caslen | West Point, New York
Source: U.S. Army organizationOrganization, United States Army
Structure
See Structure of the United States Army for detailed treatment of the history, components, administrative and operational structure, and the branches and functional areas of the Army.
thumb|right|U.S. Army soldiers of 1st Battalion, 175th Infantry Regiment, Maryland Army National Guard conduct an urban cordon and search exercise as part of the army readiness and training evaluation program in the mock city of Balad at Fort Dix, NJ
thumb|left|U.S. soldiers from the 6th Infantry Regiment taking up positions on a street corner during a foot patrol in Ramadi, Iraq
The United States Army is made up of three components: the active component, the Regular Army; and two reserve components, the Army National Guard and the Army Reserve. Both reserve components are primarily composed of part-time soldiers who train once a month, known as battle assemblies or unit training assemblies (UTAs), and conduct two to three weeks of annual training each year. Both the Regular Army and the Army Reserve are organized under Title 10 of the United States Code, while the National Guard is organized under Title 32. While the Army National Guard is organized, trained and equipped as a component of the U.S. Army, when it is not in federal service it is under the command of individual state and territorial governors; the District of Columbia National Guard, however, reports to the U.S. President, not the district's mayor, even when not federalized. Any or all of the National Guard can be federalized by presidential order and against the governor's wishes.Perpich v. Department of Defense, 496 U.S. 334 (1990)
The army is led by a civilian Secretary of the Army, who has the statutory authority to conduct all the affairs of the army under the authority, direction and control of the Secretary of Defense. The Chief of Staff of the Army, who is the highest-ranked military officer in the army, serves as the principal military adviser and executive agent for the Secretary of the Army, i.e., its service chief; and as a member of the Joint Chiefs of Staff, a body composed of the service chiefs from each of the four military services belonging to the Department of Defense who advise the President of the United States, the Secretary of Defense, and the National Security Council on operational military matters, under the guidance of the Chairman and Vice Chairman of the Joint Chiefs of Staff. In 1986, the Goldwater–Nichols Act mandated that operational control of the services follows a chain of command from the President to the Secretary of Defense directly to the unified combatant commanders, who have control of all armed forces units in their geographic or functional area of responsibility. Thus, the secretaries of the military departments (and their respective service chiefs underneath them) only have the responsibility to organize, train and equip their service components. The army provides trained forces to the combatant commanders for use as directed by the Secretary of Defense.
thumb|The 1st Cavalry Division's combat aviation brigade performs a mock charge with the horse detachment
By 2013, the army shifted to six geographical commands that align with the six geographical unified combatant commands (COCOM):
United States Army Central headquartered at Shaw Air Force Base, South Carolina
United States Army North headquartered at Fort Sam Houston, Texas
United States Army South headquartered at Fort Sam Houston, Texas
United States Army Europe headquartered at Clay Kaserne, Wiesbaden, Germany
United States Army Pacific headquartered at Fort Shafter, Hawaii
United States Army Africa headquartered at Vicenza, Italy
thumb|right|U.S. Army Special Forces soldiers from the 3rd Special Forces Group patrol a field in the Gulistan district of Farah, Afghanistan
The army also transformed its base unit from divisions to brigades. Division lineage will be retained, but the divisional headquarters will be able to command any brigade, not just brigades that carry their divisional lineage. The central part of this plan is that each brigade will be modular, i.e., all brigades of the same type will be exactly the same, and thus any brigade can be commanded by any division. As specified before the 2013 end-strength re-definitions, the three major types of ground combat brigades are:
Armored brigades, with strength of 4,743 troops as of 2014.
Stryker brigades, with strength of 4,500 troops as of 2014.
Infantry brigades, with strength of 4,413 troops as of 2014.
In addition, there are combat support and service support modular brigades. Combat support brigades include aviation (CAB) brigades, which come in heavy and light varieties, fires (artillery) brigades (now transformed into division artillery), and battlefield surveillance brigades. Combat service support brigades include sustainment brigades, which come in several varieties and serve the standard support role in an army.
Combat maneuver organizations
To track the effects of the 2018 budget cuts, see Transformation of the United States Army#Divisions and Brigades
The U.S. Army currently consists of 10 active divisions and one deployable division headquarters (7th Infantry Division) as well as several independent units. The force is in the process of contracting after several years of growth. In June 2013, the Army announced plans to downsize to 32 active combat brigade teams by 2015 to match a reduction in active duty strength to 490,000 soldiers. Army Chief of Staff Raymond Odierno has projected that by 2018 the Army will eventually shrink to "450,000 in the active component, 335,000 in the National Guard and 195,000 in U.S. Army Reserve."
Within the Army National Guard and United States Army Reserve there are a further 8 divisions, over 15 maneuver brigades, additional combat support and combat service support brigades, and independent cavalry, infantry, artillery, aviation, engineer, and support battalions. The Army Reserve in particular provides virtually all psychological operations and civil affairs units.
United States Army Forces Command (FORSCOM)
Direct reporting units (name | current commander | location of headquarters):
I Corps | LTG Stephen R. Lanza | Joint Base Lewis-McChord, Washington
III Corps | LTG Sean MacFarland | Fort Hood, Texas
XVIII Airborne Corps | LTG Stephen J. Townsend | Fort Bragg, North Carolina
First Army (FUSA) | LTG Stephen Twitty | Rock Island Arsenal, Illinois
United States Army Reserve Command (USARC) (http://www.apd.army.mil/pdffiles/go1102.pdf) | LTG Charles D. Luckey | Fort Bragg, North Carolina
Combat maneuver units aligned under FORSCOM (name | headquarters | subunits | subordinate to):
1st Armored Division | Fort Bliss, Texas | 1 Stryker Brigade Combat Team (BCT), 2 armored BCTs, 1 Division Artillery (DIVARTY), 1 Combat Aviation Brigade (CAB), and 1 sustainment brigade | III Corps
1st Cavalry Division | Fort Hood, Texas | 3 armored BCTs, 1 DIVARTY, 1 CAB, and 1 sustainment brigade | III Corps
1st Infantry Division | Fort Riley, Kansas | 2 armored BCTs, 1 DIVARTY, 1 CAB, and 1 sustainment brigade | III Corps
3d Cavalry Regiment | Fort Hood, Texas | 4 Stryker squadrons, 1 fires squadron, 1 engineer squadron, and 1 support squadron (overseen by the 1st Cavalry Division; Army announces Afghanistan deployment for 1,000 soldiers, ArmyTimes, by Michelle Tan, dated 2 March 2016, last accessed 3 October 2016) | III Corps
3rd Infantry Division | Fort Stewart, Georgia | 1 infantry BCT, 1 armored BCT, 1 DIVARTY, 1 CAB, and 1 sustainment brigade, as well as the 48th Infantry Brigade Combat Team of the Georgia Army National Guard | XVIII Airborne Corps
4th Infantry Division | Fort Carson, Colorado | 1 infantry BCT, 1 Stryker BCT, 1 armored BCT, 1 DIVARTY, 1 CAB, and 1 sustainment brigade | III Corps
7th Infantry Division | Joint Base Lewis-McChord, Washington | Administrative control of 2 Stryker BCTs and 1 DIVARTY of the 2nd Infantry Division, as well as the 81st Armored Brigade Combat Team of the Washington and California Army National Guard | I Corps
10th Mountain Division | Fort Drum, New York | 3 infantry BCTs (including the 86th Infantry Brigade Combat Team (Mountain) of the Vermont Army National Guard), 1 DIVARTY, 1 CAB, and 1 sustainment brigade | XVIII Airborne Corps
82nd Airborne Division | Fort Bragg, North Carolina | 3 airborne infantry BCTs, 1 airborne DIVARTY, 1 CAB, and 1 airborne sustainment brigade | XVIII Airborne Corps
101st Airborne Division | Fort Campbell, Kentucky | 3 air assault infantry BCTs, 1 air assault DIVARTY, 1 CAB, and 1 sustainment brigade | XVIII Airborne Corps
Combat maneuver units aligned under other organizations (name | headquarters | subunits | subordinate to):
2nd Cavalry Regiment | Rose Barracks, Vilseck, Germany | 4 Stryker squadrons, 1 engineer squadron, 1 fires squadron, and 1 support squadron | U.S. Army Europe
2nd Infantry Division | Camp Red Cloud, South Korea | 2 Stryker BCTs, 1 armored BCT, 1 DIVARTY (under administrative control of 7th ID), 1 sustainment brigade, and 1 mechanized brigade from the ROK Army | Eighth Army
25th Infantry Division | Schofield Barracks, Hawaii | 3 infantry BCTs, 1 Stryker BCT, 1 DIVARTY, 1 CAB, and 1 sustainment brigade | U.S. Army Pacific
100th Infantry Battalion | Fort Shafter, Hawaii | Infantry companies spread throughout Hawaii, American Samoa, Guam, and Saipan (the only combat maneuver unit of the Army Reserve) | 9th MSC under U.S. Army Pacific
173rd Airborne Brigade Combat Team | Camp Ederle, Vicenza, Italy | 3 airborne infantry battalions (including 1st Battalion, 143rd Infantry Regiment of the Texas Army National Guard), 1 airborne field artillery battalion, 1 cavalry squadron, 1 airborne engineer battalion (54th Brigade Engineer Battalion (BEB), effective 17 June 2015; http://www.eur.army.mil/SkySoldiers/STB/index.html), and 1 airborne support battalion | U.S. Army Europe
Combat maneuver units aligned under the Army National Guard, until federalized (name | locations | subunits):
28th Infantry Division | Pennsylvania, Ohio, and Maryland | 2nd Infantry BCT, 55th Armored BCT, 56th Stryker BCT, and 28th Combat Aviation Brigade
29th Infantry Division | Virginia, Maryland, North Carolina, and Florida | 30th Armored BCT, 53rd Infantry BCT, 116th Infantry BCT, and 29th CAB
34th Infantry Division | Minnesota, Wisconsin, Iowa, and Idaho | 1st Armored BCT, 2nd Infantry BCT, 32nd Infantry BCT, 116th Cavalry BCT, and 34th CAB
35th Infantry Division | Kansas, Missouri, Illinois, Georgia, and Arkansas | 33rd Infantry BCT, 39th Infantry BCT, and 35th CAB
36th Infantry Division | Texas, Oklahoma, Louisiana, and Mississippi | 45th Infantry BCT, 56th Infantry BCT, 72nd Infantry BCT, 155th Armored BCT, 256th Infantry BCT, 36th CAB, and the 3rd BCT (Regular Army) (formerly of the 10th Mountain Division)
38th Infantry Division | Indiana, Michigan, Ohio, and Tennessee | 37th Infantry BCT, 76th Infantry BCT, 278th Armored Cavalry Regiment, and 38th CAB
40th Infantry Division | California, Oregon, Washington, and Hawaii | 29th Infantry BCT, 41st Infantry BCT, 79th Infantry BCT, and 40th CAB
42nd Infantry Division | New York, New Jersey, and Vermont | 27th Infantry BCT, 50th Infantry BCT, and 42nd CAB
For a description of US Army tactical organizational structure, see: a US context, and also a global context.
Special operations forces
United States Army Special Operations Command (Airborne) (USASOC):USASOC Headquarters Fact Sheet, from the USASOC official website, last accessed 8 October 2016
USASOC units (name | headquarters | structure and purpose):
1st Special Forces Command (Airborne) | Fort Bragg, North Carolina | The command manages seven special forces groups (five active duty and two national guard), two military information support groups, one civil affairs brigade, and one sustainment brigade.
Army Special Operations Aviation Command | Ft. Bragg, North Carolina | Organizes, mans, trains, resources and equips Army special operations aviation units to provide responsive special operations aviation support to Special Operations Forces (SOF), consisting of five units: the USASOC Flight Company (UFC), Special Operations Training Battalion (SOATB), Technology Applications Program Office (TAPO), Systems Integration Management Office (SIMO), and the 160th Special Operations Aviation Regiment (160th SOAR)
75th Ranger Regiment | Fort Benning, Georgia | Three maneuver battalions and one special troops battalion of elite airborne infantry specializing in direct action raids and airfield seizures.
John F. Kennedy Special Warfare Center and School | Ft. Bragg, North Carolina | Selection and training for Special Forces, Civil Affairs, and Military Information Support Operations soldiers, consisting of five distinct units and one directorate: the 1st Special Warfare Training Group (Airborne), Special Warfare Education Group (Airborne), Special Warfare Medical Group (Airborne), Special Forces Warrant Officer Institute, David K. Thuma Noncommissioned Officers Academy, and the Directorate of Training and Doctrine.
1st Special Forces Operational Detachment-Delta | Ft. Bragg, North Carolina | Elite special operations and counter-terrorism unit under the control of Joint Special Operations Command.
Units aligned under Special Forces Command (name | headquarters | structure and purpose):
Special Forces Groups | Various | There are seven special forces groups: 1st SFG(A), 3rd SFG(A), 5th SFG(A), 7th SFG(A), 10th SFG(A), 19th SFG(A) (ARNG), and 20th SFG(A) (ARNG), which are trained for unconventional warfare, foreign internal defense, special reconnaissance, direct action, and counter-terrorism missions.
Military Information Support Groups | Ft. Bragg, North Carolina | Performs psychological operations via two operational groups, the 4th MISG(A) and 8th MISG(A)
95th Civil Affairs Brigade (Airborne) | Ft. Bragg, North Carolina | Enables military commanders and U.S. Ambassadors to improve relationships with various stakeholders in a local area to meet the objectives of the U.S. government via five operational battalions: 91st CA BN, 92nd CA BN, 96th CA BN, 97th CA BN, and 98th CA BN.
528th Sustainment Brigade (Airborne) | Ft. Bragg, North Carolina | Provides combat service support and combat health support units for all USASOC elements via the 112th Special Operations Signal Battalion (Airborne), a Special Troops Battalion, an ARSOF Support Operations Cell, six ARSOF Liaison Elements, and two Medical Role II teams.
Personnel
These are the U.S. Army ranks authorized for use today and their equivalent NATO designations. Although no living officer currently holds the rank of General of the Army, it is still authorized by Congress for use in wartime.
Commissioned officers
There are several paths to becoming a commissioned officerFrom the Future Soldiers Web Site. including the United States Military Academy, Reserve Officers' Training Corps, and Officer Candidate School. Regardless of which road an officer takes, the insignia are the same. Certain professions, including physicians, pharmacists, nurses, lawyers, and chaplains, are commissioned directly into the army and are designated by insignia unique to their staff community.
Most army commissioned officers are promoted based on an "up or out" system. The Defense Officer Personnel Management Act of 1980 establishes rules for timing of promotions and limits the number of officers that can serve at any given time.
Army regulations call for addressing all personnel with the rank of general as 'General (last name)' regardless of the number of stars. Likewise, both colonels and lieutenant colonels are addressed as 'Colonel (last name)' and first and second lieutenants as 'Lieutenant (last name).'
US DoD pay grade | Title | Abbreviation | NATO code
O-1 | Second Lieutenant | 2LT | OF-1
O-2 | First Lieutenant | 1LT | OF-1
O-3 | Captain | CPT | OF-2
O-4 | Major | MAJ | OF-3
O-5 | Lieutenant Colonel | LTC | OF-4
O-6 | Colonel | COL | OF-5
O-7 | Brigadier General | BG | OF-6
O-8 | Major General | MG | OF-7
O-9 | Lieutenant General | LTG | OF-8
O-10 | General | GEN | OF-9
O-11 | General of the Army | GA | OF-10
O-12 | General of the Armies of the United States | GAS | OF-11
Note: General of the Army is reserved for wartime.
Warrant officers
Warrant officers are single-track specialty officers with subject matter expertise in a particular area. They are initially appointed as warrant officers (in the rank of WO1) by the Secretary of the Army, but receive their commission upon promotion to chief warrant officer two (CW2).
By regulation, warrant officers are addressed as 'Mr. (last name)' or 'Ms. (last name)' by senior officers, and as "sir" or "ma'am" by all enlisted personnel. However, many personnel address warrant officers as 'Chief (last name)' within their units regardless of rank.
US DoD pay grade | Title | Abbreviation | NATO rank
W-1 | Warrant Officer 1 | WO1 | WO-1
W-2 | Chief Warrant Officer 2 | CW2 | WO-2
W-3 | Chief Warrant Officer 3 | CW3 | WO-3
W-4 | Chief Warrant Officer 4 | CW4 | WO-4
W-5 | Chief Warrant Officer 5 | CW5 | WO-5
Enlisted personnel
Sergeants and corporals are referred to as NCOs, short for non-commissioned officers.From the Enlisted Soldiers Descriptions Web Site. This distinguishes corporals from the more numerous specialists, who have the same pay grade but do not exercise leadership responsibilities.
Privates (E1 and E2) and privates first class (E3) are addressed as 'Private (last name)', specialists as 'Specialist (last name)', corporals as 'Corporal (last name)', and sergeants, staff sergeants, sergeants first class, and master sergeants all as 'Sergeant (last name).' First sergeants are addressed as 'First Sergeant (last name)', and sergeants major and command sergeants major are addressed as 'Sergeant Major (last name)'.
US DoD pay grade | Title | Abbreviation | NATO code
E-1 | Private (no insignia) | PV1 ¹ | OR-1
E-2 | Private | PV2 ¹ | OR-2
E-3 | Private First Class | PFC | OR-3
E-4 | Specialist | SPC ² | OR-4
E-4 | Corporal | CPL | OR-4
E-5 | Sergeant | SGT | OR-5
E-6 | Staff Sergeant | SSG | OR-6
E-7 | Sergeant First Class | SFC | OR-7
E-8 | Master Sergeant | MSG | OR-8
E-8 | First Sergeant | 1SG | OR-8
E-9 | Sergeant Major | SGM | OR-9
E-9 | Command Sergeant Major | CSM | OR-9
E-9 | Sergeant Major of the Army | SMA | OR-9
¹ PVT is also used as an abbreviation for both private ranks when pay grade need not be distinguished.http://www.apd.army.mil/pdffiles/r600_20.pdf
² SP4 is sometimes encountered instead of SPC for specialist. This is a holdover from when there were additional specialist ranks at pay grades E-5 to E-7.
Training
thumb|left|Rangers practice fast roping techniques from an MH-47 during an exercise at Fort Bragg
Training in the U.S. Army is generally divided into two categories – individual and collective. Basic training consists of 10 weeks for most recruits, followed by Advanced Individual Training (AIT), where they receive training for their military occupational specialties (MOS). Some MOSs require 14 to 20 weeks of One Station Unit Training (OSUT), which combines Basic Training and AIT. The length of AIT varies by the MOS of the soldier, and some highly technical training may require many months (e.g., foreign language translators). Depending on the needs of the army, Basic Combat Training for combat arms soldiers is conducted at a number of locations, but two of the longest-running are the Armor School and the Infantry School, both at Fort Benning, Georgia.
Following their basic and advanced training at the individual level, soldiers may choose to continue their training and apply for an "additional skill identifier" (ASI). The ASI allows the army to take a wide-ranging MOS and focus it into a more specific specialty. For example, a combat medic, whose duties are to provide pre-hospital emergency treatment, may receive ASI training to become a cardiovascular specialist, a dialysis specialist, or even a licensed practical nurse. For commissioned officers, training includes pre-commissioning training, either at USMA, via ROTC, or by completing OCS. After commissioning, officers undergo branch-specific training at the Basic Officer Leaders Course (formerly called the Officer Basic Course), which varies in time and location according to their future assignments. Further career development is available through the Army Correspondence Course Program.
thumb|upright=1.1|U.S. Army soldiers familiarizing with the latest INSAS 1B1 during exercise Yudh Abhyas 2015
Collective training at the unit level takes place at the unit's assigned station, but the most intensive training at higher echelons is conducted at the three combat training centers (CTC): the National Training Center (NTC) at Fort Irwin, California; the Joint Readiness Training Center (JRTC) at Fort Polk, Louisiana; and the Joint Multinational Readiness Center (JMRC) at the Hohenfels Training Area in Hohenfels, Germany. ARFORGEN is the Army Force Generation process, approved in 2006 to meet the need to continuously replenish forces for deployment, at unit level and for other echelons as required by the mission. Individual-level replenishment still requires training at a unit level, which is conducted at the continental US (CONUS) replacement center at Fort Bliss, in New Mexico and Texas, before their individual deployment.
Equipment
Weapons
thumb|Lockheed Martin Terminal High Altitude Area Defense (THAAD) system used by the army for ballistic missile protection
Individual weapons
The army employs various individual weapons to provide light firepower at short ranges. The most common weapon used by the army is the M4 carbine, a compact variant of the M16 rifle,M4. U.S. Army Fact Files along with the 7.62×51mm variant of the FN SCAR used by Army Rangers. The primary sidearm in the U.S. Army is the 9 mm M9 pistol; the M11 pistol is also used. Both handguns are to be replaced by the M17Army picks Sig Sauer to replace M9 service pistol through the Modular Handgun System program.Individual Weapons Future Innovations, Project Manager Soldier Weapons. Soldiers are also equipped with various hand grenades, such as the M67 fragmentation grenade and M18 smoke grenade.
Many units are supplemented with a variety of specialized weapons, including the M249 SAW (Squad Automatic Weapon), to provide suppressive fire at the fire-team level.M249, U.S. Army Fact Files Indirect fire is provided by the M203 grenade launcher. The M1014 Joint Service Combat Shotgun or the Mossberg 590 Shotgun are used for door breaching and close-quarters combat. The M14EBR is used by designated marksmen. Snipers use the M107 Long Range Sniper Rifle, the M2010 Enhanced Sniper Rifle, and the M110 Semi-Automatic Sniper Rifle.
thumb|right|250px|American troops of the 28th Infantry Division march down the Avenue des Champs-Élysées, Paris, in the Victory Parade.
thumb|left|3rd Infantry Division soldiers manning an M1A1 Abrams in Iraq
Crew-served weapons
The army employs various crew-served weapons to provide heavy firepower at ranges exceeding that of individual weapons.
The M240 is the US Army's standard medium machine gun.M240, U.S. Army Fact Files The M2 heavy machine gun is generally used as a vehicle-mounted machine gun. Similarly, the 40 mm MK 19 grenade machine gun is mainly used by motorized units.MK 19, U.S. Army Fact Files
The US Army uses three types of mortar for indirect fire support when heavier artillery may not be appropriate or available. The smallest of these is the 60 mm M224, normally assigned at the infantry company level.M224, U.S. Army Fact Files At the next higher echelon, infantry battalions are typically supported by a section of 81 mm M252 mortars.M252, U.S. Army Fact Files The largest mortar in the army's inventory is the 120 mm M120/M121, usually employed by mechanized units.M120, U.S. Army Fact Files
Fire support for light infantry units is provided by towed howitzers, including the 105 mm M119A1M119, U.S. Army Fact Files and the 155 mm M777 (which will replace the M198).
The US Army utilizes a variety of direct-fire rockets and missiles to provide infantry with an anti-armor capability. The AT4 is an unguided projectile that can destroy armor and bunkers at ranges up to 500 meters. The FIM-92 Stinger is a shoulder-launched, heat-seeking anti-aircraft missile. The FGM-148 Javelin and BGM-71 TOW are anti-tank guided missiles.
Vehicles
thumb|A US soldier on patrol with the support of a Humvee vehicle
The army's most common vehicle is the High Mobility Multipurpose Wheeled Vehicle (HMMWV), commonly called the Humvee, which is capable of serving as a cargo/troop carrier, weapons platform, and ambulance, among many other roles.HMMWV, U.S. Army Fact Files While the army operates a wide variety of combat support vehicles, one of the most common is the HEMTT family of vehicles. The M1A2 Abrams is the army's main battle tank,Abrams, U.S. Army Fact Files while the M2A3 Bradley is the standard infantry fighting vehicle.Bradley, United States Army Fact Files Other vehicles include the Stryker,Stryker, U.S. Army Fact Files the M113 armored personnel carrier,M113, U.S. Army Fact Files and multiple types of Mine Resistant Ambush Protected (MRAP) vehicles.
The Pentagon has bought 25,000 MRAP vehicles since 2007 in 25 variants through rapid acquisition, with no long-term plans for the platforms. The Army plans to divest 7,456 vehicles and retain 8,585. Of the vehicles the Army will keep, 5,036 will be put in storage, 1,073 will be used for training, and the remainder will be spread across the active force. The Oshkosh M-ATV will be retained in the greatest numbers, at 5,681 vehicles, as it is smaller and lighter than other MRAPs and better suited to off-road mobility. The next most retained vehicle will be the Navistar MaxxPro Dash, with 2,633 vehicles, plus 301 MaxxPro ambulances. Thousands of other MRAPs, such as the Cougar, BAE Caiman, and larger MaxxPros, will be disposed of."Majority of MRAPs to be scrapped or stored", Military Times, 5 January 2014
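The size of that remaining share is not stated explicitly; the following is a brief illustrative calculation (an editorial sketch based only on the figures cited above, not on the cited source) showing how the retained fleet would break down:

    # Illustrative arithmetic only, using the retention figures cited above.
    retained = 8585       # vehicles the Army plans to keep
    in_storage = 5036     # to be placed in storage
    for_training = 1073   # to be used for training
    active_force = retained - in_storage - for_training
    print(active_force)   # -> 2476 vehicles spread across the active force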
The U.S. Army's principal artillery weapons are the M109A6 Paladin self-propelled howitzerPaladin, Army.mil and the M270 Multiple Launch Rocket System (MLRS),MLRS, U.S. Army Fact Files both mounted on tracked platforms and assigned to heavy mechanized units.
While the United States Army Aviation Branch operates a few fixed-wing aircraft, it mainly operates several types of rotary-wing aircraft. These include the AH-64 Apache attack helicopter,Apache, U.S. Army Fact Files the OH-58D Kiowa Warrior armed reconnaissance/light attack helicopter,Kiowa, U.S. Army Fact Files the UH-60 Black Hawk utility tactical transport helicopter,Blackhawk, U.S. Army Fact Files and the CH-47 Chinook heavy-lift transport helicopter.Chinook, U.S. Army Fact Files Restructuring plans call for a reduction of 750 aircraft and a consolidation from seven types to four.
Under the Johnson-McConnell agreement of 1966, the Army agreed to limit its fixed-wing aviation role to administrative mission support (light unarmed aircraft which cannot operate from forward positions). For UAVs, the Army is deploying at least one company of MQ-1C Gray Eagle drones to each active Army division.Kyle Jahner, Army Times (3:35 p.m. EST January 8, 2015) "Army to build dedicated drone runway at Fort Bliss"
Uniforms
The Army Combat Uniform, or ACU, currently features a digital Universal Camouflage Pattern (UCP) and is designed for use in woodland, desert, and urban environments. However, soldiers operating in Afghanistan are being issued a fire-resistant ACU with the "MultiCam" pattern, officially known as Operation Enduring Freedom Camouflage Pattern or "OCP".
thumb|left|The Ranger Honor Platoon marching in the former service uniform.
thumb|right|An element of the 18th Infantry Regiment, wearing ASUs, representing the United States at the 2010 Victory Day commemoration in Moscow.
The standard garrison service uniform is the Army Service Uniform, which functions as both a garrison uniform (when worn with a white shirt and necktie) and a dress uniform (when worn with a white shirt and either a necktie, for parades, or a bow tie, for "after six" or black-tie events).
Berets
The U.S. Army's black beret is no longer worn with the new ACU for garrison duty, having been permanently replaced with the patrol cap. After years of complaints that it was not well suited to most work conditions, Army Chief of Staff General Martin Dempsey eliminated it for wear with the ACU in June 2011. Berets are still worn by soldiers assigned to units in jump status, whether or not the individual wearer is parachute-qualified (maroon beret); by members of the 75th Ranger Regiment and the Airborne and Ranger Training Brigade (tan beret); and by Special Forces (rifle green beret). These soldiers may also wear their berets with the Army Service Uniform for non-ceremonial functions. Unit commanders may still direct the wear of patrol caps in these units in training environments or motor pools.
Tents
The army has relied heavily on tents to provide the various facilities needed while on deployment. The most common tent uses for the military are as temporary barracks (sleeping quarters), DFAC buildings (dining facilities), forward operating bases (FOBs), after-action review (AAR) facilities, tactical operations centers (TOC), morale, welfare, and recreation (MWR) facilities, and security checkpoints. Most of these tents are set up and operated through the support of the Natick Soldier Systems Center.
The U.S. Army is beginning to use a more modern tent called the deployable rapid assembly shelter or DRASH. In 2008, DRASH became part of the Army's Standard Integrated Command Post System.NG, DHS Technologies to support SICPS/TMSS United Press International
The Tomb of the Unknowns is a tomb at which soldiers walk a post and render salutes every day, in any weather.http://www.arlingtoncemetery.mil/Explore/Tomb-of-the-Unknown-Soldier
3D printing
In November 2012, the United States Army developed a tactical 3D printing capability to allow it to rapidly manufacture critical components on the battlefield.
See also
America's Army (Video games for recruitment)
Army CHESS (Computer Hardware Enterprise Software and Solutions)
Army National Guard
Comparative military ranks
History of the United States Army
List of active United States military aircraft
List of former United States Army medical units
List of wars involving the United States
Military–industrial complex
Officer Candidate School (United States Army)
Reserve Officers' Training Corps and Junior Reserve Officers' Training Corps
Soldier's Creed
Structure of the United States Army
Timeline of United States military operations
Transformation of the United States Army
U.S. Army Combat Arms Regimental System
U.S. Army Regimental System
United States Military Academy
United States Army Basic Training
United States Army Center of Military History
United States Volunteers
Vehicle markings of the United States military
Warrant Officer Candidate School (United States Army)
Notes and references
Further reading
Bailey, Beth. America's Army: Making the All-Volunteer Force (2009) excerpt
Kretchik, Walter E. U.S. Army Doctrine: From the American Revolution to the War on Terror (University Press of Kansas; 2011) 392 pages; studies military doctrine in four distinct eras: 1779–1904, 1905–1944, 1944–1962, and 1962 to the present.
Woodward, David R. The American Army and the First World War (Cambridge University Press, 2014). 484 pp. online review
External links
Army.mil – United States Army official website
Army.mil/photos – United States Army featured photos
GoArmy.com – official recruiting site
U.S. Army Collection – Missouri History Museum
Finding Aids for researching the US Army (compiled by the United States Army Center of Military History)
US-militaria.com – The US Army during the Second World War
Category:Uniformed services of the United States
Category:Military units and formations established in 1775
Category:1775 establishments in the Thirteen Colonies
Clothing
thumb|Clothing in history, showing (from top) Egyptians, Ancient Greeks, Romans, Byzantines, Franks, and 13th through 15th century Europeans.
Clothing (also called clothes and attire) is fiber and textile material worn on the body. The wearing of clothing is mostly restricted to human beings and is a feature of nearly all human societies. The amount and type of clothing worn depend on body type and on social and geographic considerations. Some clothing types can be gender-specific.
Physically, clothing serves many purposes: it can serve as protection from the elements, and can enhance safety during hazardous activities such as hiking and cooking. It protects the wearer from rough surfaces, rash-causing plants, insect bites, splinters, thorns and prickles by providing a barrier between the skin and the environment. Clothes can insulate against cold or hot conditions. Further, they can provide a hygienic barrier, keeping infectious and toxic materials away from the body. Clothing also provides protection from ultraviolet radiation.
Origin of clothing
There is no easy way to determine when clothing was first developed, but some information has been inferred by studying lice. The body louse specifically lives in clothing, and diverged from head lice about 170 millennia ago, suggesting that clothing existed at that time."...Lice Indicates Early Clothing Use ...", Mol Biol Evol (2011) 28 (1): 29-32. Another theory is that modern humans are the only survivors of several species of primates who may have worn clothes, and that clothing may have been used as long as 650 millennia ago. Other louse-based estimates put the introduction of clothing at around 42,000–72,000 B.P.
Functions
thumb|A baby wearing many items of winter clothing: headband, cap, fur-lined coat, shawl and sweater
The most obvious function of clothing is to improve the comfort of the wearer, by protecting the wearer from the elements. In hot climates, clothing provides protection from sunburn or wind damage, while in cold climates its thermal insulation properties are generally more important. Shelter usually reduces the functional need for clothing. For example, coats, hats, gloves, and other superficial layers are normally removed when entering a warm home, particularly if one is residing or sleeping there. Similarly, clothing has seasonal and regional aspects, so that thinner materials and fewer layers of clothing are generally worn in warmer seasons and regions than in colder ones.
Clothing performs a range of social and cultural functions, such as individual, occupational and gender differentiation, and social status. Alternative ISBN 978-0-404-14721-1 (This work is one of the earliest attempts at an overview of the psycho-social and practical functions of clothing) In many societies, norms about clothing reflect standards of modesty, religion, gender, and social status. Clothing may also function as a form of adornment and an expression of personal taste or style.
Clothing has historically been made from a very wide variety of materials, ranging from leather and furs to woven cloth to elaborate and exotic natural and synthetic fabrics. Not all body coverings are regarded as clothing. Articles carried rather than worn (such as purses), worn on a single part of the body and easily removed (scarves), worn purely for adornment (jewelry), or those that serve a function other than protection (eyeglasses), are normally considered accessories rather than clothing, as are footwear and hats.
Clothing protects against many things that might injure the uncovered human body. Clothes protect people from the elements, including rain, snow, wind, and other weather, as well as from the sun. However, clothing that is too sheer, thin, small, tight, etc., offers less protection. Clothes also reduce risk during activities such as work or sport. Some clothing protects from specific environmental hazards, such as insects, noxious chemicals, weather, weapons, and contact with abrasive substances. Conversely, clothing may protect the environment from the clothing wearer, as with doctors wearing medical scrubs.
Humans have shown extreme inventiveness in devising clothing solutions to environmental hazards. Examples include: space suits, air conditioned clothing, armor, diving suits, swimsuits, bee-keeper gear, motorcycle leathers, high-visibility clothing, and other pieces of protective clothing. Meanwhile, the distinction between clothing and protective equipment is not always clear-cut, since clothes designed to be fashionable often have protective value and clothes designed for function often consider fashion in their design. Wearing clothes also has social implications. They cover parts of the body that social norms require to be covered, act as a form of adornment, and serve other social purposes.
Scholarship
Although dissertations on clothing and its function appeared from the 19th century onwards, as colonising countries dealt with new environments, concerted scientific research into the psycho-social, physiological and other functions of clothing (e.g. protective, cartage) occurred in the first half of the 20th century, with publications such as J. C. Flügel's Psychology of Clothes in 1930 and Newburgh's seminal Physiology of Heat Regulation and The Science of Clothing in 1949. By 1968, the field of environmental physiology had advanced and expanded significantly, but the science of clothing in relation to environmental physiology had changed little. While considerable research has since occurred and the knowledge base has grown significantly, the main concepts remain unchanged; indeed, Newburgh's book is still cited by contemporary authors, including those attempting to develop thermoregulatory models of clothing development.
Cultural aspects
Gender differentiation
left|thumb|150px|Former 3rd Duke of Fife wearing a traditional Scottish kilt. (1984)
right|thumb|150px|Former US Secretary of State Condoleezza Rice and Turkish President Abdullah Gül both wearing Western-style business suits.
In most cultures, gender differentiation of clothing is considered appropriate. The differences are in styles, colors and fabrics.
In Western societies, skirts, dresses and high-heeled shoes are usually seen as women's clothing, while neckties are usually seen as men's clothing. Trousers were once seen as exclusively male clothing, but are nowadays worn by both genders. Male clothes are often more practical (that is, they can function well under a wide variety of situations), but a wider range of clothing styles are available for females. Males are typically allowed to bare their chests in a greater variety of public places. It is generally acceptable for a woman to wear traditionally male clothing, while the converse is unusual.
In some cultures, sumptuary laws regulate what men and women are required to wear. Islam requires women to wear more modest forms of attire, usually hijab. What qualifies as "modest" varies in different Muslim societies. However, women are usually required to cover more of their bodies than men are. Articles of clothing Muslim women wear for modesty range from the head-scarf to the burqa.
Men may sometimes choose to wear men's skirts such as togas or kilts, especially on ceremonial occasions. Such garments were (in previous times) often worn as normal daily clothing by men.
Social status
thumb|upright| A Barong Tagalog made for a wedding ceremony.
thumb|left|Alim Khan's bemedaled robe sends a social message about his wealth, status, and power
In some societies, clothing may be used to indicate rank or status. In ancient Rome, for example, only senators could wear garments dyed with Tyrian purple. In traditional Hawaiian society, only high-ranking chiefs could wear feather cloaks and palaoa, or carved whale teeth. Under the Travancore Kingdom of Kerala, (India), lower caste women had to pay a tax for the right to cover their upper body. In China, before establishment of the republic, only the emperor could wear yellow. History provides many examples of elaborate sumptuary laws that regulated what people could wear. In societies without such laws, which includes most modern societies, social status is instead signaled by the purchase of rare or luxury items that are limited by cost to those with wealth or status. In addition, peer pressure influences clothing choice.
Religion
thumb|upright|Nicolas Trigault, in Ming-style Confucian hanfu, by Rubens.
thumb|left|Muslim men traditionally wear white robes and a cap during prayers
Religious clothing might be considered a special case of occupational clothing. Sometimes it is worn only during the performance of religious ceremonies. However, it may also be worn every day as a marker of special religious status.
For example, Jains and Muslim men wear unstitched cloth pieces when performing religious ceremonies. The unstitched cloth signifies unified and complete devotion to the task at hand, with no digression. Sikhs wear a turban as it is a part of their religion.
The cleanliness of religious garments in Eastern religions such as Hinduism, Sikhism, Buddhism, Islam and Jainism is of paramount importance, since it indicates purity.
Clothing figures prominently in the Bible where it appears in numerous contexts, the more prominent ones being: the story of Adam and Eve who made coverings for themselves out of fig leaves, Joseph's cloak, Judah and Tamar, Mordecai and Esther. Furthermore, the priests officiating in the Temple had very specific garments, the lack of which made one liable to death.
In Islamic traditions, women are required to wear long, loose, opaque outer dress when stepping out of the home. This dress code was democratic (for all women regardless of status) and for protection from the scorching sun. The Quran says this about husbands and wives: "...They are clothing/covering (Libaas) for you; and you for them" (chapter 2:187).
Jewish ritual also requires rending of one's upper garment as a sign of mourning. This practice is found in the Bible when Jacob hears of the apparent death of his son Joseph.
Origin and history
First recorded use
According to archaeologists and anthropologists, the earliest clothing likely consisted of fur, leather, leaves, or grass that were draped, wrapped, or tied around the body. Knowledge of such clothing remains inferential, since clothing materials deteriorate quickly compared to stone, bone, shell and metal artifacts. Archeologists have identified very early sewing needles of bone and ivory from about 30,000 BC, found near Kostenki, Russia in 1988.Hoffecker, J., Scott, J., Excavations In Eastern Europe Reveal Ancient Human Lifestyles, University of Colorado at Boulder News Archive, March 21, 2002, colorado.edu Dyed flax fibers that could have been used in clothing, dating back to 36,000 BP, have been found in a prehistoric cave in the Republic of Georgia. Supporting Online Material
Scientists are still debating when people started wearing clothes. Ralf Kittler, Manfred Kayser and Mark Stoneking, anthropologists at the Max Planck Institute for Evolutionary Anthropology, have conducted a genetic analysis of human body lice that suggests clothing originated quite recently, around 170,000 years ago. Body lice are an indicator of clothes-wearing, since most humans have sparse body hair and lice thus require human clothing to survive. Their research suggests the invention of clothing may have coincided with the northward migration of modern Homo sapiens away from the warm climate of Africa, thought to have begun between 50,000 and 100,000 years ago. However, a second group of researchers using similar genetic methods estimates that clothing originated around 540,000 years ago. For now, the date of the origin of clothing remains unresolved.
Making clothing
Some human cultures, such as the various people of the Arctic Circle, traditionally make their clothing entirely of prepared and decorated furs and skins. Other cultures supplemented or replaced leather and skins with cloth: woven, knitted, or twined from various animal and vegetable fibers.
Although modern consumers may take the production of clothing for granted, making fabric by hand is a tedious and labor-intensive process. The textile industry was the first to be mechanized – with the powered loom – during the Industrial Revolution.
Different cultures have evolved various ways of creating clothes out of cloth. One approach simply involves draping the cloth. Many people wore, and still wear, garments consisting of rectangles of cloth wrapped to fit – for example, the dhoti for men and the sari for women in the Indian subcontinent, the Scottish kilt or the Javanese sarong. The clothes may simply be tied up, as is the case of the first two garments; or pins or belts hold the garments in place, as in the case of the latter two. The precious cloth remains uncut, and people of various sizes or the same person at different sizes can wear the garment.
Another approach involves cutting and sewing the cloth, but using every bit of the cloth rectangle in constructing the clothing. The tailor may cut triangular pieces from one corner of the cloth, and then add them elsewhere as gussets. Traditional European patterns for men's shirts and women's chemises take this approach.
Modern European fashion treats cloth much less conservatively, typically cutting in such a way as to leave various odd-shaped cloth remnants. Industrial sewing operations sell these as waste; home sewers may turn them into quilts.
In the thousands of years that humans have spent constructing clothing, they have created an astonishing array of styles, many of which have been reconstructed from surviving garments, photos, paintings, mosaics, etc., as well as from written descriptions. Costume history serves as a source of inspiration to current fashion designers, as well as a topic of professional interest to costumers constructing for plays, films, television, and historical reenactment.
Contemporary clothing
Western dress code
The Western dress code has changed over the past 500+ years. The mechanization of the textile industry made many varieties of cloth widely available at affordable prices. Styles have changed, and the availability of synthetic fabrics has changed the definition of "stylish". In the latter half of the 20th century, blue jeans became very popular, and are now worn to events that normally demand formal attire. Activewear has also become a large and growing market.
The licensing of designer names was pioneered by designers like Pierre Cardin in the 1960s and has been a common practice within the fashion industry from about the 1970s. Among the more popular include Marc Jacobs and Gucci, named for Marc Jacobs and Guccio Gucci respectively.
Spread of western styles
By the early years of the 21st century, western clothing styles had, to some extent, become international styles. This process began hundreds of years earlier, during the periods of European colonialism. The process of cultural dissemination has continued over the centuries as Western media corporations have penetrated markets throughout the world, spreading Western culture and styles. Fast fashion clothing has also become a global phenomenon. These garments are less expensive, mass-produced Western clothing. Donated used clothing from Western countries is also delivered to people in poor countries by charity organizations.
Ethnic and cultural heritage
People may wear ethnic or national dress on special occasions or in certain roles or occupations. For example, most Korean men and women have adopted Western-style dress for daily wear, but still wear traditional hanboks on special occasions, like weddings and cultural holidays. Items of Western dress may also appear worn or accessorized in distinctive, non-Western ways. A Tongan man may combine a used T-shirt with a Tongan wrapped skirt, or tupenu.
Sport and activity
thumb|upright|Fashion shows are often the source of the latest styles and trends in clothing.
Most sports and physical activities are practiced wearing special clothing, for practical, comfort or safety reasons. Common sportswear garments include shorts, T-shirts, tennis shirts, leotards, tracksuits, and trainers. Specialized garments include wet suits (for swimming, diving or surfing), salopettes (for skiing) and leotards (for gymnastics). Also, spandex materials are often used as base layers to soak up sweat. Spandex is also preferable for active sports that require form fitting garments, such as volleyball, wrestling, track & field, dance, gymnastics and swimming.
Fashion
A diverse range of styles exists in fashion, varying by geography, exposure to modern media, and economic conditions, and ranging from expensive haute couture to traditional garb to thrift-store grunge.
Future trends
The world of clothing is always changing, as new cultural influences meet technological innovations. Researchers in scientific labs have been developing prototypes for fabrics that can serve functional purposes well beyond their traditional roles, for example, clothes that can automatically adjust their temperature, repel bullets, project images, and generate electricity. Some practical advances already available to consumers are bullet-resistant garments made with kevlar and stain-resistant fabrics that are coated with chemical mixtures that reduce the absorption of liquids.
Political issues
Working conditions in the garments industry
thumb|Garments factory in Bangladesh
thumb|upright|Safety garb for women workers in Los Angeles, c. 1943, was designed to prevent occupational accidents among female war workers.
Though mechanization transformed most aspects of human industry by the mid-20th century, garment workers have continued to labor under challenging conditions that demand repetitive manual labor. Mass-produced clothing is often made in what are considered by some to be sweatshops, typified by long work hours, lack of benefits, and lack of worker representation. While most examples of such conditions are found in developing countries, clothes made in industrialized nations may also be manufactured similarly.
Coalitions of NGOs, designers (including Katharine Hamnett, American Apparel, Veja, Quiksilver, eVocal, and Edun) and campaign groups like the Clean Clothes Campaign (CCC) and the Institute for Global Labour and Human Rights as well as textile and clothing trade unions have sought to improve these conditions as much as possible by sponsoring awareness-raising events, which draw the attention of both the media and the general public to the workers.
Outsourcing production to low-wage countries like Bangladesh, China, India and Sri Lanka became possible when the Multi Fibre Agreement (MFA) was abolished. The MFA, which placed quotas on textile imports, was deemed a protectionist measure. Although many countries recognize treaties such as those of the International Labour Organization, which attempt to set standards for worker safety and rights, many countries have made exceptions to certain parts of the treaties or have failed to thoroughly enforce them. India, for example, has not ratified sections 87 and 92 of the treaty.
Despite the strong reactions that "sweatshops" evoked among critics of globalization, the production of textiles has functioned as a consistent industry for developing nations providing work and wages, whether construed as exploitative or not, to many thousands of people.
Fur
The use of animal fur in clothing dates to prehistoric times. It is currently associated in developed countries with expensive designer clothing, although fur is still used by indigenous people in arctic zones and at higher elevations for its warmth and protection. Once uncontroversial, it has recently been the focus of campaigns by groups that consider it cruel and unnecessary. PETA, along with other animal rights and animal liberation groups, has called attention to fur farming and other practices they consider cruel.
Life cycle
Clothing maintenance
Clothing suffers assault both from within and without. The human body sheds skin cells and body oils, and exudes sweat, urine, and feces. From the outside, sun damage, moisture, abrasion and dirt assault garments. Fleas and lice can hide in seams. Worn clothing, if not cleaned and refurbished, itches, looks scruffy, and loses functionality (as when buttons fall off, seams come undone, fabrics thin or tear, and zippers fail).
In some cases, people wear an item of clothing until it falls apart. Cleaning leather presents difficulties, and bark cloth (tapa) cannot be washed without dissolving it. Owners may patch tears and rips, and brush off surface dirt, but old leather and bark clothing always look old.
But most clothing consists of cloth, and most cloth can be laundered and mended (patching, darning, but compare felt).
Laundry, ironing, storage
Humans have developed many specialized methods for laundering, ranging from early methods of pounding clothes against rocks in running streams, to the latest in electronic washing machines and dry cleaning (dissolving dirt in solvents other than water). Hot water washing (boiling), chemical cleaning and ironing are all traditional methods of sterilizing fabrics for hygiene purposes.
Many kinds of clothing are designed to be ironed before they are worn to remove wrinkles. Most modern formal and semi-formal clothing is in this category (for example, dress shirts and suits). Ironed clothes are believed to look clean, fresh, and neat. Much contemporary casual clothing is made of knit materials that do not readily wrinkle, and do not require ironing. Some clothing is permanent press, having been treated with a coating (such as polytetrafluoroethylene) that suppresses wrinkles and creates a smooth appearance without ironing.
Once clothes have been laundered and possibly ironed, they are usually hung on clothes hangers or folded, to keep them fresh until they are worn. Clothes are folded to allow them to be stored compactly, to prevent creasing, to preserve creases or to present them in a more pleasing manner, for instance when they are put on sale in stores.
Non-iron
A resin used for making non-wrinkle shirts releases formaldehyde, which could cause contact dermatitis for some people; no disclosure requirements exist, and in 2008 the U.S. Government Accountability Office tested formaldehyde in clothing and found that generally the highest levels were in non-wrinkle shirts and pants.When Wrinkle-Free Clothing Also Means Formaldehyde Fumes. New York Times. In 1999, a study of the effect of washing on formaldehyde levels found that, after six months of washing, 7 of 27 shirts still had levels in excess of 75 ppm, the limit considered safe for direct skin exposure.Changes of Free Formaldehyde Quantity in Non-iron Shirts by Washing and Storage. Journal of Health Science.
Mending
In past times, mending was an art. A meticulous tailor or seamstress could mend rips with thread raveled from hems and seam edges so skillfully that the tear was practically invisible. When the raw material – cloth – was worth more than labor, it made sense to expend labor in saving it. Today clothing is considered a consumable item. Mass-manufactured clothing is less expensive than the labor required to repair it. Many people buy a new piece of clothing rather than spend time mending. The thrifty still replace zippers and buttons and sew up ripped hems.
Recycling
Used, unwearable clothing can be used for quilts, rags, rugs, bandages, and many other household uses. It can also be recycled into paper. In Western societies, used clothing is often thrown out or donated to charity (such as through a clothing bin). It is also sold to consignment shops, dress agencies, flea markets, and in online auctions. Used clothing is also often collected on an industrial scale to be sorted and shipped for re-use in poorer countries.
There are many concerns about the life cycle of synthetics, which come primarily from petrochemicals. Unlike natural fibers, their source is not renewable and they are not biodegradable.The Textile Materials Eco Battle Between Natural and Synthetic Fabrics "Steven E. Davis, Sweatshirt Station".
See also
Thermoregulation
Timeline of requisite dress in Western civilization
Bangladesh textile industry
List of current and defunct clothing and footwear stores in the United Kingdom
References
Further reading
ebook ISBN 978-0-231-51273-2
Paperback ISBN 978-0-7456-3187-5
(see especially sections 5 – 'Clothing' – & 6 – 'Protective clothing').
External links
BBC Wiltshire Dents Glove Museum
International Textile and Apparel Association, scholarly publications
German Hosiery Museum (English language)
Molecular Evolution of Pediculus humanus and the Origin of Clothing by Ralf Kittler, Manfred Kayser and Mark Stoneking (.PDF file)
Cornell Home Economics Archive: Research, Tradition, History (HEARTH)
Category:Clothing manufacturers
Comcast
Comcast Corporation (formerly registered as Comcast Holdings)Before the AT&T merger in 2001, the parent company was Comcast Holdings Corporation. Comcast Holdings Corporation now refers to a subsidiary of Comcast Corporation, not the parent company (see: Bloomberg profile on Comcast Holdings Corporation). Technically, the current parent company was founded December 7, 2001 as CAB Holdings Corporation, which changed its name to AT&T Comcast Corporation before finally taking on the Comcast Corporation name (see: Nov 2002 8K/A Form and Nov 2002 S-4). is an American global telecommunications conglomerate that is the largest broadcasting and cable television company in the world by revenue. It is the second-largest pay-TV company after the AT&T-DirecTV acquisition, largest cable TV company and largest home Internet service provider in the United States, and the nation's third-largest home telephone service provider. Comcast services U.S. residential and commercial customers in 40 states and in the District of Columbia.Comcast 2008 Form 10-K, files.shareholder.com The company's headquarters are located in Philadelphia, Pennsylvania. As the owner of the international media company NBCUniversal since 2011,(2013-03-19) . Deadline, "Comcast Completes Acquisition Of GE’s 49% Stake In NBCUniversal". Retrieved on March 19, 2013. Comcast is a producer of feature films and television programs intended for theatrical exhibition and over-the-air and cable television broadcast.
Comcast operates over-the-air national broadcast network channels (NBC and Telemundo), multiple cable-only channels (including MSNBC, CNBC, USA Network, NBCSN, E!, The Weather Channel, among others), the film production studio Universal Pictures, and Universal Parks & Resorts in Los Angeles and Orlando. The first Universal theme park outside of the U.S., Universal Studios Japan, opened in 2001, followed by Universal Studios Singapore in 2011. A few new locations are being planned or developed for future operation. Comcast also has significant holdings in digital distribution (thePlatform). In February 2014 the company agreed to merge with Time Warner Cable in an equity swap deal worth $45.2 billion. Under the terms of the agreement Comcast was to acquire 100% of Time Warner Cable.Comcast and Time Warner Cable to merge in $45.2bn deal. Broadcast Communications. Retrieved on February 14, 2014. However, on April 24, 2015, Comcast terminated the agreement.
Comcast has been criticized for multiple reasons. For instance, the company's customer satisfaction often ranks among the lowest in the cable industry.J.D. Power Releases 2008 Residential Television Service Satisfaction Survey. News.ecoustics.com. Retrieved on July 8, 2011. In addition, Comcast has violated net neutrality practices in the past; and, despite Comcast's commitment to a narrow definition of net neutrality, critics advocate a definition that precludes any distinction between Comcast's private network services and the rest of the Internet.Modine, Austin. (January 21, 2009) TheRegister.co.uk. TheRegister.co.uk. Retrieved on July 8, 2011. Critics also point to a lack of competition in the vast majority of Comcast's service area, where there is limited competition among cable providers. Furthermore, given Comcast's negotiating power as a large ISP, some suspect that Comcast could leverage paid peering agreements to unfairly influence end-user connection speeds. Its ownership of both content production (in NBCUniversal) and content distribution (as an ISP) has also raised antitrust concerns. These issues, among others, led to Comcast being dubbed "The Worst Company in America" by The Consumerist in 2010 and 2014.
Despite being publicly-traded, Comcast is a family-owned business, with the Roberts family owning a controlling stake and multiple generations serving as company executives.
Overview
Leadership
Comcast is sometimes described as a family business. Brian L. Roberts, Chairman, President, and CEO of Comcast, is the son of co-founder Ralph Roberts. Roberts owns or controls just over 1% of all Comcast shares but all of the Class B supervoting shares, which gives him an "undilutable 33% voting power over the company".All of Comcast's class B common stock, which controls 33.3% of voting power, is owned by CEO Brian Roberts. (see ) Legal expert Susan P. Crawford has said this gives him "effective control over [Comcast's] every step". In 2010, he was one of the highest paid executives in the United States, with total compensation of about $31 million.
Corporate offices
Comcast is headquartered in Philadelphia, Pennsylvania, and also has corporate offices in Atlanta, Detroit, Denver, and Manchester, New Hampshire.Comcast Corporate Overview. Comcast.com. Retrieved on July 8, 2011. On January 3, 2005, Comcast announced that it would become the anchor tenant in the new Comcast Center in downtown Philadelphia. The skyscraper is the tallest building in Pennsylvania. Comcast began construction on a second skyscraper, directly adjacent to the original Comcast headquarters, in the summer of 2014.
Employee relations
The company is often criticized by both the media and its own staff for its policies regarding employee relations. A 2012 Reddit post, written by an anonymous Comcast call center employee eager to share negative experiences with the public, received attention from publications including The Huffington Post. A 2014 investigative series published by The Verge involved interviews with 150 of Comcast's employees. It sought to examine why the company has become so widely criticized by its customers, the media and even members of its own staff. The series claimed part of the problem is internal and that Comcast's staff endures unreasonable corporate policies. According to the report: "customer service has been replaced by an obsession with sales; technicians are understaffed while tech support is poorly trained; and the company is hobbled by internal fragmentation." A widely read article penned by an anonymous call center employee working for Comcast appeared in November 2014 on Cracked. Titled "Five Nightmares You Live While Working For America's Worst Company," the article also claimed that Comcast is obsessed with sales and doesn't train its employees properly, and concluded that "the system makes good customer service impossible."
Comcast has also earned a reputation for being anti-union. According to one of the company's training manuals, "Comcast does not feel union representation is in the best interest of its employees, customers, or shareholders". A dispute in 2004 with CWA, a labor union that represented many employees at Comcast's offices in Beaverton, Oregon, led to allegations that management intimidated workers, required them to attend anti-union meetings, and took unwarranted disciplinary action against union members.Comcast Systematically Squeezing Out Unions, Northwest Labor Press, 2004. In 2011, Comcast received criticism from the Writers Guild of America for its policies regarding unions.Comcast Seeking to Destroy Writer's Guild, Members Say, CNN's the Wrap, 2011.
Despite these criticisms, Comcast has appeared on multiple "top places to work" lists. In 2009, it was included on CableFAX magazine's "Top 10 Places to Work in Cable", which cited its "scale, savvy and vision".2009 Top 10 Places to Work in Cable, CableFAX, October 27, 2009. Similarly, the Philadelphia Business Journal awarded Comcast the silver medal among extra-large companies in Philadelphia, with the gold medal going to partner organization Comcast-Spectacor.Silver Winner - Extra-Large Company Comcast Corp., Philadelphia Business Journal, October 16, 2009.Gold Winner - Extra-Large Company: Comcast-Spectacor, Philadelphia Business Journal, October 16, 2009. The Boston Globe found Comcast to be that city's top place to work in 2009.A cable company that listens, The Boston Globe, November 8, 2009. Employee diversity is also an attribute on which Comcast receives strong marks. In 2008, Black Enterprise magazine rated Comcast among the top 15 companies for workforce diversity.The 15 Best Companies for Workforce Diversity, Black Enterprise, July 10, 2008. Comcast was also named a "Top 2014 Workplace" by the Washington Post in its annual feature. The Human Rights Campaign has given Comcast a score of 100 on its Corporate Equality Index, rating it one of the best places for LGBT people to work.
Financial performance
The book value of the company nearly doubled from $8.19 a share in 1999 to $15 a share in 2009. Revenues grew sixfold from 1999's $6 billion to almost $36 billion in 2009. Net profit margin rose from 4.2% in 1999 to 8.4% in 2009, with operating margins improving 31 percent and return on equity doubling to 6.7 percent in the same time span. Between 1999 and 2009, return on capital nearly tripled to 7 percent.Malcolm Berko: Taking stock, The State Journal-Register, October 7, 2009. Comcast reported first-quarter 2012 profit increases of 30%, due to an increase in high-speed internet customers. Comcast generated $1.1 billion in revenue during the first quarter of 2014 due to the February Sochi Olympics.Reuters April 22, 2014, Reuters
Lobbying and electoral fundraising
With $18.8 million spent in 2013, Comcast has the seventh-largest lobbying budget of any individual company or organization in the United States. Comcast employs multiple former US Congressmen as lobbyists. The National Cable & Telecommunications Association, which has multiple Comcast executives on its board, also represents Comcast and other cable companies as the fifth-largest lobbying organization in the United States, spending $19.8 million in 2013. Comcast was among the top backers of Barack Obama's presidential runs, with Comcast vice president David Cohen raising over $2.2 million from 2007 to 2012. Cohen has been described by many sources as influential in the US government, though he is no longer a registered lobbyist, as the time he spends lobbying falls short of the 20% that requires official registration. Comcast's PAC, the Comcast Corporation and NBCUniversal Political Action Committee, is among the largest PACs in the US, raising about $3.7 million from 2011 to 2012 for the campaigns of various candidates for office in the United States federal government. Comcast is also a major backer of the National Cable and Telecommunications Association Political Action Committee, which raised $2.6 million from 2011 to 2012. Comcast spent the most money of any organization in support of the Stop Online Piracy Act and PROTECT IP Act, spending roughly $5 million to lobby for their passage.
Comcast also backs lobbying and PACs on a regional level, backing organizations such as the Tennessee Cable Telecommunications Association and the Broadband Communications Association of Washington PAC. Comcast and other cable companies have lobbied state governments to pass legislation restricting or banning individual cities from offering public broadband service. Municipal broadband restrictions of varying scope have been passed in a total of 20 US States.
Philanthropy
Comcast operates most of its philanthropic programs through its charitable arm, the Comcast Foundation. The organization is particularly focused on minority groups, working with organizations such as the Hispanic advocacy group National Council of La Raza. In 2014, the foundation reported grants totaling over $591,000 to nonprofits in parts of Pennsylvania, Ohio, Maryland, and West Virginia.
Outside the Comcast Foundation, Comcast offers free or low cost internet to many low income households through a program called "Internet Essentials". Comcast also has run trials for this program for senior citizens in Florida, and for college students in Chicago and Denver.
Comcast offers low cost internet and cable service to schools, subsidized by general broadband consumers through the US government's E-Rate program. Critics have noted that many of the strongest supporters of Comcast's business deals have received substantial funding from the Comcast Foundation.
History
American Cable Systems
In 1963, Ralph J. Roberts, in conjunction with his two business partners, Daniel Aaron and Julian A. Brodsky, purchased American Cable Systems as a corporate spin-off from its parent, Jerrold Electronics, for US $500,000. At the time, American Cable was a small cable operator in Tupelo, Mississippi, with five channels and 12,000 customers. Storecast Corporation of America, a product placement supermarket specialist marketing firm, was purchased by American Cable in 1965. Because Storecast was a Muzak client, American Cable purchased its first of many Muzak franchises, in Orlando, Florida.
Comcast
thumb|Comcast logo from 1969 to 1999 before it was replaced with the crescent logo
The company was re-incorporated in Pennsylvania in 1969, under the new name Comcast Corporation. The name "Comcast" is a portmanteau of the words "Communication" and "Broadcast".Businessweek.com Brian Roberts: High-Speed Pipes, Business Week, October 1, 2002. Comcast's initial public offering occurred on June 29, 1972, with a market capitalization of US $3,010,000. In 1977, HBO was first launched on a Comcast system, serving 20,000 customers in western Pennsylvania; a five-night free preview achieved a 15% sign-up rate.
Comcast bought 26% of Group W Cable in 1986, doubling its number of subscribers to 1 million. Also that year, Comcast made a founding investment of $380 million in QVC.
Although Comcast lost a bidding war with Kohlberg Kravis Roberts to buy Storer Communications in 1985, in 1988 it was able to buy a 50% share of the company's assets in a joint deal with Tele-Communications Inc. Comcast also acquired American Cellular Network Corporation in 1988 for $230 million, marking its first venture as a mobile phone operator, and started its Comcast Cellular Communications division.
Increasing market share (1990–2000)
In February 1990, Ralph Roberts' son, Brian L. Roberts, succeeded his father as president of Comcast. Comcast Cellular purchased controlling interest in Metromedia's Metrophone in 1992.
In 1994, Comcast became the third-largest cable operator in the United States, with around 3.5 million subscribers following its purchase of Maclean-Hunter's American division for $1.27 billion. The company's UK branch, Comcast UK Cable Partners, went public while constructing a cable telecommunications network. With five other media companies, the corporation became an original investor in The Golf Channel. Following a bid in 1994 for $2.1 billion, Comcast increased its ownership of QVC from 15.5% of stock to a majority, in a move to prevent QVC from merging with CBS. Comcast later sold its QVC shares in 2004 to Liberty Media for $7.9 billion.
In October 1995, Comcast announced the purchase of the cable operation of E. W. Scripps Company for $1.575 billion in stock, a deal making Comcast the no. 3 cable company with 4.3 million customers. Comcast offered internet connection for the first time in 1996, with its part in the launch of the @Home Network. When Excite@Home went bankrupt in 2002, Comcast took over providing internet directly to consumers.
In 1996, Comcast formed two new units: Comcast Spectacor, created by joining with Ed Snider's Spectacor sports venture company, and Comcast SportsNet, a Philadelphia-region sports channel that launched in 1997. Microsoft invested $1 billion in Comcast in 1997. Also that year, Comcast rolled out digital TV and, in partnership with Disney, acquired a 50.1 percent controlling interest in E! Entertainment.
In February 1998, Comcast sold its UK division to NTL for US $600 million, along with the division's $397 million in debt. Additionally, Comcast launched the Style Network. Cable acquisitions in 1997 included Jones Intercable, Inc., with 1 million customers, and a stake in Prime Communications, with 430,000 subscribers.
Comcast sold Comcast Cellular to SBC Communications in 1999 for $400 million, releasing it from $1.27 billion in debt. Comcast acquired Greater Philadelphia Cablevision in 1999. In March 1999, Comcast offered to buy MediaOne for $60 billion. However, MediaOne decided to accept AT&T Corporation's offer of $62 billion instead. Comcast University started in 1999, as did Comcast Interactive Capital Group, formed to make technology- and Internet-related investments; its first investment was in VeriSign.
In 1999, the company agreed to trade cable systems with AT&T Broadband. The trade was completed in 2000, with Comcast gaining systems in Florida, Michigan, New Jersey, Pennsylvania and Washington, D.C. A trade was also completed with Adelphia, through which Comcast received systems in Florida, Indiana, Michigan, New Jersey, New Mexico and Pennsylvania. The acquisition of Lenfest Communications, Inc., with about 1.3 million cable subscribers, was also closed.
Largest US cable provider (2001–present)
thumb|right|Proposed merger name logo, 2001
thumb|right|Comcast logo from 1999 to 2012
In 2001, Comcast announced it would acquire the assets of the largest cable television operator at the time, AT&T Broadband, for US$44.5 billion. The proposed name for the merged company was "AT&T Comcast", but the companies ultimately decided to keep only the Comcast name. In 2002, Comcast acquired all assets of AT&T Broadband, thus making Comcast the largest cable television company in the United States with over 22 million subscribers. This also spurred the start of Comcast Advertising Sales (using AT&T's groundwork) which would later be renamed Comcast Spotlight. As part of this acquisition, Comcast also acquired the National Digital Television Center in Centennial, Colorado as a wholly owned subsidiary, which is today known as the Comcast Media Center.
On February 11, 2004, Comcast announced a $54 billion bid for The Walt Disney Company, as well as taking on $12 billion of Disney's debt. The deal would have made Comcast the largest media conglomerate in the world. However, after rejection by Disney and uncertain response from investors, the bid was abandoned in April. The main reason for the buyout attempt was so that Comcast could acquire Disney's 80 percent stake in ESPN, which a Comcast executive called "the most important and valuable asset" that Disney owned.
On April 8, 2005, a partnership led by Comcast and Sony Pictures Entertainment finalized a deal to acquire MGM and its affiliate studio, United Artists, and create an additional outlet to carry MGM/UA's material for cable and Internet distribution. On October 31, 2005, Comcast officially announced that it had acquired Susquehanna Communications, a South Central Pennsylvania-based cable television and broadband services provider and unit of the former Susquehanna Pfaltzgraff company, for $775 million in cash. In this deal Comcast acquired approximately 230,000 basic cable customers, 71,000 digital cable customers, and 86,000 high-speed Internet customers. Comcast previously owned approximately 30 percent of Susquehanna Communications through affiliate company Lenfest. In December 2005, Comcast announced the creation of Comcast Interactive Media, a new division focused on online media.
In July 2006, Comcast purchased the Seattle-based software company thePlatform. This represented an entry into a new line of business – selling software to allow companies to manage their Internet (and IP-based) media publishing efforts.
On April 3, 2007, Comcast announced it had entered into an agreement to acquire the cable systems owned and operated by Patriot Media, a privately held company owned by cable veteran Steven J. Simmons, Spectrum Equity Investors and Spire Capital, which served approximately 81,000 video subscribers. Comcast acquired Patriot for a net cash investment of approximately $483 million.Comcast Corporation To Acquire Patriot Media. Comcast.com (April 3, 2007). Retrieved on July 8, 2011. The acquisition of the niche provider plugged a hole in Comcast's central New Jersey service area.Comcast to Buy Patriot Media. Seekingalpha.com (April 4, 2007). Retrieved on July 8, 2011.
In May 2007, Comcast announced a dashboard called SmartZone, which launched in September 2008. Hewlett-Packard led "design, creation and management". Collaboration and unified messaging technology came from open-source vendor Zimbra. "SmartZone users will be able to send and receive e-mail, listen to their voicemail messages online and forward that information via e-mail to others, send instant messages and video instant messages and merge their contacts into one address book". There is also Cloudmark spam and phishing protection and Trend Micro antivirus. The address book is Comcast Plaxo software.
In May 2008 Comcast purchased Plaxo for a reported $150 million to $170 million.
Comcast won the Consumerist Worst Company In America ("Golden Poo") award in 2010. A gold trophy in the shape of a pile of human feces was delivered to Comcast Corporate Headquarters to commemorate the unmatched level of enmity flowing from their customer base to their business. Competitor Verizon congratulated Comcast on their award via the Verizon Twitter feed. Comcast responded immediately by publicly acknowledging the dubious award, and citing ongoing efforts to improve its customer service.
Adelphia purchase
In April 2005, Comcast and Time Warner Cable announced plans to buy the assets of bankrupted Adelphia Cable. The two companies paid a total of $17.6 billion in the deal that was finalized in the second quarter of 2006—after the U.S. Federal Communications Commission (FCC) completed a seven-month investigation without raising an objection. Time Warner Cable became the second-largest cable provider in the U.S., ranking behind Comcast. As part of the deal, Time Warner and Comcast traded existing subscribers in order to consolidate them into larger geographic clusters.
In August 2006, Comcast and Time Warner dissolved a 50/50 partnership that controlled the systems in the Houston, Southwest Texas, San Antonio, and Kansas City markets under the Time Warner brand. After the dissolution, Comcast obtained the Houston system, and Time Warner retained the others.Time Warner Cable, Time Warner Cable/Comcast Official Statement. Web.archive.org (September 26, 2007). Retrieved on July 8, 2011. On January 1, 2007, Comcast officially took control of the Houston system, but continued to operate under the Time Warner Cable brand until June 19, 2007.
NBCUniversal
thumb|NBCUniversal logo from 2004 to 2011
thumb|NBCUniversal logo from 2011 to present
Media outlets began reporting in late September 2009 that Comcast was in talks to buy NBCUniversal. Comcast denied the rumors at first, while NBC would not comment on them.Comcast in Talks to Buy NBC Universal, AJC.com, October 1, 2009 However, CNBC itself reported on October 1 that General Electric was considering spinning NBCUniversal off into a separate company that would merge the NBC television network and its cable properties such as USA Network, Syfy and MSNBC with Comcast's content assets. GE would maintain 49% control of the new company, while Comcast would own 51%.GE is in Talks to Spin Off NBC, Give Comcast 51% of New Unit, CNBC.com, October 1, 2009GE and Comcast Exploring a Spin-Off of NBC Universal, The New York Times, October 1, 2009 Vivendi, which owned 20%, would have to sell its stake to GE. It was reported that, under the deal with GE, the transaction would happen in November or December.GE Investors Breathe Sigh of Relief on Comcast Talks, Reuters.com, October 1, 2009Questions Continue to Swirl Around Comcast Venture, The Philadelphia Inquirer, October 3, 2009 It was also reported that Time Warner would be interested in placing a bid, until CEO Jeffrey L. Bewkes directly denied interest,Time Warner won't bid for NBC Universal, Toronto Star, October 2, 2009 leaving Comcast the sole bidder. On November 1, 2009, The New York Times reported Comcast had moved closer to a deal to purchase NBCUniversal and that a formal announcement could be made sometime the following week.Comcast Said to Be Close to Gaining NBC Universal, The New York Times, November 1, 2009
Following a tentative agreement reached by December 1, the parties announced on December 3, 2009, that Comcast would buy a controlling 51% stake in NBCUniversal for $6.5 billion in cash and $7.3 billion in programming. GE would hold the remaining 49% stake, using $5.8 billion to buy out Vivendi's 20% minority stake in NBCUniversal. On January 18, 2011, the FCC approved the deal by a vote of 4 to 1.Government Approves Comcast-NBC Deal, New York Times, January 18, 2011 The sale was completed on January 28, 2011.Comcast, NBC U Merger a Done Deal, Variety, January 29, 2011Comcast Takes Over NBC Universal After Long Review, ABC News, January 29, 2011 In late December 2012, Comcast added the NBC peacock symbol to its new logo. On February 12, 2013, Comcast announced an intention to acquire the remaining 49% of General Electric's interest in NBCUniversal,Meg James "Los Angeles Times" February 12, 2013 Comcast to buy out GE's interest in NBCUniversal latimes.com, Retrieved on February 13, 2013David Lieberman "Hollywood Deadline" February 12, 2013 Comcast To Pay $16.7B For General Electric’s 49% Of NBCUniversal deadline.com, Retrieved on February 13, 2013 which Comcast completed on March 19, 2013.
Time Warner Cable
On February 12, 2014, the Los Angeles Times reported that Comcast sought to acquire Time Warner Cable in a deal valued at $45.2 billion. On February 13, it was reported that Time Warner Cable agreed to the acquisition. The acquisition would have added several metropolitan areas to the Comcast portfolio, such as New York City, Los Angeles, Dallas-Fort Worth, Cleveland, Columbus, Cincinnati, Charlotte, San Diego, and San Antonio. Time Warner Cable and Comcast aimed to merge into one company by the end of 2014, and both praised the deal, emphasizing the increased capabilities of a combined telecommunications network and the potential to "create operating efficiencies and economies of scale".
In 2014, critics expressed concern that the deal would give Comcast greater negotiating power in a number of areas, including rebroadcast fees with television channels, and peering agreements with ISPs.
Critics noted in 2013 that Tom Wheeler, the head of the FCC, which had to approve the deal, had formerly led both the largest cable lobbying organization, the National Cable & Telecommunications Association, and the largest wireless lobby, CTIA – The Wireless Association. According to Politico, Comcast "donated to almost every member of Congress who has a hand in regulating it."Romm, Tony (March 9, 2014). Comcast spreads cash wide on Capitol Hill. Politico. Retrieved March 11, 2014. The US Senate Judiciary Committee held a hearing on the deal on April 9, 2014, and the House Judiciary Committee planned its own hearing. On March 6, 2014, the United States Department of Justice Antitrust Division confirmed it was investigating the deal. That month, the division's head, William Baer, recused himself because he had been involved in the prior Comcast acquisition of NBCUniversal. Several states' attorneys general announced support for the federal investigation. On April 24, 2015, Jonathan Sallet, general counsel of the FCC, said that he would recommend a hearing before an administrative law judge, a step widely seen as tantamount to a collapse of the deal.
In August 2015, Comcast announced that it would increase Internet speeds for low-income customers from 5 Mbit/s to 10 Mbit/s, provide free wireless routers, and pilot an initiative to increase Internet access for low-income senior citizens. In September 2015, Comcast also launched Watchable, a YouTube competitor; the move was seen by Variety as an attempt to appeal to the cord-cutting market. By mid-2016, X1, the latest version of Comcast's television platform, had become its most popular offering.
DreamWorks Animation
In April 2016, Comcast confirmed that its NBCUniversal division would acquire DreamWorks Animation for $3.8 billion. The deal closed on August 22, 2016.
Cellular service
In September 2016, Comcast confirmed that it had reached a partnership with Verizon Wireless to launch a cellular network as an MVNO. The new service, described as being a "Wi-Fi and MVNO-integrated product", is expected to launch in mid-2017. The partnership and the addition of wireless would allow Comcast to offer a quadruple play of services.
Divisions and subsidiaries
Comcast Cable (Xfinity)
Comcast Cable is the cable television division of Comcast Corporation, providing cable television, broadband internet, and landline telephone under the Xfinity brand. Comcast Cable also serves small and medium-sized businesses through its Comcast Business brand, and Fortune 1000 companies through its Comcast Enterprise brand.
NBCUniversal
Comcast delivers third-party television programming content to its own customers, and also produces its own first-party content both for subscribers and customers of other competing television services. Fully or partially owned Comcast programming includes Comcast Newsmakers, Comcast Network, Comcast SportsNet, SportsNet New York, MLB Network, Comcast Sports Southeast/Charter Sports Southeast, NBC Sports Network, The Golf Channel, AZN Television, and FEARnet. On May 19, 2009, Disney and ESPN announced an agreement to allow Comcast Corporation to carry the channels ESPNU and ESPN3."Comcast adds ESPNU and ESPN360.COM to line up with content on television, on demand and online". espnmediazone.com, Comcast press release, May 19, 2009. Accessed October 12, 2009. The U.S. Olympic Committee and Comcast intended to team up to create The U.S. Olympic Network, which was slated to launch after the 2010 Vancouver Olympic Games.Comcast, U.S. Olympic Committee to Launch Cable Net, Mediaweek, July 8, 2009 These plans were then put on hold by the U.S. Olympic Committee.U.S. Olympic Cable Network Put on Hold, Mediaweek, August 17, 2009 The U.S. Olympic Committee and Comcast ultimately ended the plans to create The U.S. Olympic Network.U.S.O.C. Ends Plans for Its Own Olympic Channel, The New York Times, April 21, 2010
Comcast's content networks and assets also include E!, Esquire Network, Golf Channel, NBCSN, Sprout, TV One, and the regional Comcast SportsNets. When Comcast took majority ownership of NBCUniversal, a significant number of cable networks were added to this list. Comcast's NHL deal obligated it to create a U.S. version of NHL Network, which launched in October 2007.
Comcast also owns many local channels, including a variety network known as Comcast Network, available exclusively to Comcast and Cablevision subscribers. The channel shows news, sports, and entertainment and places emphasis on Philadelphia and the Baltimore/Washington, D.C., areas, though it is also available in New York, Pittsburgh, and Richmond. In August 2004, Comcast started a channel called Comcast Entertainment Television (CET) for Colorado Comcast subscribers, focusing on life in Colorado. It also carries some National Hockey League and National Basketball Association games when Altitude Sports & Entertainment is carrying the NHL or NBA. In January 2006, CET became the primary channel for Colorado's Emergency Alert System in the Denver Metro Area. In 2006, Comcast helped found the channel SportsNet New York, acquiring a minority stake; the other partners in the project were the New York Mets and Time Warner Cable.
Professional sports
In 1996, Comcast bought a controlling stake in Spectacor from the company's founder, Ed Snider. Comcast Spectacor holdings now include the Philadelphia Flyers NHL hockey team and their home arena in Philadelphia. Over a number of years, Comcast became majority owner of Comcast SportsNet, as well as Golf Channel and NBCSN (formerly the Outdoor Life Network, then Versus). In 2002, Comcast paid the University of Maryland $25 million for naming rights to the new basketball arena built on the College Park campus, the XFINITY Center. Before it was renamed for Comcast's cable subsidiary, XFINITY Center was called Comcast Center from its opening in 2002 through July 2014.
Venture capital
Comcast founded its first venture capital fund in January 1999, as Comcast Interactive Capital. Around 2011, following the 2009 NBC Universal acquisition, Comcast Interactive Capital was merged with The Peacock Equity Fund, the venture capital subsidiary of NBCUniversal. The combined company, Comcast Ventures, backs companies such as FanDuel and Vox Media.
DreamWorks Animation
On April 28, 2016, Comcast agreed to acquire DreamWorks Animation, along with its major franchises, including Shrek, How to Train Your Dragon, Kung Fu Panda, and Madagascar.
Criticism and controversy
thumb|250px|right|Comcast service van, Ypsilanti Township, Michigan
In 2004 and 2007, the American Customer Satisfaction Index (ACSI) survey found that Comcast had the worst customer satisfaction rating of any company or government agency in the country, including the Internal Revenue Service. The ACSI indicates that almost half of all cable customers (regardless of company) have registered complaints, and that cable is the only industry to score below 60 in the ACSI.The American Customer Satisfaction Index, First Quarter, 2004 ACSI surveys of Comcast's customer service indicate that it has not improved since the surveys began in 2001. Analysis of the surveys states that "Comcast is one of the lowest scoring companies in ACSI. As its customer satisfaction eroded by 7% over the past year, revenue increased by 12%." The ACSI analysis also addresses this contradiction, stating that "Such pricing power usually comes with some level of monopoly protection and most cable companies have little competition at the local level. This also means that a cable company can do well financially even though its customers are not particularly satisfied."American Customer Satisfaction Index, First Quarter, 2007 American Customer Satisfaction Index, Scores By Company: Comcast Corporation
In April 2014, Comcast won the 2014 "Worst Company in America" award, an annual contest run by the consumer affairs blog The Consumerist, which uses a series of reader polls to determine the least popular company in America. This was the second time Comcast had received the title, the first being in 2010.
Comcast spends millions of dollars annually on government relationships.The Center for Public Integrity, Comcast Corp. Political Influence. Publicintegrity.org. Retrieved on July 8, 2011. Comcast employs the spouses, sons and daughters of mayors, councilmen, commissioners, and other officials to assure its continued preferred market allocations.The Washington Post, Prominent Ties Among Comcast Hires. Washingtonpost.com (March 7, 2006). Retrieved on July 8, 2011.The Washington Post, Md. Lawmakers Call for Probe of Comcast Ties. Washingtonpost.com (March 8, 2006). Retrieved on July 8, 2011.Law.com, Federal Judge Certifies Antitrust Class Against Comcast. Law.com. Retrieved on July 8, 2011.
Comcast was given an "F" for its corporate governance practices in 2010, by Corporate Library, an independent shareholder-research organization. According to Corporate Library, Comcast's board of directors ability to oversee and control management was severely compromised (at least in 2010) by the fact that several of the directors either worked for the company or had business ties to it (making them susceptible to management pressure), and a third of the directors were over 70 years of age. According to the Wall Street Journal nearly two-thirds of the flights of Comcast's $40 million corporate jet purchased for business travel related to the NBCU acquisition, were to CEO Brian Roberts' private homes or to resorts.
In January 2015, a customer named Ricardo Brown received a bill from Comcast with his name changed to "Asshole Brown". Brown's wife, Lisa, believed a Comcast employee changed the name in response to the Browns' request to cancel their cable service; during that call she had been refused a cancellation unless she paid a $60 fee and was instead routed to a retention specialist. Comcast refused to correct the name on the bill, even after the Browns explained that Ricardo was the customer's legal name and raised the issue with numerous customer service outlets for the company, so they turned to consumer advocate Christopher Elliott. Elliott posted the facts of the incident, along with a copy of the bill, on his blog. Shortly thereafter, Elliott contacted Comcast and Comcast offered the Browns an apology, a $60 refund, and a promise to track down and fire the responsible employee. The Browns instead requested a full refund for their negative experience and Comcast agreed to refund the family the last two years of service and provide the next two years of service at no charge. Comcast released a statement explaining: "We have spoken with our customer and apologized for this completely unacceptable and inappropriate name change. We have zero tolerance for this type of disrespectful behavior and are conducting a thorough investigation to determine what happened. We are working with our customer to make this right and will take appropriate steps to prevent this from happening again." Bort, Julie. SFGate. Retrieved January 29, 2015
On February 19, 2015 a Comcast customer-support representative was caught falsely telling a customer that the company is required by law to implement data caps. In a SoundCloud recording posted on Reddit, the Comcast agent, named Lionel, can be heard telling the customer, "Every Internet service provider has data caps. It is mandated by the law." Geller, Eric. TheDailyDot. Retrieved February 19, 2015
On August 1, 2016, Washington State Attorney General Bob Ferguson filed a lawsuit against cable television and Internet giant Comcast Corporation in King County Superior Court, alleging that the company’s own documents reveal a pattern of illegally deceiving its customers to pad its bottom line by tens of millions of dollars.Office of the Attorney General. Retrieved August 1, 2016
Notes
References
External links
Category:American brands
Category:Media companies of the United States
Category:Broadband
Category:Cable television companies of the United States
Category:Entertainment companies of the United States
Category:Internet service providers of the United States
Category:Security companies
Category:Telecommunications companies of the United States
Category:Video on demand
Category:VoIP companies of the United States
Category:Multinational companies headquartered in the United States
Category:Companies based in Philadelphia
Category:Conglomerate companies of the United States
Category:American companies established in 1963
Category:Entertainment companies established in 1963
Category:Media companies established in 1963
Category:Publicly traded companies of the United States
Category:Companies listed on NASDAQ
Category:Culture of Philadelphia
Category:NBCUniversal
Category:Roberts family
Elizabeth II
Elizabeth II (Elizabeth Alexandra Mary; born 21 April 1926) has been Queen of the United Kingdom, Canada, Australia, and New Zealand since 6 February 1952. She is Head of the Commonwealth and Queen of 12 countries that have become independent since her accession: Jamaica, Barbados, the Bahamas, Grenada, Papua New Guinea, Solomon Islands, Tuvalu, Saint Lucia, Saint Vincent and the Grenadines, Belize, Antigua and Barbuda, and Saint Kitts and Nevis.
Elizabeth was born in London as the eldest child of the Duke and Duchess of York, later King George VI and Queen Elizabeth, and she was educated privately at home. Her father acceded to the throne on the abdication of his brother Edward VIII in 1936, from which time she was the heir presumptive. She began to undertake public duties during the Second World War, serving in the Auxiliary Territorial Service. In 1947, she married Philip, Duke of Edinburgh, a former prince of Greece and Denmark, with whom she has four children: Charles, Prince of Wales; Anne, Princess Royal; Prince Andrew, Duke of York; and Prince Edward, Earl of Wessex.
Elizabeth's many historic visits and meetings include a state visit to the Republic of Ireland and visits to or from five popes. She has seen major constitutional changes, such as devolution in the United Kingdom, Canadian patriation, and the decolonisation of Africa. She has reigned through various wars and conflicts involving many of her realms. She is the world's oldest reigning monarch as well as Britain's longest-lived. In 2015, she surpassed the reign of her great-great-grandmother, Queen Victoria, to become the longest-reigning British monarch and the longest-reigning queen regnant and female head of state in world history. In October 2016, she became the longest currently reigning monarch and head of state following the death of King Bhumibol Adulyadej of Thailand.
Times of personal significance have included the births and marriages of her children, grandchildren, and great grandchildren, her coronation in 1953, and the celebration of milestones such as her Silver, Golden, and Diamond Jubilees in 1977, 2002, and 2012, respectively. Moments of sadness for her include the death of her father in 1952 at age 56; the assassination of Prince Philip's uncle, Lord Mountbatten in 1979; the breakdown of her children's marriages in 1992 (her annus horribilis); the death in 1997 of her son's former wife, Diana, Princess of Wales; and the deaths of her mother and sister in 2002. Elizabeth has occasionally faced republican sentiments and press criticism of the royal family; however, support for the monarchy remains high, as does her personal popularity.
Early life
thumb|left|upright|alt=Elizabeth as a thoughtful-looking toddler with curly, fair hair|Princess Elizabeth aged three,
Elizabeth was born at 02:40 (GMT) on 21 April 1926, during the reign of her paternal grandfather, King George V. Her father, Prince Albert, Duke of York (later King George VI), was the second son of the King. Her mother, Elizabeth, Duchess of York (later Queen Elizabeth), was the youngest daughter of Scottish aristocrat Claude Bowes-Lyon, 14th Earl of Strathmore and Kinghorne. She was delivered by Caesarean section at her maternal grandfather's London house: 17 Bruton Street, Mayfair.Bradford, p. 22; Brandreth, p. 103; Marr, p. 76; Pimlott, pp. 2–3; Lacey, pp. 75–76; Roberts, p. 74 She was baptised by the Anglican Archbishop of York, Cosmo Gordon Lang, in the private chapel of Buckingham Palace on 29 May,Hoey, p. 40 and named Elizabeth after her mother, Alexandra after George V's mother, who had died six months earlier, and Mary after her paternal grandmother.Brandreth, p. 103 Called "Lilibet" by her close family,Pimlott, p. 12 based on what she called herself at first,Williamson, p. 205 she was cherished by her grandfather George V, and during his serious illness in 1929 her regular visits were credited in the popular press and by later biographers with raising his spirits and aiding his recovery.Lacey, p. 56; Nicolson, p. 433; Pimlott, pp. 14–16
Elizabeth's only sibling, Princess Margaret, was born in 1930. The two princesses were educated at home under the supervision of their mother and their governess, Marion Crawford, who was casually known as "Crawfie".Crawford, p. 26; Pimlott, p. 20; Shawcross, p. 21 Lessons concentrated on history, language, literature and music.Brandreth, p. 124; Lacey, pp. 62–63; Pimlott, pp. 24, 69 Crawford published a biography of Elizabeth and Margaret's childhood years entitled The Little Princesses in 1950, much to the dismay of the royal family.Brandreth, pp. 108–110; Lacey, pp. 159–161; Pimlott, pp. 20, 163 The book describes Elizabeth's love of horses and dogs, her orderliness, and her attitude of responsibility.Brandreth, pp. 108–110 Others echoed such observations: Winston Churchill described Elizabeth when she was two as "a character. She has an air of authority and reflectiveness astonishing in an infant."Brandreth, p. 105; Lacey, p. 81; Shawcross, pp. 21–22 Her cousin Margaret Rhodes described her as "a jolly little girl, but fundamentally sensible and well-behaved".Brandreth, pp. 105–106
Heir presumptive
thumb|right|upright|alt=Elizabeth as a rosy-cheeked young girl with blue eyes and fair hair|Princess Elizabeth aged seven, painted by Philip de László, 1933
During her grandfather's reign, Elizabeth was third in the line of succession to the throne, behind her uncle Edward, Prince of Wales, and her father, the Duke of York. Although her birth generated public interest, she was not expected to become queen, as the Prince of Wales was still young. Many people believed that he would marry and have children of his own.Bond, p. 8; Lacey, p. 76; Pimlott, p. 3 When her grandfather died in 1936 and her uncle succeeded as Edward VIII, she became second-in-line to the throne, after her father. Later that year, Edward abdicated, after his proposed marriage to divorced socialite Wallis Simpson provoked a constitutional crisis.Lacey, pp. 97–98 Consequently, Elizabeth's father became king, and she became heir presumptive. If her parents had had a later son, she would have lost her position as first-in-line, as her brother would have been heir apparent and above her in the line of succession.Marr, pp. 78, 85; Pimlott, pp. 71–73
Elizabeth received private tuition in constitutional history from Henry Marten, Vice-Provost of Eton College,Brandreth, p. 124; Crawford, p. 85; Lacey, p. 112; Marr, p. 88; Pimlott, p. 51; Shawcross, p. 25 and learned French from a succession of native-speaking governesses. A Girl Guides company, the 1st Buckingham Palace Company, was formed specifically so that she could socialise with girls her own age.Marr, p. 84; Pimlott, p. 47 Later, she was enrolled as a Sea Ranger.
In 1939, Elizabeth's parents toured Canada and the United States. As in 1927, when her parents had toured Australia and New Zealand, Elizabeth remained in Britain, since her father thought her too young to undertake public tours.Pimlott, p. 54 Elizabeth "looked tearful" as her parents departed.Pimlott, p. 55 They corresponded regularly, and she and her parents made the first royal transatlantic telephone call on 18 May.
Second World War
thumb|left|Elizabeth in Auxiliary Territorial Service uniform,
In September 1939, Britain entered the Second World War, which lasted until 1945. During the war, many of London's children were evacuated to avoid the frequent aerial bombing. The suggestion by senior politician Lord Hailsham that the two princesses should be evacuated to Canada was rejected by Elizabeth's mother, who declared, "The children won't go without me. I won't leave without the King. And the King will never leave." Princesses Elizabeth and Margaret stayed at Balmoral Castle, Scotland, until Christmas 1939, when they moved to Sandringham House, Norfolk.Crawford, pp. 104–114; Pimlott, pp. 56–57 From February to May 1940, they lived at Royal Lodge, Windsor, until moving to Windsor Castle, where they lived for most of the next five years.Crawford, pp. 114–119; Pimlott, p. 57 At Windsor, the princesses staged pantomimes at Christmas in aid of the Queen's Wool Fund, which bought yarn to knit into military garments.Crawford, pp. 137–141 In 1940, the 14-year-old Elizabeth made her first radio broadcast during the BBC's Children's Hour, addressing other children who had been evacuated from the cities. She stated: "We are trying to do all we can to help our gallant sailors, soldiers and airmen, and we are trying, too, to bear our share of the danger and sadness of war. We know, every one of us, that in the end all will be well."
thumb|right|Princess Elizabeth (left, in uniform) on the balcony of Buckingham Palace with (left to right) her mother Queen Elizabeth, Winston Churchill, King George VI, and Princess Margaret,
In 1943, at the age of 16, Elizabeth undertook her first solo public appearance on a visit to the Grenadier Guards, of which she had been appointed colonel the previous year. As she approached her 18th birthday, parliament changed the law so that she could act as one of five Counsellors of State in the event of her father's incapacity or absence abroad, such as his visit to Italy in July 1944.Pimlott, p. 71 In February 1945, she joined the Women's Auxiliary Territorial Service as an honorary second subaltern with the service number of 230873. She trained as a driver and mechanic and was promoted to honorary junior commander five months later.Bradford, p. 45; Lacey, p. 148; Marr, p. 100; Pimlott, p. 75
At the end of the war in Europe, on Victory in Europe Day, Princesses Elizabeth and Margaret mingled anonymously with the celebratory crowds in the streets of London. Elizabeth later said in a rare interview, "We asked my parents if we could go out and see for ourselves. I remember we were terrified of being recognised ... I remember lines of unknown people linking arms and walking down Whitehall, all of us just swept along on a tide of happiness and relief."Bond, p. 10; Pimlott, p. 79
During the war, plans were drawn up to quell Welsh nationalism by affiliating Elizabeth more closely with Wales. Proposals, such as appointing her Constable of Caernarfon Castle or a patron of Urdd Gobaith Cymru (the Welsh League of Youth), were abandoned for various reasons, which included a fear of associating Elizabeth with conscientious objectors in the Urdd, at a time when Britain was at war. Welsh politicians suggested that she be made Princess of Wales on her 18th birthday. The Home Secretary, Herbert Morrison, supported the idea, but the King rejected it because he felt such a title belonged solely to the wife of a Prince of Wales and the Prince of Wales had always been the heir apparent.Pimlott, pp. 71–73 In 1946, she was inducted into the Welsh Gorsedd of Bards at the National Eisteddfod of Wales.
In 1947, Princess Elizabeth went on her first overseas tour, accompanying her parents through southern Africa. During the tour, in a broadcast to the British Commonwealth on her 21st birthday, she made the following pledge: "I declare before you all that my whole life, whether it be long or short, shall be devoted to your service and the service of our great imperial family to which we all belong."
Marriage
Elizabeth met her future husband, Prince Philip of Greece and Denmark, in 1934 and 1937.Brandreth, pp. 132–139; Lacey, pp. 124–125; Pimlott, p. 86 They are second cousins once removed through King Christian IX of Denmark and third cousins through Queen Victoria. After another meeting at the Royal Naval College in Dartmouth in July 1939, Elizabeth – though only 13 years old – said she fell in love with Philip and they began to exchange letters.Bond, p. 10; Brandreth, pp. 132–136, 166–169; Lacey, pp. 119, 126, 135 She was 21 when their engagement was officially announced on 9 July 1947.Heald, p. 77
The engagement was not without controversy; Philip had no financial standing, was foreign-born (though a British subject who had served in the Royal Navy throughout the Second World War), and had sisters who had married German noblemen with Nazi links. Marion Crawford wrote, "Some of the King's advisors did not think him good enough for her. He was a prince without a home or kingdom. Some of the papers played long and loud tunes on the string of Philip's foreign origin."Crawford, p. 180 Later biographies reported that Elizabeth's mother initially opposed the union, dubbing Philip "The Hun". In later life, however, the Queen Mother told biographer Tim Heald that Philip was "an English gentleman".Heald, p. xviii
Before the marriage, Philip renounced his Greek and Danish titles, converted from Greek Orthodoxy to Anglicanism, and adopted the style Lieutenant Philip Mountbatten, taking the surname of his mother's British family.Hoey, pp. 55–56; Pimlott, pp. 101, 137 Just before the wedding, he was created Duke of Edinburgh and granted the style His Royal Highness.
Elizabeth and Philip were married on 20 November 1947 at Westminster Abbey. They received 2500 wedding gifts from around the world. Because Britain had not yet completely recovered from the devastation of the war, Elizabeth required ration coupons to buy the material for her gown, which was designed by Norman Hartnell.Hoey, p. 58; Pimlott, pp. 133–134 In post-war Britain, it was not acceptable for the Duke of Edinburgh's German relations, including his three surviving sisters, to be invited to the wedding.Hoey, p. 59; Petropoulos, p. 363 The Duke of Windsor, formerly King Edward VIII, was not invited either.Bradford, p. 61
Elizabeth gave birth to her first child, Prince Charles, on 14 November 1948. One month earlier, the King had issued letters patent allowing her children to use the style and title of a royal prince or princess, to which they otherwise would not have been entitled as their father was no longer a royal prince.Letters Patent, 22 October 1948; Hoey, pp. 69–70; Pimlott, pp. 155–156 A second child, Princess Anne, was born in 1950.Pimlott, p. 163
Following their wedding, the couple leased Windlesham Moor, near Windsor Castle, until July 1949, when they took up residence at Clarence House in London. At various times between 1949 and 1951, the Duke of Edinburgh was stationed in the British Crown Colony of Malta as a serving Royal Navy officer. He and Elizabeth lived intermittently in Malta for several months at a time in the hamlet of Gwardamanġa, at Villa Guardamangia, the rented home of Philip's uncle, Lord Mountbatten. The children remained in Britain.Brandreth, pp. 226–238; Pimlott, pp. 145, 159–163, 167
Reign
Accession and coronation
thumb|upright|left|Coronation of Elizabeth II,
During 1951, George VI's health declined and Elizabeth frequently stood in for him at public events. When she toured Canada and visited President Harry S. Truman in Washington, D.C., in October 1951, her private secretary, Martin Charteris, carried a draft accession declaration in case the King died while she was on tour.Brandreth, pp. 240–241; Lacey, p. 166; Pimlott, pp. 169–172 In early 1952, Elizabeth and Philip set out for a tour of Australia and New Zealand by way of Kenya. On 6 February 1952, they had just returned to their Kenyan home, Sagana Lodge, after a night spent at Treetops Hotel, when word arrived of the death of the King and consequently Elizabeth's immediate accession to the throne. Philip broke the news to the new Queen.Brandreth, pp. 245–247; Lacey, p. 166; Pimlott, pp. 173–176; Shawcross, p.16 Martin Charteris asked her to choose a regnal name; she chose to remain Elizabeth, "of course".Bousfield and Toffoli, p. 72; Charteris quoted in Pimlott, p. 179 and Shawcross, p. 17 She was proclaimed queen throughout her realms and the royal party hastily returned to the United Kingdom.Pimlott, pp. 178–179 She and the Duke of Edinburgh moved into Buckingham Palace.Pimlott, pp. 186–187
With Elizabeth's accession, it seemed probable that the royal house would bear her husband's name, becoming the House of Mountbatten, in line with the custom of a wife taking her husband's surname on marriage. The British Prime Minister, Winston Churchill, and Elizabeth's grandmother, Queen Mary, favoured the retention of the House of Windsor, and so on 9 April 1952 Elizabeth issued a declaration that Windsor would continue to be the name of the royal house. The Duke complained, "I am the only man in the country not allowed to give his name to his own children."Bradford, p. 80; Brandreth, pp. 253–254; Lacey, pp. 172–173; Pimlott, pp. 183–185 In 1960, after the death of Queen Mary in 1953 and the resignation of Churchill in 1955, the surname Mountbatten-Windsor was adopted for Philip and Elizabeth's male-line descendants who do not carry royal titles.
thumb|upright|alt=Elizabeth in crown and robes next to her husband in military uniform|Coronation portrait of Elizabeth and Philip,
Amid preparations for the coronation, Princess Margaret informed her sister that she wished to marry Peter Townsend, a divorcé 16 years Margaret's senior, with two sons from his previous marriage. The Queen asked them to wait for a year; in the words of Martin Charteris, "the Queen was naturally sympathetic towards the Princess, but I think she thought – she hoped – given time, the affair would peter out."Brandreth, pp. 269–271 Senior politicians were against the match and the Church of England did not permit remarriage after divorce. If Margaret had contracted a civil marriage, she would have been expected to renounce her right of succession.Brandreth, pp. 269–271; Lacey, pp. 193–194; Pimlott, pp. 201, 236–238 Eventually, she decided to abandon her plans with Townsend.Bond, p. 22; Brandreth, p. 271; Lacey, p. 194; Pimlott, p. 238; Shawcross, p. 146 In 1960, she married Antony Armstrong-Jones, who was created Earl of Snowdon the following year. They divorced in 1978; she did not remarry.
Despite the death of Queen Mary on 24 March, the coronation on 2 June 1953 went ahead as planned, as Mary had asked before she died.Bradford, p. 82 The ceremony in Westminster Abbey, with the exception of the anointing and communion, was televised for the first time. Elizabeth's coronation gown was embroidered on her instructions with the floral emblems of Commonwealth countries:Lacey, p. 190; Pimlott, pp. 247–248 English Tudor rose; Scots thistle; Welsh leek; Irish shamrock; Australian wattle; Canadian maple leaf; New Zealand silver fern; South African protea; lotus flowers for India and Ceylon; and Pakistan's wheat, cotton, and jute.
Continuing evolution of the Commonwealth
thumb|Commonwealth realms (pink) and their territories and protectorates (red) at the beginning of Elizabeth's reign
From Elizabeth's birth onwards, the British Empire continued its transformation into the Commonwealth of Nations.Marr, p. 272 By the time of her accession in 1952, her role as head of multiple independent states was already established.Pimlott, p. 182 In 1953, the Queen and her husband embarked on a seven-month round-the-world tour, visiting 13 countries and covering more than 40,000 miles by land, sea and air. She became the first reigning monarch of Australia and New Zealand to visit those nations.Marr, p. 126 During the tour, crowds were immense; three-quarters of the population of Australia were estimated to have seen her.Brandreth, p. 278; Marr, p. 126; Pimlott, p. 224; Shawcross, p. 59 Throughout her reign, the Queen has made hundreds of state visits to other countries and tours of the Commonwealth; she is the most widely travelled head of state.
In 1956, the British and French prime ministers, Sir Anthony Eden and Guy Mollet, discussed the possibility of France joining the Commonwealth. The proposal was never accepted and the following year France signed the Treaty of Rome, which established the European Economic Community, the precursor to the European Union. In November 1956, Britain and France invaded Egypt in an ultimately unsuccessful attempt to capture the Suez Canal. Lord Mountbatten claimed the Queen was opposed to the invasion, though Eden denied it. Eden resigned two months later.Pimlott, p. 255; Roberts, p. 84
thumb|left|alt=A formal group of Elizabeth in tiara and evening dress with eleven politicians in evening dress or national costume.|Elizabeth II and Commonwealth leaders at the 1960 Commonwealth Conference
The absence of a formal mechanism within the Conservative Party for choosing a leader meant that, following Eden's resignation, it fell to the Queen to decide whom to commission to form a government. Eden recommended that she consult Lord Salisbury, the Lord President of the Council. Lord Salisbury and Lord Kilmuir, the Lord Chancellor, consulted the British Cabinet, Winston Churchill, and the Chairman of the backbench 1922 Committee, resulting in the Queen appointing their recommended candidate: Harold Macmillan.Marr, pp. 175–176; Pimlott, pp. 256–260; Roberts, p. 84
The Suez crisis and the choice of Eden's successor led in 1957 to the first major personal criticism of the Queen. In a magazine, which he owned and edited,Lacey, p. 199; Shawcross, p. 75 Lord Altrincham accused her of being "out of touch".Lord Altrincham in National Review quoted by Brandreth, p. 374 and Roberts, p. 83 Altrincham was denounced by public figures and slapped by a member of the public appalled by his comments.Brandreth, p. 374; Pimlott, pp. 280–281; Shawcross, p. 76 Six years later, in 1963, Macmillan resigned and advised the Queen to appoint the Earl of Home as prime minister, advice that she followed.Hardman, p. 22; Pimlott, pp. 324–335; Roberts, p. 84 The Queen again came under criticism for appointing the prime minister on the advice of a small number of ministers or a single minister. In 1965, the Conservatives adopted a formal mechanism for electing a leader, thus relieving her of involvement.Roberts, p. 84
In 1957, she made a state visit to the United States, where she addressed the United Nations General Assembly on behalf of the Commonwealth. On the same tour, she opened the 23rd Canadian Parliament, becoming the first monarch of Canada to open a parliamentary session. Two years later, solely in her capacity as Queen of Canada, she revisited the United States and toured Canada.Bradford, p. 114 In 1961, she toured Cyprus, India, Pakistan, Nepal, and Iran.Pimlott, p. 303; Shawcross, p. 83 On a visit to Ghana the same year, she dismissed fears for her safety, even though her host, President Kwame Nkrumah, who had replaced her as head of state, was a target for assassins. Harold Macmillan wrote, "The Queen has been absolutely determined all through ... She is impatient of the attitude towards her to treat her as ... a film star ... She has indeed 'the heart and stomach of a man' ... She loves her duty and means to be a Queen."Macmillan, pp. 466–472 Before her tour through parts of Quebec in 1964, the press reported that extremists within the Quebec separatist movement were plotting Elizabeth's assassination. No attempt was made, but a riot did break out while she was in Montreal; the Queen's "calmness and courage in the face of the violence" was noted.Bousfield, p. 139
Elizabeth's pregnancies with Princes Andrew and Edward, in 1959 and 1963, mark the only times she has not performed the State Opening of the British parliament during her reign. In addition to performing traditional ceremonies, she also instituted new practices. Her first royal walkabout, meeting ordinary members of the public, took place during a tour of Australia and New Zealand in 1970.Hardman, pp. 213–214
Acceleration of decolonisation
thumb|upright|left|The Queen with Edward Heath (left) and First Lady Pat Nixon, 1970
The 1960s and 1970s saw an acceleration in the decolonisation of Africa and the Caribbean. Over 20 countries gained independence from Britain as part of a planned transition to self-government. In 1965, however, the Rhodesian Prime Minister, Ian Smith, in opposition to moves towards majority rule, declared unilateral independence from Britain while still expressing "loyalty and devotion" to Elizabeth. Although the Queen dismissed him in a formal declaration, and the international community applied sanctions against Rhodesia, his regime survived for over a decade.Bond, p. 66; Pimlott, pp. 345–354 As Britain's ties to its former empire weakened, the British government sought entry to the European Community, a goal it achieved in 1973.Bradford, pp. 123, 154, 176; Pimlott, pp. 301, 315–316, 415–417
In February 1974, the British Prime Minister, Edward Heath, advised the Queen to call a general election in the middle of her tour of the Austronesian Pacific Rim, requiring her to fly back to Britain.Bradford, p. 181; Pimlott, p. 418 The election resulted in a hung parliament; Heath's Conservatives were not the largest party, but could stay in office if they formed a coalition with the Liberals. Heath only resigned when discussions on forming a coalition foundered, after which the Queen asked the Leader of the Opposition, Labour's Harold Wilson, to form a government.Bradford, p. 181; Marr, p. 256; Pimlott, p. 419; Shawcross, pp. 109–110
A year later, at the height of the 1975 Australian constitutional crisis, the Australian Prime Minister, Gough Whitlam, was dismissed from his post by Governor-General Sir John Kerr, after the Opposition-controlled Senate rejected Whitlam's budget proposals.Bond, p. 96; Marr, p. 257; Pimlott, p. 427; Shawcross, p. 110 As Whitlam had a majority in the House of Representatives, Speaker Gordon Scholes appealed to the Queen to reverse Kerr's decision. She declined, stating that she would not interfere in decisions reserved by the Constitution of Australia for the governor-general.Pimlott, pp. 428–429 The crisis fuelled Australian republicanism.
Silver Jubilee
In 1977, Elizabeth marked the Silver Jubilee of her accession. Parties and events took place throughout the Commonwealth, many coinciding with her associated national and Commonwealth tours. The celebrations re-affirmed the Queen's popularity, despite virtually coincident negative press coverage of Princess Margaret's separation from her husband.Pimlott, p. 449 In 1978, the Queen endured a state visit to the United Kingdom by Romania's communist dictator, Nicolae Ceaușescu, and his wife, Elena,Hardman, p. 137; Roberts, pp. 88–89; Shawcross, p. 178 though privately she thought they had "blood on their hands".Elizabeth to her staff, quoted in Shawcross, p. 178 The following year brought two blows: one was the unmasking of Anthony Blunt, former Surveyor of the Queen's Pictures, as a communist spy; the other was the assassination of her relative and in-law Lord Mountbatten by the Provisional Irish Republican Army.Pimlott, pp. 336–337, 470–471; Roberts, pp. 88–89
According to Paul Martin, Sr., by the end of the 1970s the Queen was worried that the Crown "had little meaning for" Pierre Trudeau, the Canadian Prime Minister. Tony Benn said that the Queen found Trudeau "rather disappointing". Trudeau's supposed republicanism seemed to be confirmed by his antics, such as sliding down banisters at Buckingham Palace and pirouetting behind the Queen's back in 1977, and the removal of various Canadian royal symbols during his term of office. In 1980, Canadian politicians sent to London to discuss the patriation of the Canadian constitution found the Queen "better informed ... than any of the British politicians or bureaucrats". She was particularly interested after the failure of Bill C-60, which would have affected her role as head of state. Patriation removed the role of the British parliament from the Canadian constitution, but the monarchy was retained. Trudeau said in his memoirs that the Queen favoured his attempt to reform the constitution and that he was impressed by "the grace she displayed in public" and "the wisdom she showed in private".Trudeau, p. 313
1980s
thumb|right|alt=Elizabeth in red uniform on a black horse|Elizabeth riding Burmese at the 1986 Trooping the Colour ceremony
During the 1981 Trooping the Colour ceremony, six weeks before the wedding of Charles, Prince of Wales, and Lady Diana Spencer, six shots were fired at the Queen from close range as she rode down The Mall on her horse, Burmese. Police later discovered that the shots were blanks. The 17-year-old assailant, Marcus Sarjeant, was sentenced to five years in prison and released after three. The Queen's composure and skill in controlling her mount were widely praised.Lacey, p. 281; Pimlott, pp. 476–477; Shawcross, p. 192
From April to September 1982, the Queen was anxiousBond, p. 115; Pimlott, p. 487 but proudShawcross, p. 127 of her son, Prince Andrew, who was serving with British forces during the Falklands War. On 9 July, the Queen awoke in her bedroom at Buckingham Palace to find an intruder, Michael Fagan, in the room with her. Remaining calm, she made two calls to the Palace police switchboard and spoke to Fagan while he sat at the foot of her bed, until assistance arrived seven minutes later.Lacey, pp. 297–298; Pimlott, p. 491 After hosting US President Ronald Reagan at Windsor Castle in 1982 and visiting his California ranch in 1983, the Queen was angered when his administration ordered the invasion of Grenada, one of her Caribbean realms, without informing her.Bond, p. 188; Pimlott, p. 497
Intense media interest in the opinions and private lives of the royal family during the 1980s led to a series of sensational stories in the press, not all of which were entirely true.Pimlott, pp. 488–490 As Kelvin MacKenzie, editor of The Sun, told his staff: "Give me a Sunday for Monday splash on the Royals. Don't worry if it's not true—so long as there's not too much of a fuss about it afterwards."Pimlott, p. 521 Newspaper editor Donald Trelford wrote in The Observer of 21 September 1986: "The royal soap opera has now reached such a pitch of public interest that the boundary between fact and fiction has been lost sight of ... it is not just that some papers don't check their facts or accept denials: they don't care if the stories are true or not." It was reported, most notably in The Sunday Times of 20 July 1986, that the Queen was worried that Margaret Thatcher's economic policies fostered social divisions and was alarmed by high unemployment, a series of riots, the violence of a miners' strike, and Thatcher's refusal to apply sanctions against the apartheid regime in South Africa. The sources of the rumours included royal aide Michael Shea and Commonwealth Secretary-General Shridath Ramphal, but Shea claimed his remarks were taken out of context and embellished by speculation.Pimlott, pp. 503–515; see also Neil, pp. 195–207 and Shawcross, pp. 129–132 Thatcher reputedly said the Queen would vote for the Social Democratic Party – Thatcher's political opponents.Thatcher to Brian Walden quoted in Neil, p. 207; Andrew Neil quoted in Woodrow Wyatt's diary of 26 October 1990 Thatcher's biographer John Campbell claimed "the report was a piece of journalistic mischief-making".Campbell, p. 467 Belying reports of acrimony between them, Thatcher later conveyed her personal admiration for the Queen,Thatcher, p. 309 and the Queen gave two honours in her personal gift – membership in the Order of Merit and the Order of the Garter – to Thatcher after her replacement as prime minister by John Major.Roberts, p. 101; Shawcross, p. 139 Former Canadian Prime Minister Brian Mulroney said Elizabeth was a "behind the scenes force" in ending apartheid.
In 1987, in Canada, Elizabeth publicly supported politically divisive constitutional amendments, prompting criticism from opponents of the proposed changes, including Pierre Trudeau. The same year, the elected Fijian government was deposed in a military coup. As monarch of Fiji, Elizabeth supported the attempts of the Governor-General, Ratu Sir Penaia Ganilau, to assert executive power and negotiate a settlement. Coup leader Sitiveni Rabuka deposed Ganilau and declared Fiji a republic.Pimlott, pp. 515–516 By the start of 1991, republican feeling in Britain had risen because of press estimates of the Queen's private wealth – which were contradicted by the Palace – and reports of affairs and strained marriages among her extended family.Pimlott, pp. 519–534 The involvement of younger members of the royal family in the charity game show It's a Royal Knockout was ridiculed,Hardman, p. 81; Lacey, p. 307; Pimlott, pp. 522–526 and the Queen was the target of satire.Lacey, pp. 293–294; Pimlott, p. 541
1990s
In 1991, in the wake of coalition victory in the Gulf War, the Queen became the first British monarch to address a joint meeting of the United States Congress.Pimlott, p. 538
thumb|left|alt=Elizabeth, in formal dress, holds a pair of spectacles to her mouth in a thoughtful pose|Philip and Elizabeth,
In a speech on 24 November 1992, to mark the 40th anniversary of her accession, Elizabeth called 1992 her annus horribilis, meaning horrible year. In March, her second son, Prince Andrew, Duke of York, and his wife, Sarah, separated; in April, her daughter, Princess Anne, divorced Captain Mark Phillips;Lacey, p. 319; Marr, p. 315; Pimlott, pp. 550–551 during a state visit to Germany in October, angry demonstrators in Dresden threw eggs at her; and, in November, a large fire broke out at Windsor Castle, one of her official residences. The monarchy came under increased criticism and public scrutiny.Brandreth, p. 377; Pimlott, pp. 558–559; Roberts, p. 94; Shawcross, p. 204 In an unusually personal speech, the Queen said that any institution must expect criticism, but suggested it be done with "a touch of humour, gentleness and understanding".Brandreth, p. 377 Two days later, the Prime Minister, John Major, announced reforms to the royal finances planned since the previous year, including the Queen paying income tax from 1993 onwards, and a reduction in the civil list.Bradford, p. 229; Lacey, pp. 325–326; Pimlott, pp. 559–561 In December, Prince Charles and his wife, Diana, formally separated.Bradford, p. 226; Hardman, p. 96; Lacey, p. 328; Pimlott, p. 561 The year ended with a lawsuit as the Queen sued The Sun newspaper for breach of copyright when it published the text of her annual Christmas message two days before it was broadcast. The newspaper was forced to pay her legal fees and donated £200,000 to charity.Pimlott, p. 562
In the years to follow, public revelations on the state of Charles and Diana's marriage continued.Brandreth, p. 356; Pimlott, pp. 572–577; Roberts, p. 94; Shawcross, p. 168 Even though support for republicanism in Britain seemed higher than at any time in living memory, republicanism was still a minority viewpoint, and the Queen herself had high approval ratings.MORI poll for The Independent newspaper, March 1996, quoted in Pimlott, p. 578 and Criticism was focused on the institution of the monarchy itself and the Queen's wider family rather than her own behaviour and actions.Pimlott, p. 578 In consultation with her husband and the Prime Minister, John Major, as well as the Archbishop of Canterbury, George Carey, and her private secretary, Robert Fellowes, she wrote to Charles and Diana at the end of December 1995, saying that a divorce was desirable.Brandreth, p. 357; Pimlott, p. 577
In 1997, a year after the divorce, Diana was killed in a car crash in Paris. The Queen was on holiday with her extended family at Balmoral. Diana's two sons by Charles – Princes William and Harry – wanted to attend church and so the Queen and Prince Philip took them that morning.Brandreth, p. 358; Hardman, p. 101; Pimlott, p. 610 After that single public appearance, for five days the Queen and the Duke shielded their grandsons from the intense press interest by keeping them at Balmoral where they could grieve in private,Bond, p. 134; Brandreth, p. 358; Marr, p. 338; Pimlott, p. 615 but the royal family's seclusion and the failure to fly a flag at half-mast over Buckingham Palace caused public dismay.Bond, p. 134; Brandreth, p. 358; Lacey, pp. 6–7; Pimlott, p. 616; Roberts, p. 98; Shawcross, p. 8 Pressured by the hostile reaction, the Queen agreed to return to London and do a live television broadcast on 5 September, the day before Diana's funeral.Brandreth, pp. 358–359; Lacey, pp. 8–9; Pimlott, pp. 621–622 In the broadcast, she expressed admiration for Diana and her feelings "as a grandmother" for the two princes.Bond, p. 134; Brandreth, p. 359; Lacey, pp. 13–15; Pimlott, pp. 623–624 As a result, much of the public hostility evaporated.
Golden Jubilee
thumb|upright|left|Elizabeth II in 2007
In 2002, Elizabeth marked her Golden Jubilee. Her sister and mother died in February and March respectively, and the media speculated whether the Jubilee would be a success or a failure.Bond, p. 156; Bradford, pp. 248–249; Marr, pp. 349–350 She again undertook an extensive tour of her realms, which began in Jamaica in February, where she called the farewell banquet "memorable" after a power cut plunged the King's House, the official residence of the governor-general, into darkness.Brandreth, p. 31 As in 1977, there were street parties and commemorative events, and monuments were named to honour the occasion. A million people attended each day of the three-day main Jubilee celebration in London,Bond, pp. 166–167 and the enthusiasm shown by the public for the Queen was greater than many journalists had expected.Bond, p. 157
Though generally healthy throughout her life, in 2003 she had keyhole surgery on both knees. In October 2006, she missed the opening of the new Emirates Stadium because of a strained back muscle that had been troubling her since the summer.
In May 2007, The Daily Telegraph, citing unnamed sources, reported that the Queen was "exasperated and frustrated" by the policies of the British Prime Minister, Tony Blair, that she was concerned the British Armed Forces were overstretched in Iraq and Afghanistan, and that she had raised concerns over rural and countryside issues with Blair. She was, however, said to admire Blair's efforts to achieve peace in Northern Ireland. On 20 March 2008, at the Church of Ireland St Patrick's Cathedral, Armagh, the Queen attended the first Maundy service held outside England and Wales. At the invitation of the Irish President, Mary McAleese, the Queen made the first state visit to the Republic of Ireland by a British monarch in May 2011.Bradford, p. 253
The Queen addressed the United Nations for a second time in 2010, again in her capacity as Queen of all Commonwealth realms and Head of the Commonwealth. The UN Secretary General, Ban Ki-moon, introduced her as "an anchor for our age". During her visit to New York, which followed a tour of Canada, she officially opened a memorial garden for the British victims of the September 11 attacks. The Queen's visit to Australia in October 2011 – her sixteenth visit since 1954 – was called her "farewell tour" in the press because of her age.
Diamond Jubilee and beyond
thumb|Elizabeth visiting Birmingham in as part of her Diamond Jubilee tour
Her Diamond Jubilee in 2012 marked 60 years on the throne, and celebrations were held throughout her realms, the wider Commonwealth, and beyond. In a message released on Accession Day, Elizabeth wrote:
She and her husband undertook an extensive tour of the United Kingdom, while her children and grandchildren embarked on royal tours of other Commonwealth states on her behalf. On 4 June, Jubilee beacons were lit around the world. On 18 December, she became the first British sovereign to attend a peacetime Cabinet meeting since George III in 1781.
left|thumb|upright|The Queen visiting the Home Office in 2015
The Queen, who opened the 1976 Summer Olympics in Montreal, also opened the 2012 Summer Olympics and Paralympics in London, making her the first head of state to open two Olympic Games in two different countries. For the London Olympics, she played herself in a short film as part of the opening ceremony, alongside Daniel Craig as James Bond. On 4 April 2013, she received an honorary BAFTA for her patronage of the film industry and was called "the most memorable Bond girl yet" at the award ceremony.
On 3 March 2013, Elizabeth was admitted to King Edward VII's Hospital as a precaution after developing symptoms of gastroenteritis. She returned to Buckingham Palace the following day. A week later, she signed the new Commonwealth charter. Because of her age and the need for her to limit travelling, in 2013 she chose not to attend the biennial meeting of Commonwealth heads of government for the first time in 40 years. She was represented at the summit in Sri Lanka by her son, Prince Charles.
The Queen surpassed her great-great-grandmother, Queen Victoria, to become the longest-lived British monarch in December 2007, and the longest-reigning British monarch on 9 September 2015. She was celebrated in Canada as the "longest-reigning sovereign in Canada's modern era". (King Louis XIV of France reigned over Canada (New France) for longer.) She is also the longest-reigning queen regnant in history, and the world's oldest reigning monarch. She became the longest-serving current head of state following the death of King Bhumibol of Thailand on 13 October 2016.
The Queen does not intend to abdicate,Brandreth, pp. 370–371; Marr, p. 395 though Prince Charles is expected to take on more of her workload as Elizabeth, who celebrated her ninetieth birthday in 2016, carries out fewer public engagements.Marr, p. 395
Public perception and character
Since Elizabeth rarely gives interviews, little is known of her personal feelings. As a constitutional monarch, she has not expressed her own political opinions in a public forum. She does have a deep sense of religious and civic duty, and takes her coronation oath seriously.Shawcross, pp. 194–195 Aside from her official religious role as Supreme Governor of the established Church of England, she is personally a member of that church and the national Church of Scotland. She has demonstrated support for inter-faith relations and has met with leaders of other churches and religions, including five popes: Pius XII, John XXIII, John Paul II, Benedict XVI and Francis. A personal note about her faith often features in her annual Christmas message broadcast to the Commonwealth. In 2000, she spoke about the theological significance of the millennium marking the 2000th anniversary of the birth of Jesus: "To many of us, our beliefs are of fundamental importance. For me the teachings of Christ and my own personal accountability before God provide a framework in which I try to lead my life. I, like so many of you, have drawn great comfort in difficult times from Christ's words and example."Shawcross, pp. 236–237
thumb|left|alt=Elizabeth and Ronald Reagan on black horses. He bare-headed; she in a headscarf; both in tweeds, jodhpurs and riding boots.|Elizabeth II and Ronald Reagan riding at Windsor
She is patron of over 600 organisations and charities. Her main leisure interests include equestrianism and dogs, especially her Pembroke Welsh Corgis. Her lifelong love of corgis began in 1933 with Dookie, the first corgi owned by her family. Scenes of a relaxed, informal home life have occasionally been witnessed; she and her family, from time to time, prepare a meal together and do the washing up afterwards.
In the 1950s, as a young woman at the start of her reign, Elizabeth was depicted as a glamorous "fairytale Queen".Bond, p. 22 After the trauma of the Second World War, it was a time of hope, a period of progress and achievement heralding a "new Elizabethan age".Bond, p. 35; Pimlott, p. 180; Roberts, p. 82; Shawcross, p. 50 Lord Altrincham's accusation in 1957 that her speeches sounded like those of a "priggish schoolgirl" was an extremely rare criticism.Bond, p. 35; Pimlott, p. 280; Shawcross, p. 76 In the late 1960s, attempts to portray a more modern image of the monarchy were made in the television documentary Royal Family and by televising Prince Charles's investiture as Prince of Wales.Bond, pp. 66–67, 84, 87–89; Bradford, pp. 160–163; Hardman, pp. 22, 210–213; Lacey, pp. 222–226; Marr, p. 237; Pimlott, pp. 378–392; Roberts, pp. 84–86 In public, she took to wearing mostly solid-colour overcoats and decorative hats, which allow her to be seen easily in a crowd.
At her Silver Jubilee in 1977, the crowds and celebrations were genuinely enthusiastic,Bond, p. 97; Bradford, p. 189; Pimlott, pp. 449–450; Roberts, p. 87; Shawcross, pp. 114–117 but in the 1980s, public criticism of the royal family increased, as the personal and working lives of Elizabeth's children came under media scrutiny.Bond, p. 117; Roberts, p. 91 Elizabeth's popularity sank to a low point in the 1990s. Under pressure from public opinion, she began to pay income tax for the first time, and Buckingham Palace was opened to the public.Bond, p. 134; Pimlott, pp. 556–561, 570 Discontent with the monarchy reached its peak on the death of Diana, Princess of Wales, though Elizabeth's personal popularity and support for the monarchy rebounded after her live television broadcast to the world five days after Diana's death.Bond, p. 134; Pimlott, pp. 624–625
In November 1999, a referendum in Australia on the future of the Australian monarchy favoured its retention in preference to an indirectly elected head of state.Hardman, p. 310; Lacey, p. 387; Roberts, p. 101; Shawcross, p. 218 Polls in Britain in 2006 and 2007 revealed strong support for Elizabeth, and in 2012, her Diamond Jubilee year, approval ratings hit 90 percent. Referenda in Tuvalu in 2008 and Saint Vincent and the Grenadines in 2009 both rejected proposals to become republics.
Elizabeth has been portrayed in a variety of media by many notable artists, including painters Pietro Annigoni, Peter Blake, Chinwe Chukwuogo-Roy, Terence Cuneo, Lucian Freud, Damien Hirst, Juliet Pannett, and Tai-Shan Schierenberg. Notable photographers of Elizabeth have included Cecil Beaton, Yousuf Karsh, Annie Leibovitz, Lord Lichfield, Terry O'Neill, John Swannell, and Dorothy Wilding. The first official portrait of Elizabeth was taken by Marcus Adams in 1926.
Finances
thumb|right|alt=View of Sandringham House from the south bank of the Upper Lake|Sandringham House, Elizabeth's private residence in Norfolk
Elizabeth's personal fortune has been the subject of speculation for many years. Jock Colville, a former private secretary of hers and a director of her bank, Coutts, estimated her wealth in 1971 at £2 million.Pimlott, p. 401 In 1993, Buckingham Palace called estimates of £100 million "grossly overstated".Lord Chamberlain Lord Airlie quoted in Hoey, p. 225 and Pimlott, p. 561 She inherited an estimated £70 million estate from her mother in 2002. The Sunday Times Rich List 2015 estimated her private wealth at £340 million, making her the 302nd richest person in the UK.
The Royal Collection, which includes thousands of historic works of art and the Crown Jewels, is not owned by the Queen personally but is held in trust, as are her official residences, such as Buckingham Palace and Windsor Castle, and the Duchy of Lancaster, a property portfolio valued in 2014 at £442 million. Sandringham House and Balmoral Castle are privately owned by the Queen. The British Crown Estate – with holdings of £9.4 billion in 2014 – is held in trust by the sovereign and cannot be sold or owned by Elizabeth in a private capacity.
Titles, styles, honours, and arms
Titles and styles
Elizabeth has held many titles and honorary military positions throughout the Commonwealth, is Sovereign of many orders in her own countries, and has received honours and awards from around the world. In each of her realms she has a distinct title that follows a similar formula: Queen of Jamaica and her other realms and territories in Jamaica, Queen of Australia and her other realms and territories in Australia, etc. In the Channel Islands and Isle of Man, which are Crown dependencies rather than separate realms, she is known as Duke of Normandy and Lord of Mann, respectively. Additional styles include Defender of the Faith and Duke of Lancaster. In conversation with the Queen, the practice is to address her first as Your Majesty and thereafter as Ma'am.
Arms
From 21 April 1944 until her accession, Elizabeth's arms consisted of a lozenge bearing the royal coat of arms of the United Kingdom differenced with a label of three points argent, the centre point bearing a Tudor rose and the first and third a cross of St George. Upon her accession, she inherited the various arms her father held as sovereign. The Queen also possesses royal standards and personal flags for use in the United Kingdom, Canada, Australia, New Zealand, Jamaica, Barbados, and elsewhere.
Coats of arms: Princess Elizabeth (1944–1947); Princess Elizabeth, Duchess of Edinburgh (1947–1952); Elizabeth II in England, Wales and Northern Ireland; Elizabeth II in Scotland; Elizabeth II in Canada (one of three versions used in her reign).
Issue
Prince Charles, Prince of Wales (born 14 November 1948). Married Lady Diana Spencer on 29 July 1981; divorced 28 August 1996. Married Camilla Parker Bowles on 9 April 2005. Children: Prince William, Duke of Cambridge (whose children are Prince George of Cambridge and Princess Charlotte of Cambridge) and Prince Henry of Wales.
Princess Anne, Princess Royal (born 15 August 1950). Married Mark Phillips on 14 November 1973; divorced 28 April 1992. Married Timothy Laurence on 12 December 1992. Children: Peter Phillips (whose children are Savannah Phillips and Isla Phillips) and Zara Tindall (whose child is Mia Tindall).
Prince Andrew, Duke of York (born 19 February 1960). Married Sarah Ferguson on 23 July 1986; divorced 30 May 1996. Children: Princess Beatrice of York and Princess Eugenie of York.
Prince Edward, Earl of Wessex (born 10 March 1964). Married Sophie Rhys-Jones on 19 June 1999. Children: Lady Louise Windsor and James, Viscount Severn.
Ancestry
See also
Household of Elizabeth II
List of things named after Queen Elizabeth II
Notes
References
Bibliography
Bond, Jennie (2006). Elizabeth: Eighty Glorious Years. London: Carlton Publishing Group. ISBN 1-84442-260-7
Bousfield, Arthur; Toffoli, Gary (2002). Fifty Years the Queen. Toronto: Dundurn Press. ISBN 978-1-55002-360-2
Bradford, Sarah (2012). Queen Elizabeth II: Her Life in Our Times. London: Penguin. ISBN 978-0-670-91911-6
Brandreth, Gyles (2004). Philip and Elizabeth: Portrait of a Marriage. London: Century. ISBN 0-7126-6103-4
Briggs, Asa (1995). The History of Broadcasting in the United Kingdom: Volume 4. Oxford: Oxford University Press. ISBN 0-19-212967-8
Campbell, John (2003). Margaret Thatcher: The Iron Lady. London: Jonathan Cape. ISBN 0-224-06156-9
Crawford, Marion (1950). The Little Princesses. London: Cassell & Co.
Hardman, Robert (2011). Our Queen. London: Hutchinson. ISBN 978-0-09-193689-1
Heald, Tim (2007). Princess Margaret: A Life Unravelled. London: Weidenfeld & Nicolson. ISBN 978-0-297-84820-2
Hoey, Brian (2002). Her Majesty: Fifty Regal Years. London: HarperCollins. ISBN 0-00-653136-9
Lacey, Robert (2002). Royal: Her Majesty Queen Elizabeth II. London: Little, Brown. ISBN 0-316-85940-0
Macmillan, Harold (1972). Pointing The Way 1959–1961. London: Macmillan. ISBN 0-333-12411-1
Marr, Andrew (2011). The Diamond Queen: Elizabeth II and Her People. London: Macmillan. ISBN 978-0-230-74852-1
Neil, Andrew (1996). Full Disclosure. London: Macmillan. ISBN 0-333-64682-7
Nicolson, Sir Harold (1952). King George the Fifth: His Life and Reign. London: Constable & Co.
Petropoulos, Jonathan (2006). Royals and the Reich: the princes von Hessen in Nazi Germany. New York: Oxford University Press. ISBN 0-19-516133-5
Pimlott, Ben (2001). The Queen: Elizabeth II and the Monarchy. London: HarperCollins. ISBN 0-00-255494-1
Roberts, Andrew (2000). The House of Windsor. Edited by Antonia Fraser. London: Cassell & Co. ISBN 0-304-35406-6
Shawcross, William (2002). Queen and Country. Toronto: McClelland & Stewart. ISBN 0-7710-8056-5
Thatcher, Margaret (1993). The Downing Street Years. London: HarperCollins. ISBN 0-00-255049-0
Trudeau, Pierre Elliott (1993). Memoirs. Toronto: McClelland & Stewart. ISBN 978-0-7710-8588-8
Williamson, David (1987). Debrett's Kings and Queens of Britain. Webb & Bower. ISBN 0-86350-101-X
Wyatt, Woodrow (1999). The Journals of Woodrow Wyatt: Volume II. Edited by Sarah Curtis. London: Macmillan. ISBN 0-333-77405-1
External links
The Queen at the Royal Family website
Category:1926 births
Category:Auxiliary Territorial Service officers
Category:British Anglicans
Category:British philanthropists
Category:British Presbyterians
Category:British princesses
Category:British racehorse owners and breeders
Category:British women in World War II
Category:Girlguiding UK
Category:Heads of state of Antigua and Barbuda
Category:Heads of state of the Bahamas
Category:Heads of state of Barbados
Category:Heads of state of Belize
Category:Heads of state of Canada
Category:Heads of state of Fiji
Category:Heads of state of the Gambia
Category:Heads of state of Ghana
Category:Heads of state of Grenada
Category:Heads of state of Guyana
Category:Heads of state of Jamaica
Category:Heads of state of Kenya
Category:Heads of state of Malawi
Category:Heads of state of Malta
Category:Heads of state of Mauritius
Category:Heads of state of New Zealand
Category:Heads of state of Nigeria
Category:Heads of state of Pakistan
Category:Heads of state of Papua New Guinea
Category:Heads of state of Saint Kitts and Nevis
Category:Heads of state of Saint Lucia
Category:Heads of state of Saint Vincent and the Grenadines
Category:Heads of state of Sierra Leone
Category:Heads of state of the Solomon Islands
Category:Heads of state of Tanganyika
Category:Heads of state of Trinidad and Tobago
Category:Heads of state of Tuvalu
Category:Heads of state of Uganda
Category:Heads of the Commonwealth
Category:Honorary air commodores
Category:House of Windsor
Category:Jewellery collectors
Category:Living people
Category:Monarchs of Australia
Category:Monarchs of Ceylon
Category:Monarchs of South Africa
Category:Monarchs of the United Kingdom
Category:People from Mayfair
Category:Protestant monarchs
Category:Queens regnant in the British Isles
Category:Women in the Canadian armed services | 12,153,654 | 2017-01 |
Liberia | Liberia, officially the Republic of Liberia, is a country on the West African coast. Liberia means "Land of the Free" in Latin. It is bordered by Sierra Leone to its west, Guinea to its north and Ivory Coast to its east. It has a population of 4,503,000 people. English is the official language and over 20 indigenous languages are spoken, representing the numerous tribes who make up more than 95% of the population. The country's capital and largest city is Monrovia.
Forests on the coastline are composed mostly of salt-tolerant mangrove trees, while the more sparsely populated inland has forests opening onto a plateau of drier grasslands. The climate is equatorial, with significant rainfall during the May–October rainy season and harsh harmattan winds the remainder of the year. Liberia possesses about forty percent of the remaining Upper Guinean rainforest. It was an important producer of rubber in the early 20th century.
The Republic of Liberia began as a settlement of the American Colonization Society (ACS), who believed blacks would face better chances for freedom in Africa than in the United States. The country declared its independence on July 26, 1847. The U.S. did not recognize Liberia's independence until during the American Civil War on February 5, 1862. Between January 7, 1822 and the American Civil War, more than 15,000 freed and free-born black Americans, who faced legislated limits in the U.S., and 3,198 Afro-Caribbeans, relocated to the settlement. The black American settlers carried their culture with them to Liberia. The Liberian constitution and flag were modeled after those of the U.S. On January 3, 1848, Joseph Jenkins Roberts, a wealthy, free-born black American from Virginia who settled in Liberia, was elected as Liberia's first president after the people proclaimed independence."July 26, 1847 Liberian independence proclaimed", This Day In History, History website.
Liberia is the only African republic to have self-proclaimed independence without gaining independence through revolt from any other nation, being Africa's first and oldest modern republic. Liberia maintained and kept its independence during the European colonial era. During World War II, Liberia supported the United States war efforts against Germany and in turn the U.S. invested in considerable infrastructure in Liberia to help its war effort, which also aided the country in modernizing and improving its major air transportation facilities. In addition, President William Tubman encouraged economic changes. Internationally, Liberia was a founding member of the League of Nations, United Nations and the Organisation of African Unity.
Political tensions from the rule of William R. Tolbert resulted in a military coup in 1980 that overthrew his leadership soon after his death, marking the beginning of years-long political instability. Five years of military rule by the People's Redemption Council and five years of civilian rule by the National Democratic Party of Liberia were followed by the First and Second Liberian Civil Wars. These resulted in the deaths and displacement of more than half a million people and devastated Liberia's economy. A peace agreement in 2003 led to democratic elections in 2005. Recovery proceeds but about 85% of the population live below the international poverty line. Liberia's economic and political stability was threatened in the 2010s by an Ebola virus epidemic; it originated in Guinea in December 2013, entered Liberia in March 2014, and was declared officially ended on May 8, 2015.
History
thumb|left|350px|A European map of West Africa and the Grain Coast, 1736. It has the archaic mapping designation of Negroland.
The Pepper Coast, also known as the Grain Coast, has been inhabited by indigenous peoples of Africa at least as far back as the 12th century. Mende-speaking people expanded westward from the Sudan, forcing many smaller ethnic groups southward toward the Atlantic Ocean. The Dei, Bassa, Kru, Gola and Kissi were some of the earliest documented peoples in the area.
This influx was compounded by the decline of the Western Sudanic Mali Empire in 1375 and the Songhai Empire in 1591. Additionally, as inland regions underwent desertification, inhabitants moved to the wetter coast. These new inhabitants brought skills such as cotton spinning, cloth weaving, iron smelting, rice and sorghum cultivation, and social and political institutions from the Mali and Songhai empires. Shortly after the Mane conquered the region, the Vai people of the former Mali Empire immigrated into the Grand Cape Mount County region. The ethnic Kru opposed the influx of Vai, forming an alliance with the Mane to stop further influx of Vai.
People along the coast built canoes and traded with other West Africans from Cap-Vert to the Gold Coast. Arab traders entered the region from the north, and a long-established slave trade took captives to north and east Africa.
Between 1461 and the late 17th century, Portuguese, Dutch and British traders had contacts and trading posts in the region. The Portuguese named the area Costa da Pimenta ("Pepper Coast") but it later came to be known as the Grain Coast, due to the abundance of melegueta pepper grains. European traders would barter commodities and goods with local people.
Early settlement
In the United States, there was a movement to resettle free-born blacks and freed slaves who faced racial discrimination in the form of political disenfranchisement and the denial of civil, religious and social privileges.Howard Brotz, ed., African American Social & Political Thought 1850 - 1920 (New Brunswick: Transaction Publishers, 1996), 38-39. Most whites and later a small cadre of black nationalists believed that blacks would face better chances for freedom in Africa than in the U.S."Background on conflict in Liberia", Friends Committee on National Legislation, July 30, 2003 The American Colonization Society was founded in 1816 in Washington, DC for this purpose, by a group of prominent politicians and slaveholders. But its membership grew to include mostly people who supported abolition of slavery. Slaveholders wanted to get free people of color out of the South, where they were thought to threaten the stability of the slave societies. Some abolitionists collaborated on relocation of free blacks, as they were discouraged by racial discrimination against them in the North and believed they would never be accepted in the larger society.Maggie Montesinos Sale (1997). The Slumbering Volcano: American Slave Ship Revolts and the Production of Rebellious Masculinity, Duke University Press, 1997, p. 264. ISBN 0-8223-1992-6 Most African-Americans, who were native-born by this time, wanted to work toward justice in the United States rather than emigrate. Leading activists in the North strongly opposed the ACS, but some free blacks were ready to try a different environment.
In 1822, the American Colonization Society began sending African-American volunteers to the Pepper Coast to establish a colony for freed African-Americans. By 1867, the ACS (and state-related chapters) had assisted in the migration of more than 13,000 African Americans to Liberia. These free African-Americans and their descendants married within their community and came to identify as Americo-Liberians. Many were of mixed race and educated in American culture; they did not identify with the indigenous natives of the tribes they encountered. They intermarried largely within the colonial community, developing an ethnic group that had a cultural tradition infused with American notions of political republicanism and Protestant Christianity.
thumb|259x259px|Map of Liberia Colony in the 1830s, created by the ACS, and also showing Mississippi Colony and other state-sponsored colonies.
The ACS, the private organization supported by prominent American politicians such as Abraham Lincoln, Henry Clay, and James Monroe, believed repatriation of free blacks was preferable to widespread emancipation of slaves. Similar state-based organizations established colonies in Mississippi-in-Africa and the Republic of Maryland, which were later annexed by Liberia.
The Americo-Liberian settlers did not identify with the indigenous peoples they encountered, especially those in communities of the more isolated "bush." They knew nothing of their cultures, languages or animist religion. Encounters with tribal Africans in the bush often developed as violent confrontations. The colonial settlements were raided by the Kru and Grebo from their inland chiefdoms. Feeling set apart from and superior to the indigenous peoples because of their culture and education, the Americo-Liberians developed as a small elite that held on to political power. It excluded the indigenous tribesmen from birthright citizenship in their own lands until 1904, in a repetition of the United States' treatment of Native Americans. Because of ethnocentrism and the cultural gap, the Americo-Liberians envisioned creating a western-style state to which the tribesmen should assimilate. They encouraged religious organizations to set up missions and schools to educate the indigenous peoples.
Government
On July 26, 1847, the settlers issued a Declaration of Independence and promulgated a constitution. Based on the political principles denoted in the United States Constitution, it established the independent Republic of Liberia.
The leadership of the new nation consisted largely of the Americo-Liberians, who initially established political and economic dominance in the coastal areas that had been purchased by the ACS; they maintained relations with United States contacts in developing these areas and the resulting trade. Their passage of the 1865 Ports of Entry Act prohibited foreign commerce with the inland tribes, ostensibly to "encourage the growth of civilized values" before such trade was allowed.
By 1877, the Americo-Liberian True Whig Party was the most powerful political force in the country. It was made up primarily of people from the Americo-Liberian ethnic group, who maintained social, economic and political dominance well into the 20th century, repeating patterns of European colonists in other nations in Africa. Competition for office was usually contained within the party; a party nomination virtually ensured election.
Pressure from the United Kingdom, which controlled Sierra Leone to the west, and France with its interests in the north and east led to a loss of Liberia's claims to extensive territories. Both Sierra Leone and the Ivory Coast annexed some territories. Liberia struggled to attract investment in order to develop infrastructure and a larger, industrial economy.
There was a decline in production of Liberian goods in the late 19th century, and the government struggled financially, resulting in indebtedness on a series of international loans.
20th century
thumb|250px|Charles D. B. King, 17th President of Liberia (1920–1930), with his entourage on the steps of the Peace Palace, The Hague (the Netherlands), 1927.
American and other international interests emphasized resource extraction, with rubber production a major industry in the early 20th century.
In the mid-20th century, Liberia gradually began to modernize with American assistance. During World War II, the United States made major infrastructure improvements to support its military efforts in Africa and Europe. It built the Freeport of Monrovia and Roberts International Airport under the Lend-Lease program before its entry into the world war.
After the war, President William Tubman encouraged foreign investment in the country. Liberia had the second-highest rate of economic growth in the world during the 1950s.
Liberia also began to take a more active role in international affairs. It was a founding member of the United Nations in 1945 and became a vocal critic of the South African apartheid regime. Liberia also served as a proponent both of African independence from the European colonial powers and of Pan-Africanism, and helped to fund the Organisation of African Unity.
thumb|250px|Samuel Doe with Caspar Weinberger during a visit to the United States, 1982
thumb|250px|A technical in Monrovia during the Second Liberian Civil War.
On April 12, 1980, a military coup led by Master Sergeant Samuel Doe of the Krahn ethnic group overthrew and killed President William R. Tolbert, Jr. Doe and the other plotters later executed a majority of Tolbert's cabinet and other Americo-Liberian government officials and True Whig Party members. The coup leaders formed the People's Redemption Council (PRC) to govern the country. A strategic Cold War ally of the West, Doe received significant financial backing from the United States while critics condemned the PRC for corruption and political repression.
After Liberia adopted a new constitution in 1985, Doe was elected president in subsequent elections, which were internationally condemned as fraudulent. On November 12, 1985, a failed counter-coup was launched by Thomas Quiwonkpa, whose soldiers briefly occupied the national radio station. Government repression intensified in response, as Doe's troops retaliated by executing members of the Gio and Mano ethnic groups in Nimba County.
The National Patriotic Front of Liberia, a rebel group led by Charles Taylor, launched an insurrection in December 1989 against Doe's government with the backing of neighboring countries such as Burkina Faso and Ivory Coast. This triggered the First Liberian Civil War. By September 1990, Doe's forces controlled only a small area just outside the capital, and Doe was captured and executed in that month by rebel forces.
The rebels soon split into various factions fighting one another. The Economic Community Monitoring Group under the Economic Community of West African States organized a military task force to intervene in the crisis. Between 1989 and 1996, one of Africa's bloodiest civil wars was fought, claiming the lives of more than 200,000 Liberians and displacing a million others into refugee camps in neighboring countries. A peace deal between warring parties was reached in 1995, leading to Taylor's election as president in 1997.
Under Taylor's leadership, Liberia became internationally known as a pariah state due to its use of blood diamonds and illegal timber exports to fund the Revolutionary United Front in the Sierra Leone Civil War. The Second Liberian Civil War began in 1999 when Liberians United for Reconciliation and Democracy, a rebel group based in the northwest of the country, launched an armed insurrection against Taylor.
2000s
In March 2003, a second rebel group, Movement for Democracy in Liberia, began launching attacks against Taylor from the southeast. Peace talks between the factions began in Accra in June of that year, and Taylor was indicted by the Special Court for Sierra Leone for crimes against humanity the same month. By July 2003, the rebels had launched an assault on Monrovia. Under heavy pressure from the international community and the domestic Women of Liberia Mass Action for Peace movement, Taylor resigned in August 2003 and went into exile in Nigeria.
A peace deal was signed later that month. The United Nations Mission in Liberia began arriving in September 2003 to provide security and monitor the peace accord, and an interim government took power the following October.
The subsequent 2005 elections were internationally regarded as the most free and fair in Liberian history. Ellen Johnson Sirleaf, a Harvard-trained economist and former Minister of Finance, was elected as the first female president in Africa. Upon her inauguration, Sirleaf requested the extradition of Taylor from Nigeria and transferred him to the SCSL for trial in The Hague.
In 2006, the government established a Truth and Reconciliation Commission to address the causes and crimes of the civil war.
Geography
thumb|650px|A map of Liberia
thumb|Liberia map of Köppen climate classification.
Liberia is situated in West Africa, bordering the North Atlantic Ocean to the country's southwest. It lies between latitudes 4° and 9°N, and longitudes 7° and 12°W.
The landscape is characterized by mostly flat to rolling coastal plains that contain mangroves and swamps, which rise to a rolling plateau and low mountains in the northeast.
Tropical rainforests cover the hills, while elephant grass and semi-deciduous forests make up the dominant vegetation in the northern sections. The equatorial climate is hot year-round with heavy rainfall from May to October, apart from a short interlude in mid-July to August. During the winter months of November to March, dry dust-laden harmattan winds blow inland, causing many problems for residents.
Liberia's watershed tends to move in a southwestern pattern towards the sea as new rains move down the forested plateau off the inland mountain range of Guinée Forestière, in Guinea. Cape Mount near the border with Sierra Leone receives the most precipitation in the nation.
Liberia's main northwestern boundary is traversed by the Mano River while its southeast limits are bounded by the Cavalla River. Liberia's three largest rivers are St. Paul exiting near Monrovia, the river St. John at Buchanan and the Cestos River, all of which flow into the Atlantic. The Cavalla is the longest river in the nation.
The highest point wholly within Liberia is Mount Wuteve, in the northwestern Liberia range of the West Africa Mountains and the Guinea Highlands. However, Mount Nimba near Yekepa is higher, but it is not wholly within Liberia, as Nimba shares a border with Guinea and Ivory Coast and is their tallest mountain as well.Financial Time's World Desk Reference (2004) Dorling Kindersley Publishing, p. 368.
Counties and districts
thumb|250px|A view of a lake in Bomi County
Liberia is divided into fifteen counties, which, in turn, are subdivided into a total of 90 districts and further subdivided into clans. The oldest counties are Grand Bassa and Montserrado, both founded in 1839 prior to Liberian independence. Gbarpolu is the newest county, created in 2001. Nimba is the largest of the counties in size, while Montserrado is the smallest. Montserrado is also the most populous county with 1,144,806 residents as of the 2008 census.
The fifteen counties are administered by superintendents appointed by the president. The Constitution calls for the election of various chiefs at the county and local level, but these elections have not taken place since 1985 due to war and financial constraints.
County (capital) – population (2008 census), number of districts, date created:
1. Bomi (Tubmanburg) – 82,036, 4 districts, 1984
2. Bong (Gbarnga) – 328,919, 12 districts, 1964
3. Gbarpolu (Bopolu) – 83,758, 6 districts, 2001
4. Grand Bassa (Buchanan) – 224,839, 8 districts, 1839
5. Grand Cape Mount (Robertsport) – 129,055, 5 districts, 1844
6. Grand Gedeh (Zwedru) – 126,146, 3 districts, 1964
7. Grand Kru (Barclayville) – 57,106, 18 districts, 1984
8. Lofa (Voinjama) – 270,114, 6 districts, 1964
9. Margibi (Kakata) – 199,689, 4 districts, 1985
10. Maryland (Harper) – 136,404, 2 districts, 1857
11. Montserrado (Bensonville) – 1,144,806, 4 districts, 1839
12. Nimba (Sanniquellie) – 468,088, 6 districts, 1964
13. Rivercess (Rivercess) – 65,862, 6 districts, 1985
14. River Gee (Fish Town) – 67,318, 6 districts, 2000
15. Sinoe (Greenville) – 104,932, 17 districts, 1843
Environmental issues
thumb|250px|Pygmy hippos are among the species illegally hunted for food in Liberia. The World Conservation Union estimates that there are fewer than 3,000 pygmy hippos remaining in the wild.
Endangered species are hunted for human consumption as bushmeat in Liberia. Species hunted for food in Liberia include elephants, pygmy hippopotamus, chimpanzees, leopards, duikers, and other monkeys. Bushmeat is often exported to neighboring Sierra Leone and Ivory Coast, despite a ban on the cross-border sale of wild animals.Anne Look, "Poaching in Liberia's Forests Threatens Rare Animals", Voice of America News, May 8, 2012.
Bushmeat is widely eaten in Liberia, and is considered a delicacy. A 2004 public opinion survey found that bushmeat ranked second behind fish amongst residents of the capital Monrovia as a preferred source of protein. Of households where bushmeat was served, 80% of residents said they cooked it "once in a while," while 13% cooked it once a week and 7% cooked bushmeat daily. The survey was conducted during the last civil war, and bushmeat consumption is now believed to be far higher.Wynfred Russell, "Extinction is forever: A crisis that is Liberia's endangered wildlife", Front Page Africa, January 15, 2014.
thumb|left|250px|Loggers and logging truck, early 1960s
Liberia is a global biodiversity hotspot – a significant reservoir of biodiversity that is under threat from humans. Liberia hosts the last remaining viable populations of certain species including western chimpanzees, forest elephants and leopards. "Liberia signs 'transformational' deal to stem deforestation", Matt McGrath, BBC News, 23 September 2014. Liberia contains a significant portion of West Africa's remaining rainforest, with about 43% of the Upper Guinean forest – an important forest that spans several West African nations.
Slash-and-burn agriculture is one of the human activities eroding Liberia's natural forests."Restoring the Battered and Broken Environment of Liberia One of the Keys to a New and Sustainable Future", United Nations Environment Program, February 13, 2014. A 2004 UN report estimated that 99 per cent of Liberians burnt charcoal and fuel wood for cooking and heating, resulting in deforestation.
Illegal logging has increased in Liberia since the end of the Second Civil War in 2003. In 2012 President Ellen Johnson Sirleaf granted licenses to companies to cut down 58% of all the primary rainforest left in Liberia. After international protests, many of those logging permits were canceled. Liberia and Norway struck an agreement in September 2014 whereby Liberia ceases all logging in exchange for $150 million in development aid.
Pollution is a significant issue in Liberia's capital city Monrovia."Monrovia's 'Never-Ending' Pollution Issues In 2013", Edwin M. Fayia III, The Liberian Observer, December 30, 2014. Since 2006 the international community has paid for all garbage collection and disposal in Monrovia via the World Bank."Digging Out Monrovia from the Waste of War", The World Bank – International Development Association, August 2009.
Politics
thumb|President Ellen Johnson Sirleaf
The government of Liberia, modeled on the government of the United States, is a unitary constitutional republic and representative democracy as established by the Constitution. The government has three co-equal branches of government: the executive, headed by the president; the legislative, consisting of the bicameral Legislature of Liberia; and the judicial, consisting of the Supreme Court and several lower courts.
The president serves as head of government, head of state and the commander-in-chief of the Armed Forces of Liberia. Among the other duties of the president are to sign or veto legislative bills, grant pardons, and appoint Cabinet members, judges and other public officials. Together with the vice president, the president is elected to a six-year term by majority vote in a two-round system and can serve up to two terms in office.
The Legislature is composed of the Senate and the House of Representatives. The House, led by a speaker, has 73 members apportioned among the 15 counties on the basis of the national census, with each county receiving a minimum of two members. Each House member represents an electoral district within a county as drawn by the National Elections Commission and is elected by a plurality of the popular vote of their district to a six-year term. The Senate is made up of two senators from each county for a total of 30 senators. Senators serve nine-year terms and are elected at-large by a plurality of the popular vote. The vice president serves as the President of the Senate, with a President pro tempore serving in their absence.
Liberia's highest judicial authority is the Supreme Court, made up of five members and headed by the Chief Justice of Liberia. Members are nominated to the court by the president and are confirmed by the Senate, serving until the age of 70. The judiciary is further divided into circuit and speciality courts, magistrate courts and justices of the peace. The judicial system is a blend of common law, based on Anglo-American law, and customary law. An informal system of traditional courts still exists within the rural areas of the country, with trial by ordeal remaining common despite being officially outlawed.
Between 1877 and 1980, the government was dominated by the True Whig Party. Today, over 20 political parties are registered in the country, based largely around personalities and ethnic groups. Most parties suffer from poor organizational capacity. The 2005 elections marked the first time that the president's party did not gain a majority of seats in the Legislature.
Corruption
Corruption is endemic at every level of the Liberian government. When President Sirleaf took office in 2006, she announced that corruption was "the major public enemy." "Liberia: Police Corruption Harms Rights, Progress", Human Rights Watch, August 22, 2013. In 2014 the US ambassador to Liberia stated that corruption there was harming people through "unnecessary costs to products and services that are already difficult for many Liberians to afford".
Liberia scored a 3.3 on a scale from 10 (highly clean) to 0 (highly corrupt) on the 2010 Corruption Perceptions Index. This gave it a ranking of 87th of 178 countries worldwide and 11th of 47 in Sub-Saharan Africa. This score represented a significant improvement since 2007, when the country scored 2.1 and ranked 150th of 180 countries. When dealing with public-facing government functionaries, 89% of Liberians say they have had to pay a bribe, the highest national percentage in the world according to Transparency International's 2010 Global Corruption Barometer.
Military
The Armed Forces of Liberia (AFL) are the armed forces of the Republic of Liberia. Founded as the Liberian Frontier Force in 1908, the military was retitled in 1956. For virtually all of its history, the AFL has received considerable material and training assistance from the United States. For most of the 1941–89 period, training was largely provided by U.S. advisers.
Foreign relations
After the turmoil following the First and Second Liberian Civil Wars, Liberia's internal stabilization in the 21st century brought a return to cordial relations with neighboring countries and much of the Western world.
In the past, both of Liberia's neighbors, Guinea and Sierra Leone, have accused Liberia of backing rebels inside their countries.
Law enforcement
The Liberian National Police are the national police force of the country. It has 844 officers in 33 stations in Montserrado County, which contains the capital Monrovia, as of October 2007. The National Police Training Academy is in Montserrado County in Paynesville City.
Economy and infrastructure
thumb|350px|A proportional representation of Liberian exports. The shipping related categories reflect Liberia's status as an international flag of convenience – there are 3,500 vessels registered under Liberia's flag accounting for 11% of ships worldwide.
thumb|350px|Liberia, trends in the Human Development Index 1970–2010.
The Central Bank of Liberia is responsible for printing and maintaining the Liberian dollar, which is the primary form of currency in Liberia.
Liberia is one of the world's poorest countries, with a formal employment rate of 15%. GDP per capita peaked in 1980 at US$496, when it was comparable to Egypt's (at the time). In 2011, the country's nominal GDP was US$1.154 billion, while nominal GDP per capita stood at US$297, the third-lowest in the world. Historically, the Liberian economy has depended heavily on foreign aid, foreign direct investment and exports of natural resources such as iron ore, rubber and timber.
Following a peak in growth in 1979, the Liberian economy began a steady decline due to economic mismanagement following the 1980 coup. This decline was accelerated by the outbreak of civil war in 1989; GDP was reduced by an estimated 90% between 1989 and 1995, one of the fastest declines in history. Upon the end of the war in 2003, GDP growth began to accelerate, reaching 9.4% in 2007. The global financial crisis slowed GDP growth to 4.6% in 2009, though a strengthening agricultural sector led by rubber and timber exports increased growth to 5.1% in 2010 and an expected 7.3% in 2011, making the economy one of the 20 fastest growing in the world.
Current impediments to growth include a small domestic market, lack of adequate infrastructure, high transportation costs, poor trade links with neighboring countries and the high dollarization of the economy. Liberia used the United States dollar as its currency from 1943 until 1982 and continues to use the U.S. dollar alongside the Liberian dollar.
thumb|300px|A boy grinding sugar cane.
Following a decrease in inflation beginning in 2003, inflation spiked in 2008 as a result of worldwide food and energy crises, reaching 17.5% before declining to 7.4% in 2009. Liberia's external debt was estimated in 2006 at approximately $4.5 billion, 800% of GDP. As a result of bilateral, multilateral and commercial debt relief from 2007 to 2010, the country's external debt fell to $222.9 million by 2011.
While official commodity exports declined during the 1990s as many investors fled the civil war, Liberia's wartime economy featured the exploitation of the region's diamond wealth. The country acted as a major trader in Sierra Leonean blood diamonds, exporting over US$300 million in diamonds in 1999. This led to a United Nations ban on Liberian diamond exports in 2001, which was lifted in 2007 following Liberia's accession to the Kimberley Process Certification Scheme.
In 2003, additional UN sanctions were placed on Liberian timber exports, which had risen from US$5 million in 1997 to over US$100 million in 2002 and were believed to be funding rebels in Sierra Leone. These sanctions were lifted in 2006. Due in large part to foreign aid and investment inflow following the end of the war, Liberia maintains a large current account deficit, which peaked at nearly 60% in 2008. Liberia gained observer status with the World Trade Organization in 2010 and is in the process of acquiring full member status.
Liberia has the highest ratio of foreign direct investment to GDP in the world, with US$16 billion in investment since 2006. Following the inauguration of the Sirleaf administration in 2006, Liberia signed several multibillion-dollar concession agreements in the iron ore and palm oil industries with numerous multinational corporations, including BHP Billiton, ArcelorMittal, and Sime Darby. Palm oil companies in particular, such as Sime Darby (Malaysia) and Golden Veroleum (USA), have been accused by critics of destroying livelihoods and displacing local communities under government concessions. The Firestone Tire and Rubber Company has operated the world's largest rubber plantation in Liberia since 1926.
Shipping flag of convenience
Due to its status as a flag of convenience, Liberia has the second-largest maritime registry in the world behind Panama. It has 3,500 vessels registered under its flag, accounting for 11% of ships worldwide.
Telecommunications
There are six major newspapers in Liberia, and 45% of the population has a mobile phone service.
Much of Liberia's communications infrastructure was destroyed or plundered during the two civil wars (1989–1996 and 1999–2003). With low rates of adult literacy and high poverty rates, television and newspaper use is limited, leaving radio as the predominant means of communicating with the public."Introduction to Communication and Development in Liberia", AudienceScapes. Retrieved 8 February 2014.
Transportation
thumb|300px|The streets of downtown Monrovia, March 2009.
Liberia's main economic links to the outside world run through Monrovia, via the capital's port and airport.
Energy
Formal electricity services are provided solely by the state-owned Liberia Electricity Corporation, which operates a small grid almost exclusively in the Greater Monrovia District. The vast majority of electric energy services is provided by small privately owned generators. At $0.54 per kWh, the electricity tariff in Liberia is among the highest in the world. Total installed capacity in 2013 was 20 MW, a sharp decline from a peak of 191 MW in 1989 before the wars.
The repair and expansion of the Mount Coffee Hydropower Plant, with a maximum capacity of 80 MW, is scheduled to be completed by 2018. Construction of three new heavy fuel oil power plants is expected to boost electrical capacity by 38 MW. In 2013, Liberia began importing power from neighboring Ivory Coast and Guinea through the West African Power Pool.
Liberia has begun exploration for offshore oil; unproven oil reserves may be in excess of one billion barrels. The government divided its offshore waters into 17 blocks and began auctioning off exploration licenses for the blocks in 2004, with further auctions in 2007 and 2009. An additional 13 ultra-deep offshore blocks were demarcated in 2011 and planned for auction. Among the companies to have won licenses are Repsol, Chevron, Anadarko and Woodside Petroleum.
Demographics
thumb|350px|Liberia's population from 1961–2013 (data: FAO, 2005). Liberia's population tripled in 40 years.
thumb|350px|Liberia's population pyramid, 2005. 43.5% of Liberians were below the age of 15 in 2010.Population Division of the Department of Economic and Social Affairs of the United Nations Secretariat, World Population Prospects: The 2010 Revision
As of the 2008 national census, Liberia was home to 3,476,608 people. Of those, 1,118,241 lived in Montserrado County, the most populous county in the country and home to the capital of Monrovia. The Greater Monrovia District has 970,824 residents. Nimba County is the next most populous county, with 462,026 residents. As revealed in the 2008 census, Monrovia is more than four times more populous than all the county capitals combined.
Prior to the 2008 census, the last census had been held in 1984 and listed the country's population as 2,101,628. The population of Liberia was 1,016,443 in 1962 and increased to 1,503,368 in 1974. Liberia has the highest population growth rate in the world (4.50% per annum).United Nations World Population Prospects: 2006 revision – Table A.8 In 2010 some 43.5% of Liberians were below the age of 15.
Ethnic groups
The population includes 16 indigenous ethnic groups and various foreign minorities. Indigenous peoples comprise about 95 percent of the population. The 16 officially recognized ethnic groups include the Kpelle, Bassa, Mano, Gio or Dan, Kru, Grebo, Krahn, Vai, Gola, Mandingo or Mandinka, Mende, Kissi, Gbandi, Loma, Fante, Dei or Dewoin, Belleh, and Americo-Liberians or Congo people.
The Kpelle comprise more than 20% of the population and are the largest ethnic group in Liberia, residing mostly in Bong County and adjacent areas in central Liberia."Kpelle", UCLA, Anthropology. Americo-Liberians, who are descendants of African American and West Indian (mostly Barbadian) settlers, make up 2.5%. Congo people, descendants of repatriated Congo and Afro-Caribbean slaves who arrived in 1825, make up an estimated 2.5%. These latter two groups established political control in the 19th century which they kept well into the 20th century.
Numerous immigrants have come as merchants and become a major part of the business community, including Lebanese, Indians, and other West African nationals. There is a high percentage of interracial marriage between ethnic Liberians and the Lebanese, resulting in a significant mixed-race population especially in and around Monrovia. A small minority of Liberians of European descent reside in the country. The Liberian constitution restricts citizenship to people of Black African descent.
Languages
English is the official language and serves as the lingua franca of Liberia. Thirty-one indigenous languages are spoken within Liberia, none of which is a first language to more than a small percentage of the population. Liberians also speak a variety of creolized dialects collectively known as Liberian English.
Largest cities
Religion
According to the 2008 National Census, 85.5% of the population practices Christianity. Protestants form the largest Christian grouping, followed by Roman Catholics. These denominations were brought by Black American settlers. Muslims comprise 12.2% of the population, largely represented by the Mandingo and Vai ethnic groups. Sunnis, Shias, Ahmadiyyas, Sufis, and non-denominational Muslims constitute the bulk of the Liberian Muslims.Pew Forum on Religious & Public life. 9 August 2012. Retrieved 29 October 2013
Traditional indigenous religions are practiced by 0.5% of the population, while 1.5% subscribe to no religion. A small number of people are Bahá'í, Hindu, Sikh, or Buddhist. While Christian, many Liberians also participate in traditional, gender-based indigenous religious secret societies, such as Poro for men and Sande for women. The all-female Sande society practices female circumcision.
The Constitution provides for freedom of religion, and the government generally respects this right. While separation of church and state is mandated by the Constitution, Liberia is considered a Christian state in practice. Public schools offer biblical studies, though parents may opt their children out. Commerce is prohibited by law on Sundays and major Christian holidays. The government does not require businesses or schools to excuse Muslims for Friday prayers.
Education
thumb|250px|Students studying by candlelight in Bong County
In 2010, the literacy rate of Liberia was estimated at 60.8% (64.8% for males and 56.8% for females). In some areas primary and secondary education is free and compulsory from the ages of 6 to 16, though enforcement of attendance is lax. In other areas children are required to pay a tuition fee to attend school. On average, children attain 10 years of education (11 for boys and 8 for girls). The country's education sector is hampered by inadequate schools and supplies, as well as a lack of qualified teachers.
Higher education is provided by a number of public and private universities. The University of Liberia is the country's largest and oldest university. Located in Monrovia, the university opened in 1862. Today it has six colleges, including a medical school and the nation's only law school, Louis Arthur Grimes School of Law.Jallah, David A. B. "Notes, Presented by Professor and Dean of the Louis Arthur Grimes School of Law, University of Liberia, David A. B. Jallah to the International Association of Law Schools Conference Learning From Each Other: Enriching the Law School Curriculum in an Interrelated World Held at Soochow University Kenneth Wang School of Law, Suzhou, China, October 17–19, 2007." International Association of Law Schools. Retrieved on September 1, 2008.
Cuttington University was established by the Episcopal Church of the USA in 1889 in Suakoko, Bong County, as part of its missionary education work among indigenous peoples. It is the nation's oldest private university.
In 2009, Tubman University in Harper, Maryland County was established as the second public university in Liberia. Since 2006, the government has also opened community colleges in Buchanan, Sanniquellie, and Voinjama.
Health
Hospitals in Liberia include the John F. Kennedy Medical Center in Monrovia and several others. Life expectancy in Liberia is estimated to be 57.4 years in 2012. With a fertility rate of 5.9 births per woman, the maternal mortality rate stood at 990 per 100,000 births in 2010. A number of highly communicable diseases are widespread, including tuberculosis, diarrheal diseases and malaria. In 2007, the HIV infection rate stood at 2% of the population aged 15–49, whereas the incidence of tuberculosis was 420 per 100,000 people in 2008. Approximately 58.2%–66% of women are estimated to have undergone female genital mutilation.UNICEF 2013, p. 27.
Liberia imports 90% of its rice, a staple food, and is extremely vulnerable to food shortages. In 2007, 20.4% of children under the age of five were malnourished. In 2008, only 17% of the population had access to adequate sanitation facilities.
Civil war ended in 2003 after destroying approximately 95% of the country's healthcare facilities. In 2009, government expenditure on health care per capita was US$22, accounting for 10.6% of total GDP. In 2008, Liberia had only one doctor and 27 nurses per 100,000 people.
In 2014 an outbreak of Ebola virus in Guinea spread to Liberia. As of November 17, 2014, there were 2,812 confirmed deaths from the ongoing outbreak. In early August 2014 Guinea closed its borders to Liberia to help contain the spread of the virus, as more new cases were being reported in Liberia than in Guinea. On May 9, 2015 Liberia was declared Ebola free after six weeks with no new cases.
According to an Overseas Development Institute report, private health expenditure accounts for 64.1% of total spending on health.Marc DuBois and Caitlin Wake, with Scarlett Sturridge and Christina Bennett (2015) The Ebola response in West Africa: Exposing the politics and culture of international aid London: Overseas Development Institute
Crime
Rape and sexual assault are frequent in the post-conflict era in Liberia. The country has one of the highest incidences of sexual violence against women in the world. Rape is the most frequently reported crime, accounting for more than one-third of sexual violence cases. Adolescent girls are the most frequently assaulted, and almost 40% of perpetrators are adult men known to victims.Nicola Jones, Janice Cooper, Elizabeth Presler-Marshall and David Walker, June 2014; "The fallout of rape as a weapon of war", ODI; http://www.odi.org/publications/8464-rape-weapon-war-liberia
Culture
thumb|Bassa culture. Helmet Mask for Sande Society (Ndoli Jowei), Liberia. 20th century. Brooklyn Museum.
The religious practices, social customs and cultural standards of the Americo-Liberians had their roots in the antebellum American South. The settlers wore top hats and tails and modeled their homes on those of Southern slaveowners. Most Americo-Liberian men were members of the Masonic Order of Liberia, which became heavily involved in the nation's politics.
Liberia has a long, rich history in textile arts and quilting, as the settlers brought with them their sewing and quilting skills. Liberia hosted National Fairs in 1857 and 1858 in which prizes were awarded for various needle arts. One of the most well-known Liberian quilters was Martha Ann Ricks, who presented a quilt featuring the famed Liberian coffee tree to Queen Victoria in 1892. When President Ellen Johnson Sirleaf moved into the Executive Mansion, she reportedly had a Liberian-made quilt installed in her presidential office.
A rich literary tradition has existed in Liberia for over a century. Edward Wilmot Blyden, Bai T. Moore, Roland T. Dempster and Wilton G. S. Sankawulo are among Liberia's more prominent authors. Moore's novella Murder in the Cassava Patch is considered Liberia's most celebrated novel.
Polygamy
One-third of married Liberian women between the ages of 15–49 are in polygamous marriages.OECD Atlas of Gender and Development: How Social Norms Affect Gender Equality in non-OECD Countries, OECD Publishing, 2010. p 236. Customary law allows men to have up to four wives.Olukoju, Ayodeji. "Gender Roles, Marriage and Family", Culture and Customs of Liberia. Westport: Greenwood Press, 2006, p. 97.
Cuisine
thumb|A beachside barbeque at Sinkor, Monrovia, Liberia
Liberian cuisine heavily incorporates rice, the country's staple food. Other ingredients include cassava, fish, bananas, citrus fruit, plantains, coconut, okra and sweet potatoes. Heavy stews spiced with habanero and scotch bonnet chillies are popular and eaten with fufu. Liberia also has a tradition of baking imported from the United States that is unique in West Africa.
Sport
The most popular sport in Liberia is association football, with George Weah — the only African to be named FIFA World Player of the Year — being the nation's most famous athlete."Iconic Weah a true great". FIFA.com. Retrieved 17 November 2013 The Liberia national football team has reached the Africa Cup of Nations twice, in 1996 and 2002.
The second most popular sport in Liberia is basketball. The Liberian national basketball team has reached the AfroBasket twice, in 1983 and 2007.
In Liberia, the Samuel Kanyon Doe Sports Complex serves as a multi-purpose stadium. It hosts FIFA World Cup qualifying matches in addition to international concerts and national political events.
Measurement system
Liberia is one of only three countries that have not officially adopted the International System of Units (metric system). The Liberian government has begun transitioning away from use of imperial units to the metric system. However, this change has been gradual, with government reports concurrently using both imperial and metric units. A 2008 report from the University of Tennessee stated that the changeover from imperial to metric measures was confusing to coffee and cocoa farmers.
See also
Outline of Liberia
Gender inequality in Liberia
References
Further reading
Cooper, Helene, House at Sugar Beach: In Search of a Lost African Childhood (Simon & Schuster, 2008, ISBN 0-7432-6624-2)
Lang, Victoria, To Liberia: Destiny's Timing (Publish America, Baltimore, 2004, ISBN 1-4137-1829-9). A fast-paced gripping novel of the journey of a young Black couple fleeing America to settle in the African motherland of Liberia.
Maksik, Alexander, A Marker to Measure Drift (John Murray 2013; Paperback 2014; ISBN 978-1-84854-807-7). A beautifully written, powerful & moving novel about a young woman's experience of and escape from the Liberian civil war.
Mwakikagile, Godfrey, Military Coups in West Africa Since The Sixties, Chapter Eight: Liberia: 'The Love of Liberty Brought Us Here,' pp. 85–110, Nova Science Publishers, Inc., Huntington, New York, 2001; Godfrey Mwakikagile, The Modern African State: Quest for Transformation, Chapter One: The Collapse of A Modern African State: Death and Rebirth of Liberia, pp. 1–18, Nova Science Publishers, Inc., 2001.
Sankawulo, Wilton, Great Tales of Liberia. Dr. Sankawulo is the compiler of these tales from Liberia and about Liberian culture. Published by Editura Universitatii "Lucian Blaga"; din Sibiu, Romania, 2004. ISBN 973-651-838-8.
Sankawulo, Wilton, Sundown at Dawn: A Liberian Odyssey. Recommended by the Cultural Resource Center, Center for Applied Linguistics for its content concerning Liberian culture. ISBN 0-9763565-0-3
Shaw, Elma, Redemption Road: The Quest for Peace and Justice in Liberia (a novel), with a Foreword by President Ellen Johnson Sirleaf (Cotton Tree Press, 2008, ISBN 978-0-9800774-0-7)
External links
Chief of State and Cabinet Members
Liberia from UCB Libraries GovPubs.
Liberia profile from the BBC News.
Liberia profile from the African Studies Centre Leiden Country portal.
"Liberia Maps", Perry-Castañeda Library, University of Texas at Austin.
Category:Economic Community of West African States
Category:English-speaking countries and territories
Category:Least developed countries
Category:Member states of the African Union
Category:Reparations for slavery
Category:Republics
Category:States and territories established in 1847
Category:Member states of the United Nations
Category:West African countries
Category:1847 establishments in Liberia | 17,791 | 2017-01 |
51st state | thumb|width=237|Flag of the United States|51-star flags have been designed and used as a symbol by supporters of statehood in various areas. This is an example of how a 51-star flag might look.
The "51st state", in post-1959 American political discourse, is a phrase that refers to areas or locales that are – seriously or facetiously – considered candidates for U.S. statehood, joining the 50 states that presently compose the United States. The phrase has been applied to external territories as well as parts of existing states which would be admitted as separate states in their own right.
The phrase "51st state" can be used in a positive sense, meaning that a region or territory is so aligned, supportive, and conducive with the United States, that it is like a U.S. state. It can also be used in a pejorative sense, meaning an area or region is perceived to be under excessive American cultural or military influence or control. In various countries around the world, people who believe their local or national culture has become too Americanized sometimes use the term "51st state" in reference to their own countries."Sverige var USAs 51a delstat" "EU kritiserar svensk TV" , Journalisten (Swedish)
Legal requirements
Under Article IV, Section Three of the United States Constitution, which outlines the relationship among the states, Congress has the power to admit new states to the union. The states are required to give "full faith and credit" to the acts of each other's legislatures and courts, which is generally held to include the recognition of legal contracts, marriages, and criminal judgments. The states are guaranteed military and civil defense by the federal government, which is also obliged by Article IV, Section Four, to "guarantee to every state in this union a republican form of government".
Congress is a highly politicized body, and discussions about the admission of new states, which typically take years before approval, are invariably informed by the political concerns of Congress at the time the proposal is presented. These concerns have included maintaining a balance between free and slave states, and determining which faction in Congress (Democrats or Republicans, conservatives or liberals, rural or urban blocs) would benefit, and which would lose, if the proposed state were admitted.
Possible new states
By status changes of current U.S. territories
Puerto Rico
Puerto Rico has been discussed as a potential 51st state of the United States. In a 2012 status referendum, a majority of voters (54%) expressed dissatisfaction with the current political relationship. In a separate question, 61% of voters supported statehood (excluding the 26% of voters who left this question blank). On December 11, 2012, Puerto Rico's legislature resolved to request that the President and the U.S. Congress act on the results, end the current form of territorial status and begin the process of admitting Puerto Rico to the Union as a state.The Senate and the House of Representative of Puerto Rico: Concurrent Resolution. Retrieved December 16, 2012. On January 4, 2017, Puerto Rico's new representative to Congress introduced a bill that would make the island the 51st state by 2025.
Background
Since 1898, Puerto Rico has had limited representation in the Congress in the form of a Resident Commissioner, a nonvoting delegate. The 110th Congress returned the Commissioner's power to vote in the Committee of the Whole, but not on matters where the vote would represent a decisive participation.Rules of the House of Representatives : One Hundred Tenth Congress (archived from the original on May 28, 2010). Puerto Rico holds presidential primaries or caucuses for the Democratic Party and the Republican Party to select delegates to the respective parties' national conventions, although it is not granted presidential electors in the Electoral College. As American citizens, Puerto Ricans can vote in U.S. presidential elections, provided they reside in one of the 50 states or the District of Columbia and not in Puerto Rico itself.
Residents of Puerto Rico pay U.S. federal taxes, including import/export taxes, federal commodity taxes, and Social Security taxes, thereby contributing to the federal government. Most Puerto Rico residents do not pay federal income tax but do pay federal payroll taxes (Social Security and Medicare). However, federal employees, those who do business with the federal government, Puerto Rico–based corporations that intend to send funds to the U.S., and others do pay federal income taxes. Puerto Ricans may enlist in the U.S. military. Puerto Ricans have participated in all American wars since 1898; 52 Puerto Ricans had been killed in the Iraq War and War in Afghanistan by November 2012.ICasualties, accessed Nov. 2012.
Puerto Rico has been under U.S. sovereignty for over a century, since it was ceded to the U.S. by Spain following the end of the Spanish–American War, and Puerto Ricans have been U.S. citizens since 1917. The island's ultimate status has not been determined, and its residents do not have voting representation in their federal government. Puerto Rico has limited representation in the U.S. Congress in the form of a Resident Commissioner, a delegate with limited voting rights. Like the states, Puerto Rico has self-rule, a republican form of government organized pursuant to a constitution adopted by its people, and a bill of rights.
This constitution was created when the U.S. Congress directed local government to organize a constitutional convention to write the Puerto Rico Constitution in 1951. The acceptance of that constitution by Puerto Rico's electorate, the U.S. Congress, and the U.S. president occurred in 1952. In addition, the rights, privileges and immunities attendant to United States citizens are "respected in Puerto Rico to the same extent as though Puerto Rico were a state of the union" through the express extension of the Privileges and Immunities Clause of the U.S. Constitution by the U.S. Congress in 1948., Privileges and immunities.
Puerto Rico is designated in its constitution as the "Commonwealth of Puerto Rico".The term Commonwealth is a traditional English term for a political community founded for the common good. Historically, it has sometimes been synonymous with "republic". The Constitution of Puerto Rico which became effective in 1952 adopted the name of Estado Libre Asociado (literally translated as "Free Associated State"), officially translated into English as Commonwealth, for its body politic.Constitution of the Commonwealth of Puerto Rico – in Spanish (Spanish).Constitution of the Commonwealth of Puerto Rico – in English (English translation). The island is under the jurisdiction of the Territorial Clause of the U.S. Constitution, which has led to doubts about the finality of the Commonwealth status for Puerto Rico. In addition, all people born in Puerto Rico become citizens of the U.S. at birth (under provisions of the Jones–Shafroth Act in 1917), but citizens residing in Puerto Rico cannot vote for president nor for full members of either house of Congress. Statehood would grant island residents full voting rights at the Federal level. The Puerto Rico Democracy Act (H.R. 2499) was approved on April 29, 2010, by the United States House of Representatives 223–169, but was not approved by the Senate before the end of the 111th Congress. It would have provided for a federally sanctioned self-determination process for the people of Puerto Rico. This act would provide for referendums to be held in Puerto Rico to determine the island's ultimate political status. It had also been introduced in 2007.
Vote for statehood
In November 2012, a referendum resulted in 54 percent of respondents voting to reject the current status under the territorial clause of the U.S. Constitution,CONDICIÓN POLÍTICA TERRITORIAL ACTUAL (English:Actual Territorial Political Condition). Government of Puerto Rico. State Electoral Commission. November 16, 2012 9:59PM. Retrieved November 18, 2012. while a second question resulted in 61 percent of voters identifying statehood as the preferred alternative to the current territorial status.OPCIONES NO TERRITORIALES. (English: Non-Territorial Options). Government of Puerto Rico. State Electoral Commission. November 16, 2012. Retrieved November 18, 2012. The 2012 referendum was by far the most successful referendum for statehood advocates, and support for statehood has risen in each successive popular referendum. However, more than one in four voters abstained from answering the question on the preferred alternative status. Statehood opponents have argued that, once those abstentions are counted, the statehood option garnered only about 44–45 percent of all ballots cast, a share that falls under the 50 percent majority mark.
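As a rough check using only the figures reported above (61 percent support among the answered ballots, with about 26 percent of ballots left blank), the opponents' lower figure follows from keeping the blank ballots in the denominator:

\[ 0.61 \times (1 - 0.26) \approx 0.45 \]

that is, roughly 44–45 percent of all ballots cast named statehood, which is the basis for the argument that the option fell short of an outright majority.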
The Washington Post, The New York Times and the Boston Herald have published opinion pieces expressing support for the statehood of Puerto Rico. On November 8, 2012, Washington, D.C. newspaper The Hill published an article saying that Congress would likely ignore the results of the referendum due to the circumstances behind the votes; U.S. Congressman Luis Gutiérrez and U.S. Congresswoman Nydia Velázquez, both of Puerto Rican ancestry, agreed with The Hill's assessment. Shortly after the results were published, Puerto Rico-born U.S. Congressman José Enrique Serrano commented: "I was particularly impressed with the outcome of the 'status' referendum in Puerto Rico. A majority of those voting signaled the desire to change the current territorial status. In a second question an even larger majority asked to become a state. This is an earthquake in Puerto Rican politics. It will demand the attention of Congress, and a definitive answer to the Puerto Rican request for change. This is a history-making moment where voters asked to move forward."Serrano: Plebiscite an "Earthquake" in Puerto Rican Politics Retrieved December 6, 2012.
Several days after the referendum, the Resident Commissioner Pedro Pierluisi, Governor Luis Fortuño, and Governor-elect Alejandro García Padilla wrote separate letters to the President of the United States Barack Obama addressing the results of the voting. Pierluisi urged Obama to begin legislation in favor of the statehood of Puerto Rico, in light of its win in the referendum. Fortuño urged him to move the process forward. García Padilla asked him to reject the results because of their ambiguity. The White House stance related to the November 2012 plebiscite was that the results were clear, the people of Puerto Rico want the issue of status resolved, and a majority chose statehood in the second question. Former White House director of Hispanic media stated, "Now it is time for Congress to act and the administration will work with them on that effort, so that the people of Puerto Rico can determine their own future."
On May 15, 2013, Resident Commissioner Pierluisi introduced H.R. 2000 to Congress to "set forth the process for Puerto Rico to be admitted as a state of the Union," asking for Congress to vote on ratifying Puerto Rico as the 51st state."Pierluisi Introduces Historic Legislation", Puerto Rico Report, May 15, 2013. Retrieved on May 15, 2013.
On February 12, 2014, Senator Martin Heinrich introduced a bill in the US Senate. The bill would require a binding referendum to be held in Puerto Rico asking whether the territory wants to be admitted as a state. In the event of a yes vote, the president would be asked to submit legislation to Congress to admit Puerto Rico as a state." Sen. Martin Heinrich Presents Bill Seeking Puerto Rico Statehood", Fox News Latino, February 12, 2014. Retrieved on February 14, 2014.
Government funds
On January 15, 2014, the United States House of Representatives approved $2.5 million in funding to hold a referendum. This referendum can be held at any time, as there is no deadline by which the funds have to be used. The United States Senate then passed the bill, which was signed into law on January 17, 2014, by President Barack Obama.
District of Columbia
Washington, D.C. is often mentioned as a candidate for statehood. In Federalist No. 43 of The Federalist Papers, James Madison considered the implications of the definition of the "seat of government" found in the United States Constitution. Although he noted potential conflicts of interest, and the need for a "municipal legislature for local purposes", Madison did not address the district's role in national voting. Legal scholars disagree on whether a simple act of Congress can admit the District as a state, due to its status as the seat of government of the United States, which Article I, Section 8 of the Constitution requires to be under the exclusive jurisdiction of Congress; depending on the interpretation of this text, admission of the full District as a state may require a Constitutional amendment, which is much more difficult to enact.D.C. Statehood: Not Without a Constitutional Amendment, August 27, 1993, The Heritage Foundation. However, the Constitution does not set a minimum size for the District. Its size has already changed once before, when Virginia reclaimed the portion of the District south of the Potomac. So the constitutional requirement for a federal district can be satisfied by reducing its size to the small central core of government buildings and monuments, giving the rest of the territory to the new state.
Washington, D.C. residents who support the statehood movement sometimes use a shortened version of the Revolutionary War protest motto "No taxation without representation", omitting the initial "No", denoting their lack of Congressional representation; the phrase is now printed on newly issued Washington, D.C. license plates (although a driver may choose to have the District of Columbia website address instead). President Bill Clinton's presidential limousine had the "Taxation without representation" license plate late in his term, while President George W. Bush had the vehicle's plates changed shortly after beginning his term in office. President Barack Obama had the license plates changed back to the protest style at the beginning of his second term.
This position was carried by the D.C. Statehood Party, a political party; it has since merged with the local Green Party affiliate to form the D.C. Statehood Green Party. The nearest this movement ever came to success was in 1978, when Congress passed the District of Columbia Voting Rights Amendment. Two years later in 1980, local citizens passed an initiative calling for a constitutional convention for a new state. In 1982, voters ratified the constitution of the state, which was to be called New Columbia. The drive for statehood stalled in 1985, however, when the District of Columbia Voting Rights Amendment failed because not enough states ratified the amendment within the seven-year span specified.
Another proposed option would be to have Maryland, from which the current land was ceded, retake the District of Columbia, as Virginia has already done for its part, while leaving the National Mall, the United States Capitol, and the White House in a truncated District of Columbia. This would give residents of the District of Columbia the benefit of statehood while precluding the creation of a 51st state. The requirement that Maryland consent to a change in its borders makes this unlikely.
2016 statehood referendum
On April 15, 2016, District Mayor Muriel Bowser called for a citywide vote on whether the nation's capital should become the 51st state. This was followed by the release of a proposed state constitution. This constitution would make the Mayor of the District of Columbia the governor of the proposed state, while the members of the City Council would make up the proposed House of Delegates. While the name "New Columbia" has long been associated with the movement, community members thought other names, such as Potomac or Douglass, were more appropriate for the area.
On November 8, 2016, the voters of the District of Columbia voted overwhelmingly in favor of statehood, with 86% of voters voting to advise approving the proposal.
By status changes of former U.S. territories
Philippines
The Philippines has had small grassroots movements for U.S. statehood. Statehood was originally part of the platform of the Progressive Party, then known as the Federalista Party, but the party dropped it in 1907, coinciding with its name change. In 1981, the presidential candidate for the Federal Party ran on a platform of Philippine Statehood. As recently as 2004, the concept of the Philippines becoming a U.S. state has been part of a political platform in the Philippines. Supporters of this movement include Filipinos who believe that the quality of life in the Philippines would be higher and that there would be less poverty there if the Philippines were an American state or territory. Supporters also include Filipinos who fought as members of the United States Armed Forces in various wars during the Commonwealth period.
The Philippine statehood movement had a significant impact during the early American colonial period. It is no longer a mainstream movement, but it persists as a small social movement that occasionally attracts interest and discussion in the country.
By partition of or secession from current U.S. states
There exist several proposals to divide states with regions that are politically or culturally divergent into smaller, more homogeneous, administratively efficient entities. Splitting a state would need to receive the approval of its legislature and the Congress.
Proposals of new states by partition include:
Arizona: The secession of Pima County from Arizona, with the hope that Cochise, Yuma, and Santa Cruz counties would join it to form a new state.
California and Oregon:
Jefferson, from Northern California and Southern Oregon.
Various proposals of partition and secession in California, usually involving splitting the south half from the north or the urban coastline from the rest of the state.
California's Secretary of State allowed Tim Draper to start collecting signatures for his petition to split California into six different states. The initiative drive did not gain sufficient valid signatures to be put on the ballot.Jim Miller, "Six Californias initiative fails to make 2016 ballot", The Sacramento Bee, 09/12/2014.
Colorado: On June 6, 2013, commissioners in Weld County, Colorado announced a proposal to leave Colorado along with neighboring counties and form the state of North Colorado. The counties in contention voted to begin plans for secession on November 5, 2013, with mixed results.
Delaware, Maryland and Virginia: Delmarva, from the eastern shores of Maryland and Virginia combining with the state of Delaware, or more often, only Kent County and Sussex County, Delaware.
Florida: The secession of South Florida and the Greater Miami area to form the state of "South Florida." South Florida has a population of over 7 million, comprising 41% of Florida's population.
Illinois: The secession of Cook County, which contains Chicago, from Illinois to form a state. Chicago sits in the northeast corner of the state, with the remainder of Illinois sometimes referred to as Downstate Illinois. Such proposals have invariably come from the more Republican downstate region, as a way to end the dominance in statewide politics of the overwhelmingly Democratic Chicago area. Stronger moves have been made to form Southern Illinois, "The Great Rivers State"; with the separation drawn south of Springfield, the capital would be Mt. Vernon, in the area commonly referred to as "Little Egypt".
Maryland: The secession of five counties on the western side of the state due to political differences with the more liberal central part of the state.
Michigan and Superior: The northern part of Michigan, known formally as the Upper Peninsula of Michigan and colloquially as Yooper Land, the Yooper Peninsula, the UP, or Superiorland. The Upper Peninsula lies north of the region of the Lower Peninsula normally known in lower Michigan as "Northern Michigan", which tends to cause some confusion.
New York: Various proposals for partitioning New York into separate states, all of which involve to some degree the separation of New York City from the rest of New York state.
Texas: Under the joint resolution of Congress by which the Republic of Texas was admitted to the Union, it had the right to divide itself into as many as five different states. It is not clear whether this provides any power beyond that already provided by the Constitution. What is clear is that the Texas Legislature would have to approve any proposal to divide the state using this prerogative. There were a significant number of Texans who supported dividing the state in its early decades. They were generally called divisionists. The Texas Constitution and the Texas Annexation Act both provide for the possibility of Texas voting to divide into up to five sovereign States of the Union. Current Texas politics and self-image make any tampering with Texas' status as the largest state by land area in the contiguous United States unlikely.
Washington: Dividing the state into Western Washington and Eastern Washington via the Cascade Mountains. Suggested names include East Washington, Lincoln, and Cascadia.
The National Movement for the Establishment of a 49th State, founded by Oscar Brown, Sr. and Bradley Cyrus and active in Chicago in 1934–37, had the aim of forming an African American state in the South.
Use internationally
Some countries, because of their cultural similarities and close alliances with the United States, are often described as a 51st state. In other countries around the world, movements with various degrees of support and seriousness have proposed U.S. statehood.
Canada
In Canada, "the 51st state" is a phrase generally used in such a way as to imply that if a certain political course is taken, Canada's destiny will be as little more than a part of the United States. Examples include the Canada–United States Free Trade Agreement in 1988, the debate over the creation of a common defense perimeter, and as a potential consequence of not adopting proposals intended to resolve the issue of Quebec sovereignty, the Charlottetown Accord in 1992 and the Clarity Act in 1999.
The phrase is usually used in local political debates, in polemic writing or in private conversations. It is rarely used by politicians themselves in a public context, although at certain times in Canadian history political parties have used other similarly loaded imagery. In the 1988 federal election, the Liberals asserted that the proposed Free Trade Agreement amounted to an American takeover of CanadaStephen Azzi, "Election of 1988". histori.ca.—notably, the party ran an ad in which Progressive Conservative (PC) strategists, upon the adoption of the agreement, slowly erased the Canada-U.S. border from a desktop map of North America. Within days, however, the PCs responded with an ad which featured the border being drawn back on with a permanent marker, as an announcer intoned "Here's where we draw the line."Carolyn Ryan, "The true north, strong and negative". cbc.ca, 2006.
The implication has historical basis and dates to the breakup of British America during the American Revolution. The colonies that had confederated to form the United States invaded Canada (at the time a term referring specifically to the modern-day provinces of Quebec and Ontario, which had only been in British hands since 1763) at least twice, neither time succeeding in taking control of the territory. The first invasion was during the Revolution, under the assumption that French-speaking Canadians' presumed hostility towards British colonial rule combined with the Franco-American alliance would make them natural allies to the American cause; the Continental Army successfully recruited two Canadian regiments for the invasion. That invasion's failure forced the members of those regiments into exile, and they settled mostly in upstate New York. The Articles of Confederation, written during the Revolution, included a provision for Canada to join the United States, should they ever decide to do so, without needing to seek U.S. permission as other states would.Articles of Confederation, Article XI
The United States again invaded Canada during the War of 1812, but this effort was made more difficult due to the large number of Loyalist Americans that had fled to what is now Ontario and still resisted joining the republic. The Hunter Patriots in the 1830s and the Fenian raids after the American Civil War were private attacks on Canada from the U.S. Several U.S. politicians in the 19th century also spoke in favour of annexing Canada.J.L. Granatstein, Norman Hillmer. For Better or For Worse, Canada and the United States to the 1990s. Mississauga: Copp Clark Pitman, 1991.
In the late 1940s, during the last days of the Dominion of Newfoundland (at the time a dominion-dependency in the Commonwealth and independent of Canada), there was mainstream support, although not a majority, for Newfoundland to form an economic union with the United States, thanks to the efforts of the Economic Union Party and significant U.S. investment in Newfoundland stemming from the U.S.-British alliance in World War II. The movement ultimately failed when, in a 1948 referendum, voters narrowly chose to confederate with Canada (the Economic Union Party supported an independent "responsible government" that it would then push toward its goals).
A few groups in Canada have actively campaigned in favor of joining the United States. These annexationist movements have not attracted large mainstream attention, although a small minority of Canadians expressed support for the concept in surveys done by Léger Marketing in 2001 Leger Marketing survey, 2001. and in 2004.
In the United States, the term "the 51st state" when applied to Canada can serve to highlight the similarities and close relationship between the United States and Canada. Sometimes the term is used disparagingly, intended to deride Canada as an unimportant neighbor. In the 1989 Quebec general election, the political party Parti 51 ran 11 candidates on a platform of Quebec seceding from Canada to join the United States (with its leader, André Perron, claiming Quebec could not survive as an independent nation). The party attracted just 3,846 votes across the province, 0.11% of the total votes cast. In comparison, the other parties in favour of Quebec sovereignty in that election received 40.16% (PQ) and 1.22% (NPDQ).
Latin America
Central America
Due to the geographical proximity of the Central American countries to the U.S., which exerts powerful military, economic, and political influence over them, there were several movements and proposals by U.S. politicians during the 19th and 20th centuries to annex some or all of the Central American republics (Costa Rica, El Salvador, Guatemala, Honduras with the formerly British-ruled Bay Islands, Nicaragua, Panama, which contained the U.S.-ruled Canal Zone territory from 1903 to 1979, and Belize, which is a constitutional monarchy and was known as British Honduras until 1981). However, the U.S. never acted on these proposals, some of which were never delivered or considered seriously. In 2001, El Salvador adopted the U.S. dollar as its currency, while Panama has used it for decades owing to its ties to the Canal Zone.
Cuba
In 1854, the Ostend Manifesto was written, outlining the rationale for the U.S. to purchase Cuba from Spain and implying that the island would be taken by force if Spain refused. Once the document was made public, it was widely denounced in the Northern states.
In 1859, Senator John Slidell introduced a bill to purchase Cuba from Spain.
Cuba, like many Spanish territories, wanted to break free from Spain. A pro-independence movement in Cuba was supported by the U.S., and Cuban guerrilla leaders wanted annexation to the United States, but Cuban revolutionary leader José Martí called for Cuban nationhood. When the U.S. battleship Maine sank in Havana Harbor, the U.S. blamed Spain and the Spanish–American War broke out in 1898. After the U.S. won, Spain relinquished its claim of sovereignty over several territories, including Cuba. The U.S. administered Cuba as a protectorate until 1902. Several decades later, in 1959, the Cuban government of U.S.-backed Fulgencio Batista was overthrown by Fidel Castro, who subsequently installed a Marxist–Leninist government. When the US refused to trade with Cuba, Cuba allied with the Soviet Union, which imported Cuban sugar, Cuba's main export. The government installed by Fidel Castro has been in power ever since. Recently the US has relaxed sanctions on Cuba. United Airlines submitted a formal application to the U.S. Department of Transportation (DOT) for authority to provide service from four of its largest U.S. gateway cities – Newark/New York, Houston, Washington, D.C. and Chicago – to Havana's José Martí International Airport.United Airlines
Dominica
In 1898, one or more news outlets in the Caribbean noted growing resentment of British rule in Dominica, including its system of administration over the country. These publications attempted to gauge sentiment for annexation to the United States as a way to change this system of administration.
Dominican Republic
On June 30, 1870, the United States Senate took a vote on an annexation treaty with the Dominican Republic, but it failed to proceed.
Haiti
Time columnist Mark Thompson suggested that Haiti had effectively become the 51st state after the 2010 Haiti earthquake, with the widespread destruction prompting a quick and extensive response from the United States, even so far as the stationing of the U.S. military in Haitian air and sea ports to facilitate foreign aid.
Mexico
In 1847–48, with the United States occupying Mexico at the conclusion of the Mexican–American War, there was talk in Congress of annexing the entirety of Mexico; see All of Mexico Movement. The result was the Mexican Cession, also called the Treaty of Guadalupe Hidalgo for the town in which the treaty was signed, in which the U.S. annexed over 40% of Mexico. Talk of annexing all of Mexico disappeared after this time.
Asia and Pacific
Australia
In Australia, the term '51st state' is used as a disparagement of a perceived invasion of American cultural or political influence.
Iraq
thumb|A resident of Seattle, Washington, through a homemade sign, facetiously declares that the Republic of Iraq is the 51st U.S. state.
Several publications have suggested that the Iraq War was a neocolonialist war to make the Republic of Iraq into the 51st U.S. state, though such statements are usually made facetiously, with tongue in cheek.
Israel
Several websites assert that Israel is the 51st state due to the annual funding and defense support it receives from the United States. An example of this concept can be found in 2003, when Martine Rothblatt published a book called Two Stars for Peace that argued for the addition of Israel and the Palestinian territories surrounding it as the 51st state in the Union. The American State of Canaan is a book published by Prof. Alfred de Grazia, a political scientist and sociologist, in March 2009, proposing the creation of the 51st and 52nd states from Israel and the Palestinian territories.
Japan
In Article 3 of the Treaty of San Francisco between the Allied Powers and Japan, which came into force on April 28, 1952, the U.S. put the outlying islands of the Ryukyus, including the island of Okinawa—home to over 1 million Okinawans related to the Japanese—and the Bonin Islands, the Volcano Islands, and Iwo Jima into U.S. trusteeship. All these trusteeships were slowly returned to Japanese rule. Okinawa was returned on May 15, 1972, but the U.S. continues to station troops at bases on the island as part of the defense of Japan.
New Zealand
In 2010 there was an attempt to register a 51st State Party with the New Zealand Electoral Commission. The party advocates New Zealand becoming the 51st state of the United States of America. The party's secretary is Paulus Telfer, a former Christchurch mayoral candidate.
On February 5, 2010, the party applied to register a logo with the Electoral Commission. The logo – a US flag with 51 stars – was rejected by the Electoral Commission on the grounds that it was likely to cause confusion or mislead electors. The party remains unregistered and cannot appear on a ballot.
Taiwan
A poll in 2003 among Taiwanese residents aged between 13 and 22 found that, when given the options of either becoming a province of the People's Republic of China or a state within the U.S., 55% of the respondents preferred statehood while only 36% chose joining China. (MS Word document, Chinese, See item 4) August 19, 2003 A group called Taiwan Civil Government, established in Taipei in 2008, claims that the island of Taiwan and other minor islands are the territory of the United States.
Europe
Albania
Albania has often been called the 51st state for its perceived strongly pro-American positions, mainly because of the United States' policies towards it. In reference to President George W. Bush's 2007 European tour, Edi Rama, Tirana's mayor and leader of the opposition Socialists, said: "Albania is for sure the most pro-American country in Europe, maybe even in the world ... Nowhere else can you find such respect and hospitality for the President of the United States. Even in Michigan, he wouldn't be as welcome." At the time of ex-Secretary of State James Baker's visit in 1992, there was even a move to hold a referendum declaring the country the 51st American state. In addition to Albania, Kosovo, which is predominantly Albanian, is seen as a 51st state due to the heavy presence and influence of the United States. The US has had troops in the territory since 1999, along with the largest US base outside US territory, Camp Bondsteel.
Denmark
In 1989, the Los Angeles Times proclaimed that Denmark becomes the 51st state every Fourth of July, because Danish citizens in and around Aalborg celebrate the American independence day.
Greenland
During World War II, when Denmark was occupied by Nazi Germany, the United States briefly controlled Greenland, using it for military bases and protecting it. In 1946, the United States offered to buy Greenland from Denmark for $100 million ($1.2 billion today), but Denmark refused to sell it.National Review May 7, 2001 "Let's Buy Greenland! – A complete missile-defense plan" By John J. Miller (National Review's National Political Reporter) Several politicians and others have in recent years argued that Greenland could hypothetically be in a better financial situation as a part of the United States, as argued, for instance, by professor Gudmundur Alfredsson of the University of Akureyri in 2014. One of the actual reasons behind US interest in Greenland could be the vast natural resources of the island. According to WikiLeaks, the U.S. appears to be highly interested in investing in the resource base of the island and in tapping the vast expected hydrocarbons off the Greenlandic coast.
Poland
Poland has historically been staunchly pro-American, dating back to General Tadeusz Kościuszko and Casimir Pulaski's involvement in the American Revolution. This pro-American stance was reinforced following favorable American intervention in World War I (leading to the creation of an independent Poland) and the Cold War (culminating in a Polish state independent of Soviet influence). Poland contributed a large force to the "Coalition of the Willing" in Iraq. A quote referring to Poland as "the 51st state" has been attributed to James Pavitt, then Central Intelligence Agency Deputy Director for Operations, especially in connection to extraordinary rendition.
Sicily (Italy)
The Party of Reconstruction in Sicily, which claimed 40,000 members in 1944, campaigned for Sicily to be admitted as a U.S. state. This party was one of several Sicilian separatist movements active after the downfall of Italian Fascism. Sicilians felt neglected or underrepresented by the Italian government after the annexation of 1861 that ended the rule of the Kingdom of the Two Sicilies based in Naples. The large population of Sicilians in America and the American-led Allied invasion of Sicily in July–August 1943 may have contributed to the sentiment.
United Kingdom and Republic of Ireland
The United Kingdom has sometimes been called the 51st state due to the "Special Relationship" between the two countries, particularly since the close cooperation between Franklin D. Roosevelt and Winston Churchill during World War II, and more recently continued during the premierships of Margaret Thatcher and Tony Blair.
In a December 29, 2011, column in The Times, David Aaronovitch said in jest that the UK should consider joining the United States, as the British population cannot accept union with Europe and the UK would inevitably decline on its own. He also made an alternative case that England, Scotland, and Wales should be three separate states, with Northern Ireland joining the Republic of Ireland and becoming an all-Ireland state.David Aaronovitch, Goodbye, Europe, a New World awaits us, page 23, The Times, Thursday December 29, 2011
From terra nullius
There are four categories of terra nullius, land that is unclaimed by any state: the small unclaimed territory of Bir Tawil between Egypt and Sudan, Antarctica, the oceans, and celestial bodies such as the Moon or Mars. In the last three of these, international treaties (the Antarctic Treaty, the United Nations Convention on the Law of the Sea, and the Outer Space Treaty respectively) prevent colonization and potential statehood of any of these uninhabited (and, given current technology, not permanently inhabitable) territories.
Lagrangian points of Earth's orbit
The L5 Society was founded in 1975 with the intention of constructing a space habitat at one of the Lagrangian points of Earth's orbit. Its members successfully lobbied the United States Senate to defeat the Moon Treaty, a treaty that would have transferred sovereignty of all outer space to an international organization, in 1980.Pg 5, Archive for December 1975, Space Studies Institute The high cost of such a project compared to previous Earth-based colonies eventually led to the group's demise in the 1980s.
In popular culture
Related terms have been used in books and film usually used in a negative sense:
In Americathon (1979), set in a fictional 1998, Britain (renamed as Limeyland) has become the 57th state, and the logo of the Safeway grocery chain hangs on the Palace of Westminster.
In the alternative universe of Alan Moore's graphic novel Watchmen, the Vietnam War ends with the conquest of the North and annexation of a united Vietnam as the 51st state.
In The Light of Other Days (2000), a novel by Arthur C. Clarke and Stephen Baxter, Britain joins the United States, with the Prime Minister serving as governor and the Royal Family exiled to Australia.
The British film The 51st State (released in the United States and Canada as Formula 51) makes fun of Anglo-American relations.
The term has also been used in music.
The 1986 album The Ghost of Cain by the English rock band New Model Army features a track called "51st State", which refers to Britain under Margaret Thatcher for her perceived pro-Americanism.
The 1986 track "Heartland" by English band The The describes an increasingly economically divided Britain as the 51st state of the US.http://www.metrolyrics.com/heartland-lyrics-the-the.html
See also
Notes
External links
Will Puerto Rico Finally Become Our 51st State?
tvtropes – The United Fifty-One States Of America for examples in fiction
Carol Orsag – The People's Almanac, 1975 – The United Thirty-Eight States of America
Category:Canada–United States relations
Category:Canadian political phrases
Category:Epithets
Category:History of United States expansionism
Category:Political terminology of the United States
Category:Lists of proposals | 475,488 | 2017-01 |
IPod | The iPod is a line of portable media players and multi-purpose pocket computers designed and marketed by Apple Inc. The first version was released on October 23, 2001, about 8½ months after iTunes (Macintosh version) was released. The most recent iPod redesigns were announced on July 15, 2015. There are three current versions of the iPod: the ultra-compact iPod Shuffle, the compact iPod Nano and the touchscreen iPod Touch.
Like other digital music players, iPods can serve as external data storage devices. Storage capacity varies by model, ranging from 2 GB for the iPod Shuffle to 128 GB for the iPod Touch (previously 160 GB for the iPod Classic, which is now discontinued).
Apple's iTunes software (and other alternative software) can be used to transfer music, photos, videos, games, contact information, e-mail settings, Web bookmarks, and calendars, to the devices supporting these features from computers using certain versions of Apple Macintosh and Microsoft Windows operating systems.
Before the release of iOS 5, the iPod branding was used for the media player included with the iPhone and iPad, a combination of the Music and Videos apps on the iPod Touch. As of iOS 5, separate apps named "Music" and "Videos" are standardized across all iOS-powered products. While the iPhone and iPad have essentially the same media player capabilities as the iPod line, they are generally treated as separate products. During the middle of 2010, iPhone sales overtook those of the iPod.
In mid-2015, a new model of the iPod Touch was announced by Apple, and was officially released on the Apple Store on July 15, 2015. The sixth generation iPod Touch includes a wide variety of spec improvements such as the upgraded A8 processor and a higher-quality screen. The core is over 5 times faster than previous models and is built to be roughly on par with the iPhone 5S. It is available in five colors: space grey, pink, gold, silver and (PRODUCT)RED.
History
thumb|Various iPod models, all of which have been discontinued or updated.
Though the iPod was released in 2001, its price and Mac-only compatibility caused sales to be relatively slow until 2004. The iPod line came from Apple's "digital hub" category, when the company began creating software for the growing market of personal digital devices. Digital cameras, camcorders and organizers had well-established mainstream markets, but the company found existing digital music players "big and clunky or small and useless" with user interfaces that were "unbelievably awful,"Kahney, Leander. Straight Dope on the iPod's Birth, Wired News, October 17, 2006. Retrieved on October 30, 2006. so Apple decided to develop its own. As ordered by CEO Steve Jobs, Apple's hardware engineering chief Jon Rubinstein assembled a team of engineers to design the iPod line, including hardware engineers Tony Fadell and Michael Dhuey, and design engineer Sir Jonathan Ive. Rubinstein had already discovered the Toshiba disk drive when meeting with an Apple supplier in Japan, and purchased the rights to it for Apple, and had also already worked out how the screen, battery, and other key elements would work.Steve Jobs by Walter Isaacson page 865 The aesthetic was inspired by the 1958 Braun T3 transistor radio designed by Dieter Rams, while the wheel-based user interface was prompted by Bang & Olufsen's BeoCom 6000 telephone. The product ("the Walkman of the twenty-first century" Simon, William L.; Young, Jeffrey S. (2005). iCon: Steve Jobs, The Greatest Second Act in the History of Business. John Wiley & Sons. ISBN 0-471-72083-6) was developed in less than one year and unveiled on October 23, 2001. Jobs announced it as a Mac-compatible product with a 5 GB hard drive that put "1,000 songs in your pocket."
Apple did not develop the iPod software entirely in-house, instead using PortalPlayer's reference platform based on two ARM cores. The platform had rudimentary software running on a commercial microkernel embedded operating system. PortalPlayer had previously been working on an IBM-branded MP3 player with Bluetooth headphones. Apple contracted another company, Pixo, to help design and implement the user interface under the direct supervision of Steve Jobs. As development progressed, Apple continued to refine the software's look and feel. Starting with the iPod Mini, the Chicago font was replaced with Espy Sans. Later iPods switched fonts again to Podium Sans—a font similar to Apple's corporate font, Myriad. Color display iPods then adopted some Mac OS X themes like Aqua progress bars, and brushed metal meant to evoke a combination lock. In 2007, Apple modified the iPod interface again with the introduction of the sixth-generation iPod Classic and third-generation iPod Nano by changing the font to Helvetica and, in most cases, splitting the screen in half by displaying the menus on the left and album artwork, photos, or videos on the right (whichever was appropriate for the selected item).
In 2006, Apple presented a special edition of the fifth-generation iPod for the Irish rock band U2. Like its predecessor, this iPod has the signatures of the band's four members engraved on its back, but it was the first time the company changed the color of the metal (black instead of silver). This iPod was only available with 30 GB of storage capacity. The special edition entitled purchasers to an exclusive video with 33 minutes of interviews and performance by U2, downloadable from the iTunes Store.
In September 2007, during a lawsuit with patent holding company Burst.com, Apple drew attention to a patent for a similar device that was developed in 1979. Kane Kramer applied for a UK patent for his design of a "plastic music box" in 1981, which he called the IXI.Boffey, Daniel Apple admit Briton DID invent iPod, but he's still not getting any money Daily Mail, September 8, 2008. Retrieved on September 8, 2008. He was unable to secure funding to renew the US$120,000 worldwide patent, so it lapsed and Kramer never profited from his idea.
The name iPod was proposed by Vinnie Chieco, a freelance copywriter, who (with others) was called by Apple to figure out how to introduce the new player to the public. After Chieco saw a prototype, he thought of the movie 2001: A Space Odyssey and the phrase "Open the pod bay door, Hal!", which refers to the white EVA Pods of the Discovery One spaceship. Chieco saw an analogy to the relationship between the spaceship and the smaller independent pods in the relationship between a personal computer and the music player. Apple researched the trademark and found that it was already in use. Joseph N. Grasso of New Jersey had originally listed an "iPod" trademark with the U.S. Patent and Trademark Office (USPTO) in July 2000 for Internet kiosks. The first iPod kiosks had been demonstrated to the public in New Jersey in March 1998, and commercial use began in January 2000, but had apparently been discontinued by 2001. The trademark was registered by the USPTO in November 2003, and Grasso assigned it to Apple Computer, Inc. in 2005.Serial No. 78018061, Registration No. 2781793, records of the U.S. Patent and Trademark Office. InPub, LLC, filed an "IPOD" trademark on June 1, 1999, for "computer software and hardware." The trademark was abandoned May 18, 2000, without commercial use.
The earliest recorded use in commerce of an "iPod" trademark was in 1991 by Chrysalis Corp. of Sturgis, Michigan, styled "iPOD".
In mid-2015, several new color schemes for all of the current iPod models were spotted in the latest version of iTunes, 12.2. Belgian website Belgium iPhone originally found the images when plugging in an iPod for the first time, and subsequent leaked photos were found by Pierre Dandumont.
Hardware
Chipsets and electronics (each entry lists the product(s) followed by the component(s) used):

Microcontroller
iPod Classic 1st to 3rd generations: Two ARM 7TDMI-derived CPUs running at 90 MHz
iPod Classic 4th and 5th generations, iPod Mini, iPod Nano 1st generation: Variable-speed ARM 7TDMI CPUs, running at a peak of 80 MHz to save battery life
iPod Classic 6th generation, iPod Nano 2nd generation onwards, iPod Shuffle 2nd generation onwards: Samsung system-on-a-chip, based around an ARM processor.Cassell, Jonathan. Apple Delivers More For Less With New iPod Nano, iSuppli Corporation, September 20, 2006. Retrieved on October 21, 2006.
iPod Shuffle 1st generation: SigmaTel D-Major STMP3550 chip running at 75 MHz that handles both the music decoding and the audio circuitry.Williams, Martyn. How Much Should an IPod Shuffle Cost?, PC World, February 24, 2005. Retrieved on August 14, 2006.
iPod Touch 1st and 2nd generation: ARM 1176JZ(F)-S at 412 MHz for 1st gen, 533 MHz for 2nd gen
iPod Touch 3rd and 4th generation: ARM Cortex A8 at 600 MHz for 3rd gen, 800 MHz for 4th gen (Apple A4)
iPod Touch 5th generation: ARM Cortex A9 at 800 MHz (Apple A5)
iPod Touch 6th generation: Apple ARMv8-A "Typhoon" at 1.1 GHz (Apple A8) with Apple M8 motion coprocessor

Audio chip
iPod Classic 1st to 5th generation, iPod Touch 1st generation, iPod Nano 1st to 3rd generation, iPod Mini: Audio codecs developed by Wolfson MicroelectronicsMacworld Wolfson loses Apple iPod business
iPod Classic 6th generation, iPod Touch 2nd generation onwards, iPod Shuffle, iPod Nano 4th generation onwards: Cirrus Logic audio codec chip

Storage medium
iPod Classic: 45.7 mm (1.8 in) hard drives (ATA-6, 4200 rpm with ZIF connectors) made by Toshiba
iPod Mini: 25.4 mm (1 in) Microdrive by Hitachi and Seagate
iPod Nano: Flash memory from Samsung, Toshiba, and others
iPod Shuffle and Touch: Flash memory

Batteries
iPod Classic 1st and 2nd generation: Internal recyclable lithium polymer batteries
iPod Classic 3rd generation onwards, iPod Mini, iPod Nano, iPod Touch, iPod Shuffle: Internal recyclable lithium-ion batteries

Display
iPod Nano 7th generation: 2.5-inch (diagonal) Multi-Touch display, 432-by-240 resolution at 202 pixels per inch
iPod Classic 5th and 6th generation: 2.5-inch (diagonal) color LCD with LED backlight, 320-by-240 resolution at 163 pixels per inch
iPod Touch 5th and 6th generation: 4-inch (diagonal) widescreen Multi-Touch display, 1136-by-640 resolution at 326 pixels per inch
Audio
The third-generation iPod had a weak bass response, as shown in audio tests.Machrone, Bill. iPod audio measurements, PC Magazine, 2005. Retrieved on February 17, 2007.Heijligers, Marc. iPod audio measurements. Retrieved on February 17, 2007. The combination of the undersized DC-blocking capacitors and the typical low-impedance of most consumer headphones form a high-pass filter, which attenuates the low-frequency bass output. Similar capacitors were used in the fourth-generation iPods.Heijligers, Marc. iPod circuit design engineering, May 2006. Retrieved on February 17, 2007. The problem is reduced when using high-impedance headphones and is completely masked when driving high-impedance (line level) loads, such as an external headphone amplifier. The first-generation iPod Shuffle uses a dual-transistor output stage, rather than a single capacitor-coupled output, and does not exhibit reduced bass response for any load.
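The mechanism can be approximated as a first-order RC high-pass filter formed by the output coupling capacitor and the headphone load, with corner frequency f_c = 1/(2πRC). The sketch below is illustrative only: the capacitance and load values are assumptions chosen for demonstration, not Apple's published component specifications.

```python
import math

def highpass_cutoff_hz(capacitance_farads, load_ohms):
    """First-order RC high-pass corner frequency: f_c = 1 / (2 * pi * R * C)."""
    return 1.0 / (2.0 * math.pi * load_ohms * capacitance_farads)

# Assumed 100 uF DC-blocking capacitor (illustrative value, not an Apple spec)
coupling_cap = 100e-6

# Typical loads: low-impedance earbuds, headphones, high-impedance cans, line-in
for load_ohms in (16, 32, 300, 10_000):
    fc = highpass_cutoff_hz(coupling_cap, load_ohms)
    print(f"{load_ohms:>6} ohm load -> bass rolls off below ~{fc:6.1f} Hz")
```

Under these assumptions, a 16 Ω earbud load puts the corner near 100 Hz, audibly thinning the bass, while a high-impedance line-level load pushes it well below the audible range, consistent with the behaviour described above.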
For all iPods released in 2006 and earlier, some equalizer (EQ) sound settings would distort the bass sound far too easily, even on undemanding songs.Vaughan, Austin. , DAP review, November 8, 2004. Retrieved on September 14, 2012.Handby, Simon. , Expert Reviews, December 19, 2005. Retrieved on September 14, 2012. This would happen for EQ settings like R&B, Rock, Acoustic, and Bass Booster, because the equalizer amplified the digital audio level beyond the software's limit, causing distortion (clipping) on bass instruments.
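A minimal sketch of why a fixed digital headroom causes this: if an EQ preset boosts material that is already near full scale, the boosted samples exceed the representable range and are flattened (clipped). The +6 dB boost and test tone below are illustrative assumptions, not the iPod's actual EQ curves.

```python
import numpy as np

def apply_gain_with_clipping(samples, gain_db):
    """Apply an EQ-style gain to full-scale-normalised samples, then hard-clip
    to the [-1.0, 1.0] range that a fixed-headroom output stage allows."""
    gain = 10 ** (gain_db / 20.0)
    return np.clip(samples * gain, -1.0, 1.0)

# A 60 Hz bass tone already close to full scale (0.9 of maximum amplitude)
t = np.linspace(0.0, 0.1, 4410, endpoint=False)   # 0.1 s at 44.1 kHz
bass = 0.9 * np.sin(2 * np.pi * 60 * t)

# A +6 dB "bass booster"-style gain pushes the peaks past full scale
clipped = apply_gain_with_clipping(bass, gain_db=6.0)
print("samples flattened at the limit:", int(np.sum(np.abs(clipped) >= 1.0)))
```

The flattened peaks are what is heard as distortion on bass-heavy tracks; attenuating the signal before boosting, or allowing more headroom, avoids it.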
From the fifth-generation iPod on, Apple introduced a user-configurable volume limit in response to concerns about hearing loss.Cohen, Peter. iPod update limits iPod volume setting, Macworld, 2006. Retrieved on November 7, 2008. Users report that in the sixth-generation iPod, the maximum volume output level is limited to 100 dB in EU markets. Apple previously had to remove iPods from shelves in France for exceeding this legal limit.Fried, Ian. Apple pulls iPod in France. Retrieved on November 7, 2008. However, users who bought a new sixth-generation iPod in late 2013 have reported a new option that allowed them to disable the EU volume limit.New Option to turn off EU Volume Cap? It has been said that these new iPods came with updated software that allowed this change.Volume Limit Removed!! Older sixth-generation iPods, however, are unable to update to this software version.Why Can't I Update My Ipod Classic to 2.0.5
Connectivity
thumb|right|Four iPod wall chargers for North America, all made by Apple. These have FireWire (left) and USB (right three) connectors, which allow iPods to charge without a computer. The units have been miniaturized over time.
Originally, a FireWire connection to the host computer was used to update songs or recharge the battery. The battery could also be charged with a power adapter that was included with the first four generations.
The third generation began including a 30-pin dock connector, allowing for FireWire or USB connectivity. This provided better compatibility with non-Apple machines, as most of them did not have FireWire ports at the time. Eventually Apple began shipping iPods with USB cables instead of FireWire, although the latter was available separately. As of the first-generation iPod Nano and the fifth-generation iPod Classic, Apple discontinued using FireWire for data transfer (while still allowing for use of FireWire to charge the device) in an attempt to reduce cost and form factor. As of the second-generation iPod Touch and the fourth-generation iPod Nano, FireWire charging ability has been removed. The second-, third-, and fourth-generation iPod Shuffle uses a single 3.5 mm minijack phone connector which acts as both a headphone jack or a USB data and charging port for the dock/cable.
The dock connector also allowed the iPod to connect to accessories, which often supplement the iPod's music, video, and photo playback. Apple sells a few accessories, such as the now-discontinued iPod Hi-Fi, but most are manufactured by third parties such as Belkin and Griffin. Some peripherals use their own interface, while others use the iPod's own screen. Because the dock connector is a proprietary interface, the implementation of the interface requires paying royalties to Apple.
Apple introduced a new 8-pin dock connector, named Lightning, on September 12, 2012 with their announcement of the iPhone 5, the fifth generation iPod Touch, and the seventh generation iPod Nano, which all feature it. The new connector replaces the older 30-pin dock connector used by older iPods, iPhones, and iPads. Apple Lightning cables have pins on both sides of the plug so it can be inserted with either side facing up.Apple iPhone 5 features; Apple.com
Accessories
thumb|right|150px|The "Made for iPod" logo found on most classic iPod accessories
Many accessories have been made for the iPod line. A large number are made by third party companies, although some, such as the iPod Hi-Fi, are made by Apple. Some accessories add extra features that other music players have, such as sound recorders, FM radio tuners, wired remote controls, and audio/visual cables for TV connections. Other accessories offer unique features like the Nike+iPod pedometer and the iPod Camera Connector. Other notable accessories include external speakers, wireless remote controls, protective cases, screen films, and wireless earphones.In-The-Ear Bluetooth Earphones. Retrieved on February 17, 2007. Among the first accessory manufacturers were Griffin Technology, Belkin, JBL, Bose, Monster Cable, and SendStation.
BMW released the first iPod automobile interface,iPod Your BMW. Retrieved on February 17, 2007. allowing drivers of newer BMW vehicles to control an iPod using either the built-in steering wheel controls or the radio head-unit buttons. Apple announced in 2005 that similar systems would be available for other vehicle brands, including Mercedes-Benz,Apple & Mercedes-Benz Unveil iPod Integration Kit, Apple Inc., January 11, 2005. Retrieved on June 20, 2006. Volvo,Apple & Volvo Announce iPod Connectivity For Entire 2005 US Model Line, Apple Inc., January 11, 2005. Retrieved on June 20, 2006. Nissan, Toyota, Alfa Romeo, Ferrari,Apple & Leading Car Companies Team Up to Deliver iPod Integration in 2005, Apple Inc., January 11, 2005. Retrieved on June 20, 2006. Acura, Audi, Honda,Honda Music Link for iPods, Honda. Retrieved on February 17, 2007. Renault, Infiniti and Volkswagen.Apple Teams Up With Acura, Audi, Honda & Volkswagen to Deliver Seamless iPod Experience, Apple Inc., September 7, 2005. Retrieved on June 20, 2006. Scion offers standard iPod connectivity on all their cars.
Gecko Gear, founded in 2006, sells 100 accessories, ranging from protective covers and cases to screen protectors and armbands.
Some independent stereo manufacturers including JVC, Pioneer, Kenwood, Alpine, Sony, and Harman Kardon also have iPod-specific integration solutions. Alternative connection methods include adapter kits (that use the cassette deck or the CD changer port), audio input jacks, and FM transmitters such as the iTrip—although personal FM transmitters are illegal in some countries. Many car manufacturers have added audio input jacks as standard.Car Integration: iPod your car, Apple Inc.. Retrieved on February 17, 2007.
Beginning in mid-2007, four major airlines, United, Continental, Delta, and Emirates, reached agreements to install iPod seat connections. The free service was to allow passengers to power and charge an iPod, and view video and music libraries on individual seat-back displays.Apple Teams Up With Continental, Delta, Emirates, & United to deliver iPod Integration, Apple Inc., November 14, 2006. Retrieved on December 7, 2006. Originally KLM and Air France were reported to be part of the deal with Apple, but they later released statements explaining that they were only contemplating the possibility of incorporating such systems.Marsal, Katie. Two of six airlines say there's no ink on iPod deal, AppleInsider, November 15, 2006. Retrieved on December 7, 2006.
Software
The iPod line can play several audio file formats including MP3, AAC/M4A, Protected AAC, AIFF, WAV, Audible audiobook, and Apple Lossless. The iPod photo introduced the ability to display JPEG, BMP, GIF, TIFF, and PNG image file formats. Fifth and sixth generation iPod Classics, as well as third generation iPod Nanos, can additionally play MPEG-4 (H.264/MPEG-4 AVC) and QuickTime video formats, with restrictions on video dimensions, encoding techniques and data-rates.The restrictions vary from generation to generation; for the earliest video iPods, video is required to be Baseline Profile (BP), up to Level 1.3, meaning most significantly no B-frames (BP), a maximum bitrate of 768 kb/s (BP Level 1.3), and a maximum framerate of 30 frame/s at 320×240 resolution. Newer iPods support BP up to level 3.0 (10,000 kb/s), for a maximum framerate of 30 frame/s at 640×480 resolution. Current specifications can be seen at iPod classic Technical Specs, and practical implementations can be seen in the libx264-ipod320.ffpreset and libx264-ipod640.ffpreset preset files for FFmpeg, as discussed in [Ffmpeg-user] Successful ipod h264 encoding, by Daniel Rogers, Jun 11, 2006. Originally, iPod software only worked with Classic Mac OS and Mac OS X; iPod software for Microsoft Windows was launched with the second generation model. Unlike most other media players, Apple does not support Microsoft's WMA audio format—but a converter for WMA files without Digital Rights Management (DRM) is provided with the Windows version of iTunes. MIDI files also cannot be played, but can be converted to audio files using the "Advanced" menu in iTunes. Alternative open-source audio formats, such as Ogg Vorbis and FLAC, are not supported without installing custom firmware onto an iPod (e.g., Rockbox).
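As a rough illustration of the video constraints quoted in the note above (H.264 Baseline Profile up to level 3.0, at most 30 frame/s at 640×480), the following Python sketch drives FFmpeg from the command line. The file names and the bitrate are placeholders, and the exact limits for a given iPod model should be taken from Apple's specifications rather than from this example.

```python
import subprocess

def encode_for_ipod(src: str, dst: str) -> None:
    """Transcode a video into an H.264 Baseline / level 3.0 file within the
    640x480, 30 frame/s limits described above (all values illustrative)."""
    cmd = [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",
        "-profile:v", "baseline",      # Baseline Profile: no B-frames
        "-level:v", "3.0",
        "-vf", "scale=640:480,fps=30", # no aspect-ratio handling in this sketch
        "-b:v", "1500k",               # placeholder bitrate, well under 10,000 kb/s
        "-c:a", "aac", "-b:a", "128k",
        dst,
    ]
    subprocess.run(cmd, check=True)

# Hypothetical usage:
# encode_for_ipod("holiday.mov", "holiday_ipod.m4v")
```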
During installation, an iPod is associated with one host computer. Each time an iPod connects to its host computer, iTunes can synchronize entire music libraries or music playlists either automatically or manually. Song ratings can be set on an iPod and synchronized later to the iTunes library, and vice versa. A user can access, play, and add music on a second computer if an iPod is set to manual and not automatic sync, but anything added or edited will be reversed upon connecting and syncing with the main computer and its library. If a user wishes to automatically sync music with another computer, an iPod's library will be entirely wiped and replaced with the other computer's library.
Interface
thumb|The signature iPod click wheel.
iPods with color displays use anti-aliased graphics and text, with sliding animations. All iPods (except the 3rd-generation iPod Shuffle, the 6th & 7th generation iPod Nano, and iPod Touch) have five buttons and the later generations have the buttons integrated into the click wheel – an innovation that gives an uncluttered, minimalist interface. The buttons perform basic functions such as menu, play, pause, next track, and previous track. Other operations, such as scrolling through menu items and controlling the volume, are performed by using the click wheel in a rotational manner. The 3rd-generation iPod Shuffle does not have any controls on the actual player; instead it has a small control on the earphone cable, with volume-up and -down buttons and a single button for play and pause, next track, etc. The iPod Touch has no click wheel; instead it uses a 3.5" touch screen along with a home button, sleep/wake button and (on the second and third generations of the iPod Touch) volume-up and -down buttons. The user interface of the iPod Touch is nearly identical to that of the iPhone, the main difference being the absence of a phone application; both devices run iOS.
iTunes Store
The iTunes Store (introduced April 29, 2003) is an online media store run by Apple and accessed through iTunes. The store became the market leader soon after its launchiTunes Music Store Catalog Tops One Million Songs, Apple Inc., August 10, 2004. Retrieved on December 28, 2006. and Apple announced the sale of videos through the store on October 12, 2005. Full-length movies became available on September 12, 2006.Scott-Joynt, Jeremy. Apple targets TV and film market, BBC News, September 12, 2006. Retrieved on September 12, 2006.
At the time the store was introduced, purchased audio files used the AAC format with added encryption, based on the FairPlay DRM system. Up to five authorized computers and an unlimited number of iPods could play the files. Burning the files with iTunes as an audio CD, then re-importing would create music files without the DRM. The DRM could also be removed using third-party software. However, in a deal with Apple, EMI began selling DRM-free, higher-quality songs on the iTunes Store, in a category called "iTunes Plus." While individual songs were made available at a cost of US$1.29, 30¢ more than the cost of a regular DRM song, entire albums were available for the same price, US$9.99, as DRM-encoded albums. On October 17, 2007, Apple lowered the cost of individual iTunes Plus songs to US$0.99 per song, the same as DRM-encoded tracks. On January 6, 2009, Apple announced that DRM had been removed from 80% of the music catalog, and that it would be removed from all music by April 2009.
iPods cannot play music files from competing music stores that use rival DRM technologies like Microsoft's protected WMA or RealNetworks' Helix DRM. Example stores include Napster and MSN Music. RealNetworks claims that Apple is creating problems for itself by using FairPlay to lock users into using the iTunes Store. Steve Jobs stated that Apple makes little profit from song sales, although Apple uses the store to promote iPod sales. However, iPods can play music files from online stores that do not use DRM, such as eMusic or Amie Street.
Universal Music Group decided not to renew its contract with the iTunes Store on July 3, 2007. Universal instead began supplying iTunes on an 'at will' basis.Evans, Jonny. Universal confirms iTunes contract change, Macworld UK, July 4, 2007. Retrieved on July 5, 2007.
Apple debuted the iTunes Wi-Fi Music Store on September 5, 2007, in its Media Event entitled "The Beat Goes On...". This service allows users to access the Music Store from either an iPhone or an iPod Touch and download songs directly to the device over a Wi-Fi connection or, in the case of an iPhone, the telephone network; the purchases can later be synced to the user's iTunes library.
Games
Video games are playable on various versions of iPods. The original iPod had the game Brick (originally invented by Apple's co-founder Steve Wozniak) included as a hidden easter egg; later firmware versions added it as a menu option. Later revisions of the iPod added three more games: Parachute, Solitaire, and Music Quiz.
In September 2006, the iTunes Store began to offer additional games for purchase with the launch of iTunes 7, compatible with the fifth generation iPod with iPod software 1.2 or later. Those games were: Bejeweled, Cubis 2, Mahjong, Mini Golf, Pac-Man, Tetris, Texas Hold 'Em, Vortex, Asphalt 4: Elite Racing and Zuma. Additional games have since been added. These games work on the 6th and 5th generation iPod Classic and the 5th and 4th generation iPod Nano.
With third parties like Namco, Square Enix, Electronic Arts, Sega, and Hudson Soft all making games for the iPod, Apple's MP3 player took steps towards entering the handheld video game market. Video game magazines such as GamePro and EGM have reviewed and rated most of its games.
The games are distributed as .ipg files, which are .zip archives under a different file extension. When unzipped, they reveal executable files along with common audio and image files, suggesting the possibility of third-party games. Apple has not publicly released a software development kit (SDK) for iPod-specific development."What's Inside an iPod Game?" bensinclair.com, September 14, 2006. Apps produced with the iPhone SDK are compatible only with iOS on the iPod Touch and iPhone, which cannot run clickwheel-based games.
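Because an .ipg file is a zip archive under another name, its contents can be inspected with standard tools. The sketch below is a minimal illustration; the file name is hypothetical and the internal layout of a real game archive may differ.

```python
import zipfile

def list_ipg_contents(path: str) -> None:
    """Open an iPod .ipg game file as the zip archive it actually is and
    print the files it contains (executables, audio, images)."""
    with zipfile.ZipFile(path) as archive:
        for info in archive.infolist():
            print(f"{info.file_size:>10} bytes  {info.filename}")

# Hypothetical usage:
# list_ipg_contents("Vortex.ipg")
```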
File storage and transfer
All iPods except the iPod Touch can function in "disk mode" as mass storage devices to store data files, although this may not be the default behavior; the iPod Touch requires special software to be used this way. If an iPod is formatted on a Mac OS computer, it uses the HFS+ file system format, which allows it to serve as a boot disk for a Mac computer. If it is formatted on Windows, the FAT32 format is used. With the release of the Windows-compatible iPod, the default file system used on the iPod line switched from HFS+ to FAT32, although it can be reformatted to either file system (excluding the iPod Shuffle which is strictly FAT32). Generally, if a new iPod (excluding the iPod Shuffle) is initially plugged into a computer running Windows, it will be formatted with FAT32, and if initially plugged into a Mac running Mac OS it will be formatted with HFS+.
Unlike many other MP3 players, simply copying audio or video files to the drive with a typical file management application will not allow an iPod to properly access them. The user must use software that has been specifically designed to transfer media files to iPods, so that the files are playable and viewable. Usually iTunes is used to transfer media to an iPod, though several alternative third-party applications are available on a number of different platforms.
iTunes 7 and above can transfer media purchased from the iTunes Store from an iPod to a computer, provided that the computer is authorized to play the DRM-protected media.
Media files are stored on an iPod in a hidden folder, along with a proprietary database file. The hidden content can be accessed on the host operating system by enabling hidden files to be shown. The media files can then be recovered manually by copying the files or folders off the iPod. Many third-party applications also allow easy copying of media files off of an iPod.
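The manual recovery described above can be sketched in a few lines of Python: walk the mounted iPod volume, including its hidden directories, and copy out any audio files found. The mount point and the set of file extensions are assumptions, the hidden folder layout varies between iPod models and software versions, and files on the device often carry obfuscated names, so a real tool would also read the audio metadata tags to restore titles.

```python
import os
import shutil

AUDIO_EXTENSIONS = {".mp3", ".m4a", ".aac", ".wav", ".aif"}  # assumed set

def recover_audio(ipod_mount: str, dest_dir: str) -> int:
    """Copy audio files out of an iPod's (hidden) folders into dest_dir.
    Returns the number of files copied. Paths are illustrative only."""
    os.makedirs(dest_dir, exist_ok=True)
    copied = 0
    for root, _dirs, files in os.walk(ipod_mount):
        for name in files:
            if os.path.splitext(name)[1].lower() in AUDIO_EXTENSIONS:
                shutil.copy2(os.path.join(root, name), dest_dir)
                copied += 1
    return copied

# Hypothetical usage (the mount point differs by operating system):
# recover_audio("/Volumes/IPOD", os.path.expanduser("~/recovered_music"))
```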
Models and features
While the suffix "Classic" was not introduced until the sixth generation, it has been applied here retroactively to all generic iPods for clarity.
Patent disputes
In 2005, Apple faced two lawsuits claiming patent infringement by the iPod line and its associated technologies:Apple faces patent lawsuits over its iPod, ChannelRegister, March 10, 2005. Retrieved on February 17, 2007. Advanced Audio Devices claimed the iPod line breached its patent on a "music jukebox",U.S. Patent 6,587,403 — Advanced Audio Devices' "music jukebox" patent. while a Hong Kong-based IP portfolio company called Pat-rights filed a suit claiming that Apple's FairPlay technology breached a patentU.S. Patent 6,665,797 — "Protection of software again against unauthorized use" (corrected to "Computer Apparatus/Software Access Control"). issued to inventor Ho Keung Tse. The latter case also includes the online music stores of Sony, RealNetworks, Napster, and Musicmatch as defendants.Apple, Sony among those named in new DRM lawsuit, AppleInsider, August 16, 2005. Retrieved on February 17, 2007.
Apple's application to the United States Patent and Trademark Office for a patent on "rotational user inputs",U.S. patent application 20030095096 Apple Inc.'s application on "rotational user inputs". as used on the iPod interface, received a third "non-final rejection" (NFR) in August 2005. Also in August 2005, Creative Technology, one of Apple's main rivals in the MP3 player market, announced that it held a patentU.S. Patent 6,928,433 Creative Technology's "Zen" patent. on part of the music selection interface used by the iPod line, which Creative Technology dubbed the "Zen Patent", granted on August 9, 2005.Creative wins MP3 player patent, BBC News, August 30, 2005. Retrieved on February 17, 2007. On May 15, 2006, Creative filed another suit against Apple with the United States District Court for the Northern District of California. Creative also asked the United States International Trade Commission to investigate whether Apple was breaching U.S. trade laws by importing iPods into the United States.
On August 24, 2006, Apple and Creative announced a broad settlement to end their legal disputes. Apple agreed to pay Creative US$100 million for a paid-up license to use Creative's awarded patent in all Apple products. As part of the agreement, Apple would recoup part of its payment if Creative was successful in licensing the patent. Creative then announced its intention to produce iPod accessories by joining the Made for iPod program.Apple & Creative Announce Broad Settlement..., Apple Inc. Retrieved on February 17, 2007.
Sales
thumb|right|400px|iPod quarterly sales. Note that Q1 is October through December of previous year, the holiday season.
Since October 2004, the iPod line has dominated digital music player sales in the United States, with over 90% of the market for hard drive-based players and over 70% of the market for all types of players.Marsal, Katie. iPod: how big can it get?, AppleInsider, May 24, 2006. Retrieved on February 17, 2007. During the year from January 2004 to January 2005, the high rate of sales caused its U.S. market share to increase from 31% to 65% and in July 2005, this market share was measured at 74%. In January 2007 the iPod market share reached 72.7% according to Bloomberg Online.
On January 8, 2004, Hewlett-Packard (HP) announced that they would sell HP-branded iPods under a license agreement from Apple. Several new retail channels were used—including Wal-Mart—and these iPods eventually made up 5% of all iPod sales. In July 2005, HP stopped selling iPods due to unfavorable terms and conditions imposed by Apple.HP to stop selling Apple's iPods, AppleInsider, July 29, 2005. Retrieved on August 6, 2007.
In January 2007, Apple reported record quarterly revenue of US$7.1 billion, of which 48% was made from iPod sales.
On April 9, 2007, it was announced that Apple had sold its one-hundred millionth iPod, making it the biggest selling digital music player of all time. In April 2007, Apple reported second quarter revenue of US$5.2 billion, of which 32% was made from iPod sales. Apple and several industry analysts suggest that iPod users are likely to purchase other Apple products such as Mac computers.
On October 22, 2007, Apple reported quarterly revenue of US$6.22 billion, of which 30.69% came from Apple notebook sales, 19.22% from desktop sales and 26% from iPod sales. Apple's fiscal 2007 revenue increased to US$24.01 billion with US$3.5 billion in profits. Apple ended the fiscal year 2007 with US$15.4 billion in cash and no debt.Apple Reports Fourth Quarter 2007 Results, Apple Inc., October 22, 2007. Retrieved on October 22, 2007.
On January 22, 2008, Apple reported the best quarterly revenue and earnings in Apple's history to that point. Apple posted record revenue of US$9.6 billion and record net quarterly profit of US$1.58 billion. 42% of Apple's revenue for the first fiscal quarter of 2008 came from iPod sales, followed by 21% from notebook sales and 16% from desktop sales.Apple Inc. (January 22, 2008). Apple Reports First Quarter Results. Press release. Retrieved on January 23, 2008
On October 21, 2008, Apple reported that only 14.21% of total revenue for the fourth fiscal quarter of 2008 came from iPods.AppleInsider (October 27, 2008). Retrieved on October 27, 2008 At the September 9, 2009 keynote presentation at the Apple Event, Phil Schiller announced that total cumulative sales of iPods exceeded 220 million.World of Apple. (September 9, 2009). Live Coverage From Apple’s “It’s Only Rock and Roll” Event. Press release. Retrieved on September 9, 2009 The continual decline in iPod sales since 2009 did not surprise Apple, as CFO Peter Oppenheimer explained in June 2009: "We expect our traditional MP3 players to decline over time as we cannibalize ourselves with the iPod Touch and the iPhone." The company's iPod sales have decreased every financial quarter since 2009, and no new model was introduced in 2013.
Apple later reported that the total number of iPods sold worldwide had reached 350 million.
Industry impact
iPods have won several awards ranging from engineering excellence,iPod and Bluetooth lead to prizes, BBC News, June 3, 2005. Retrieved on March 20, 2007. to most innovative audio product, to fourth best computer product of 2006.Apple wins 5 'World Class' awards, MacNN. Retrieved on February 17, 2007. iPods often receive favorable reviews, scoring well on looks, clean design, and ease of use. PC World wrote that the iPod line has "altered the landscape for portable audio players". Several industries are modifying their products to work better with both the iPod line and the AAC audio format. Examples include CD copy-protection schemes,Apple, iPod, and CD Copy Protection, MacRumors. Retrieved on February 17, 2007. and mobile phones, such as phones from Sony Ericsson and Nokia, which play AAC files rather than WMA.
Besides earning a reputation as a respected entertainment device, the iPod has also been accepted as a business device. Government departments, major institutions and international organisations have turned to the iPod line as a delivery mechanism for business communication and training, such as the Royal and Western Infirmaries in Glasgow, Scotland, where iPods are used to train new staff.Hospitals train staff with iPods, BBC News, March 29, 2006. Retrieved on June 16, 2007.
iPods have also gained popularity for use in education. Apple offers more information on educational uses for iPods on their website, including a collection of lesson plans. There has also been academic research done in this area in nursing education and more general K-16 education.Pedagogies afforded by new technologies: The introduction of iPods in one secondary school Duke University provided iPods to all incoming freshmen in the fall of 2004, and the iPod program continues today with modifications. Entertainment Weekly put it on its end-of-the-decade, "best-of" list, saying, "Yes, children, there really was a time when we roamed the earth without thousands of our favorite jams tucked comfortably into our hip pockets. Weird."
The iPod has also been credited with accelerating shifts within the music industry. By popularizing digital music storage, the iPod allowed users to abandon listening to entire albums and instead choose specific singles, which hastened the end of the Album Era in popular music.Tejas Morey. "How iTunes Changed The Music Industry Forever." MensXP (Times of India). Retrieved 5 January 2014.
Criticism
Battery problems
The advertised battery life on most models is different from the real-world achievable life. For example, the fifth generation 30 GB iPod is advertised as having up to 14 hours of music playback. An MP3.com report stated that this was virtually unachievable under real-life usage conditions, with a writer for MP3.com getting on average less than 8 hours from an iPod.MP3 Insider: The truth about your battery life, mp3.com, March 13, 2006. Retrieved on July 10, 2006. In 2003, class action lawsuits were brought against Apple complaining that the battery charges lasted for shorter lengths of time than stated and that the battery degraded over time.Apple investigates iPod batteries, BBC News, February 10, 2004. Retrieved on March 20, 2007. The lawsuits were settled by offering individuals either US$50 store credit or a free battery replacement.Horwitz, Jeremy. Apple’s iPod Battery Settlement, Explained, iLounge, June 10, 2005. Retrieved on August 27, 2006.
iPod batteries are not designed to be removed or replaced by the user, although some users have been able to open the case themselves, usually following instructions from third-party vendors of iPod replacement batteries. Compounding the problem, Apple initially would not replace worn-out batteries. The official policy was that the customer should buy a refurbished replacement iPod, at a cost almost equivalent to a brand new one. All lithium-ion batteries lose capacity during their lifetime even when not in useThe Curse of Lithium Ion Batteries, MP3 Newswire, January 6, 2006. Retrieved on November 30, 2006. (guidelines are available for prolonging life-span) and this situation led to a market for third-party battery replacement kits.
Apple announced a battery replacement program on November 14, 2003,iPod Battery FAQ. Retrieved on November 26, 2006. a week before a high publicity stunt and website by the Neistat Brothers.Neistat, Casey. A Message From the Neistat Brothers, November 20, 2003. Retrieved on February 17, 2007. The initial cost was US$99,Apple offers iPod battery replacement service, MacMinute, November 14, 2003. Retrieved on November 26, 2006. and it was lowered to US$59 in 2005. One week later, Apple offered an extended iPod warranty for US$59.AppleCare for iPod now available, MacMinute, November 21, 2003. Retrieved on November 26, 2006. For the iPod Nano, soldering tools are needed because the battery is soldered onto the main board. Fifth generation iPods have their battery attached to the backplate with adhesive.Ecker, Clint. Vivisection of the Video iPod, Ars Technica, October 19, 2005. Retrieved on November 26, 2006.Disassemble Guide for Video iPod. Retrieved on November 26, 2006.
The first-generation iPod Nano may overheat and pose a health and safety risk. Affected iPod Nanos were sold between September 2005 and December 2006. The fault was traced to a flawed battery from a single battery manufacturer. Apple recommended that owners of affected iPod Nanos stop using them. Under an Apple product replacement program, affected Nanos were replaced with current-generation Nanos free of charge.
Reliability and durability
iPods have been criticized for allegedly short life-spans and fragile hard drives. A 2005 survey conducted on the MacInTouch website found that the iPod line had an average failure rate of 13.7% (although they note that comments from respondents indicate that "the true iPod failure rate may be lower than it appears"). It concluded that some models were more durable than others.iPod Reliability Survey, MacInTouch, November 28, 2005. Retrieved on October 29, 2006. In particular, failure rates for iPods employing hard drives were usually above 20%, while those with flash memory had a failure rate below 10%. In late 2005, many users complained that the surface of the first-generation iPod Nano could become scratched easily, rendering the screen unusable.Apple responds to iPod nano screen concerns, Macworld, September 27, 2005. Retrieved on February 17, 2007.Arthur, Charles. iPod Nano owners in screen scratch trauma, The Register, September 25, 2005. Retrieved on February 17, 2007. A class action lawsuit was also filed.Fried, Ina. Suit filed over Nano scratches, CNet News, October 21, 2005. Retrieved on February 17, 2007. Apple initially considered the issue a minor defect, but later began shipping these iPods with protective sleeves.
Labor disputes
On June 11, 2006, the British tabloid The Mail on Sunday reported that iPods are mainly manufactured by workers who earn no more than US$50 per month and work 15-hour shifts.Inside Apple's iPod factories, Macworld UK, June 12, 2006. Retrieved on March 20, 2007. Apple investigated the case with independent auditors and found that, while some of the plant's labour practices met Apple's Code of Conduct, others did not: employees worked over 60 hours a week for 35% of the time, and worked more than six consecutive days for 25% of the time.Millard, Elizabeth. Is It Ethical To Own an iPod?. Retrieved on March 20, 2007.
Foxconn, Apple's manufacturer, initially denied the abuses,Foxconn denies iPod 'sweatshop' claims, MacNN, June 19, 2006. Retrieved on February 17, 2007. but when an auditing team from Apple found that workers had been working longer hours than were allowed under Chinese law, Foxconn promised to prevent workers from working more hours than the code allowed. Apple hired a workplace standards auditing company, Verité, and joined the Electronic Industry Code of Conduct Implementation Group to oversee the measures. On December 31, 2006, workers at the Foxconn factory in Longhua, Shenzhen formed a union affiliated with the All-China Federation of Trade Unions, the Chinese government-approved union umbrella organization.McDonald's and KFC seeking to resolve Chinese minimum wage issue ..., April 5, 2007, nytimes.com. Retrieved 2010 5 27.Wal-Mart backs down and allows Chinese workers to join union, August 11, 2006, Jonathan Watts, The Guardian
In 2010, a number of workers committed suicide at Foxconn operations in China. Apple, HP, and others stated that they were investigating the situation. Foxconn guards have been videotaped beating employees. Another employee had killed himself in 2009 after an Apple prototype went missing; in messages to friends, he claimed that he had been beaten and interrogated.Suicides Spark Inquiries Apple, H-P to Examine Asian Supplier After String of Deaths at Factory, Jason Dean, Ting-i Tsai, May 27, 2010, accessed May 27, 2010The Foxconn Suicides, May 28, 2010, wsj.com, WSJ opinion, accessed May 27, 2010
As of 2006, the iPod was produced by about 14,000 workers in the U.S. and 27,000 overseas. Further, the salaries attributed to this product were overwhelmingly distributed to highly skilled U.S. professionals, as opposed to lower skilled U.S. retail employees or overseas manufacturing labor. One interpretation of this result is that U.S. innovation can create more jobs overseas than domestically.
See also
Comparison of portable media players
Comparison of iPod managers
iPhone
Podcast
References
External links
– official site at Apple Inc.
iPod troubleshooting basics and service FAQ at Apple Inc.
Apple's 21st century Walkman article, Brent Schlender, Fortune, November 12, 2001
, Steven Levy, Newsweek, July 26, 2004
The Perfect Thing article, Steven Levy, Wired, November 2006
iPod (1st generation) complete disassembly at TakeItApart.com
Category:Apple Inc. hardware
Category:ITunes
Category:Portable media players
Category:Foxconn
Category:Computer-related introductions in 2001
Category:Products introduced in 2001
Bacteria
Bacteria (common noun bacteria, singular bacterium) constitute a large domain of prokaryotic microorganisms. Typically a few micrometres in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. Bacteria were among the first life forms to appear on Earth, and are present in most of its habitats. Bacteria inhabit soil, water, acidic hot springs, radioactive waste, and the deep portions of Earth's crust. Bacteria also live in symbiotic and parasitic relationships with plants and animals.
There are typically 40 million bacterial cells in a gram of soil and a million bacterial cells in a millilitre of fresh water. There are approximately 5×10³⁰ bacteria on Earth, forming a biomass which exceeds that of all plants and animals.C.Michael Hogan. 2010. Bacteria. Encyclopedia of Earth. eds. Sidney Draggan and C.J.Cleveland, National Council for Science and the Environment, Washington DC Bacteria are vital in recycling nutrients, with many of the stages in nutrient cycles dependent on these organisms, such as the fixation of nitrogen from the atmosphere and putrefaction. In the biological communities surrounding hydrothermal vents and cold seeps, bacteria provide the nutrients needed to sustain life by converting dissolved compounds, such as hydrogen sulphide and methane, to energy. On 17 March 2013, researchers reported data that suggested bacterial life forms thrive in the Mariana Trench, which with a depth of up to 11 kilometres is the deepest part of the Earth's oceans. Other researchers reported related studies showing that microbes thrive inside rocks up to 580 metres below the sea floor under 2.6 kilometres of ocean off the coast of the northwestern United States. According to one of the researchers, "You can find microbes everywhere—they're extremely adaptable to conditions, and survive wherever they are."
Most bacteria have not been characterised, and only about half of the bacterial phyla have species that can be grown in the laboratory. The study of bacteria is known as bacteriology, a branch of microbiology.
There are approximately ten times as many bacterial cells in the human flora as there are human cells in the body, with the largest number of the human flora being in the gut flora, and a large number on the skin. The vast majority of the bacteria in the body are rendered harmless by the protective effects of the immune system, and some are beneficial. However, several species of bacteria are pathogenic and cause infectious diseases, including cholera, syphilis, anthrax, leprosy, and bubonic plague. The most common fatal bacterial diseases are respiratory infections, with tuberculosis alone killing about 2 million people per year, mostly in sub-Saharan Africa. In developed countries, antibiotics are used to treat bacterial infections and are also used in farming, making antibiotic resistance a growing problem. In industry, bacteria are important in sewage treatment and the breakdown of oil spills, the production of cheese and yogurt through fermentation, and the recovery of gold, palladium, copper and other metals in the mining sector, as well as in biotechnology, and the manufacture of antibiotics and other chemicals.
Once regarded as plants constituting the class Schizomycetes, bacteria are now classified as prokaryotes. Unlike cells of animals and other eukaryotes, bacterial cells do not contain a nucleus and rarely harbour membrane-bound organelles. Although the term bacteria traditionally included all prokaryotes, the scientific classification changed after the discovery in the 1990s that prokaryotes consist of two very different groups of organisms that evolved from an ancient common ancestor. These evolutionary domains are called Bacteria and Archaea.
Etymology
The word bacteria is the plural of the New Latin bacterium, which is the latinisation of the Greek βακτήριον (bakterion), the diminutive of βακτηρία (bakteria), meaning "staff, cane", because the first ones to be discovered were rod-shaped.bacterium, on Oxford Dictionaries.
Origin and early evolution
The ancestors of modern bacteria were unicellular microorganisms that were the first forms of life to appear on Earth, about 4 billion years ago. For about 3 billion years, most organisms were microscopic, and bacteria and archaea were the dominant forms of life. In 2008, fossils of macroorganisms were discovered and named as the Francevillian biota. Although bacterial fossils exist, such as stromatolites, their lack of distinctive morphology prevents them from being used to examine the history of bacterial evolution, or to date the time of origin of a particular bacterial species. However, gene sequences can be used to reconstruct the bacterial phylogeny, and these studies indicate that bacteria diverged first from the archaeal/eukaryotic lineage.
Bacteria were also involved in the second great evolutionary divergence, that of the archaea and eukaryotes. Here, eukaryotes arose when ancient bacteria entered into endosymbiotic associations with the ancestors of eukaryotic cells, which were themselves possibly related to the Archaea. This involved the engulfment by proto-eukaryotic cells of alphaproteobacterial symbionts to form either mitochondria or hydrogenosomes, which are still found in all known Eukarya (sometimes in highly reduced form, e.g. in ancient "amitochondrial" protozoa). Later on, some eukaryotes that already contained mitochondria also engulfed cyanobacterial-like organisms. This led to the formation of chloroplasts in algae and plants. There are also some algae that originated from even later endosymbiotic events. Here, eukaryotes engulfed a eukaryotic alga that developed into a "second-generation" plastid. This is known as secondary endosymbiosis.
Morphology
left|thumb|350px|alt=a diagram showing bacteria morphology|Bacteria display many cell morphologies and arrangements
Bacteria display a wide diversity of shapes and sizes, called morphologies. Bacterial cells are about one-tenth the size of eukaryotic cells and are typically 0.5–5.0 micrometres in length. However, a few species are visible to the unaided eye—for example, Thiomargarita namibiensis is up to half a millimetre long and Epulopiscium fishelsoni reaches 0.7 mm. Among the smallest bacteria are members of the genus Mycoplasma, which measure only 0.3 micrometres, as small as the largest viruses. Some bacteria may be even smaller, but these ultramicrobacteria are not well-studied.
Most bacterial species are either spherical, called cocci (sing. coccus, from Greek kókkos, grain, seed), or rod-shaped, called bacilli (sing. bacillus, from Latin baculus, stick). Elongation is associated with swimming.Dusenbery, David B. (2009). Living at Micro Scale, pp. 20–25. Harvard University Press, Cambridge, Mass. ISBN 978-0-674-03116-6. Some bacteria, called vibrio, are shaped like slightly curved rods or comma-shaped; others can be spiral-shaped, called spirilla, or tightly coiled, called spirochaetes. A small number of species even have tetrahedral or cuboidal shapes. More recently, some bacteria were discovered deep under Earth's crust that grow as branching filamentous types with a star-shaped cross-section. The large surface area to volume ratio of this morphology may give these bacteria an advantage in nutrient-poor environments. This wide variety of shapes is determined by the bacterial cell wall and cytoskeleton, and is important because it can influence the ability of bacteria to acquire nutrients, attach to surfaces, swim through liquids and escape predators.
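The influence of surface area to volume ratio can be made concrete with a simple calculation for spherical cells, for which the ratio works out to 3/r. The radii below are arbitrary illustrative values, not measurements of particular species.

```python
def sphere_sa_to_v(radius_um: float) -> float:
    """Surface-area-to-volume ratio of a sphere:
    (4 * pi * r**2) / ((4/3) * pi * r**3) = 3 / r."""
    return 3.0 / radius_um

# Illustrative radii: a small coccus versus a cell ten times larger.
for r in (0.5, 1.0, 5.0):
    print(f"radius {r:>3} um -> SA:V ≈ {sphere_sa_to_v(r):4.1f} per um")
```

Smaller cells, and shapes that deviate from a sphere such as rods, filaments or star-shaped cross-sections, expose more membrane per unit of cytoplasm, which helps explain the advantage in nutrient-poor environments noted above.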
thumb|alt=an orange biofilm of thermophilic bacteria with white highlights|A biofilm of thermophilic bacteria in the outflow of Mickey Hot Springs, Oregon, approximately 20 mm thick.
Many bacterial species exist simply as single cells; others associate in characteristic patterns: Neisseria form diploids (pairs), Streptococcus form chains, and Staphylococcus group together in "bunch of grapes" clusters. Bacteria can also be elongated to form filaments, for example the Actinobacteria. Filamentous bacteria are often surrounded by a sheath that contains many individual cells. Certain types, such as species of the genus Nocardia, even form complex, branched filaments, similar in appearance to fungal mycelia.
Bacteria often attach to surfaces and form dense aggregations called biofilms or bacterial mats. These films can range from a few micrometres in thickness to up to half a metre in depth, and may contain multiple species of bacteria, protists and archaea. Bacteria living in biofilms display a complex arrangement of cells and extracellular components, forming secondary structures, such as microcolonies, through which there are networks of channels to enable better diffusion of nutrients. In natural environments, such as soil or the surfaces of plants, the majority of bacteria are bound to surfaces in biofilms. Biofilms are also important in medicine, as these structures are often present during chronic bacterial infections or in infections of implanted medical devices, and bacteria protected within biofilms are much harder to kill than individual isolated bacteria.
Even more complex morphological changes are sometimes possible. For example, when starved of amino acids, Myxobacteria detect surrounding cells in a process known as quorum sensing, migrate towards each other, and aggregate to form fruiting bodies up to 500 micrometres long and containing approximately 100,000 bacterial cells. In these fruiting bodies, the bacteria perform separate tasks; this type of cooperation is a simple type of multicellular organisation. For example, about one in 10 cells migrate to the top of these fruiting bodies and differentiate into a specialised dormant state called myxospores, which are more resistant to drying and other adverse environmental conditions than are ordinary cells.
Cellular structure
thumb|right|alt=Prokaryote cell with structure and parts|Structure and contents of a typical gram-positive bacterial cell (recognisable by the presence of only a single cell membrane).
Intracellular structures
The bacterial cell is surrounded by a cell membrane (also known as a lipid, cytoplasmic or plasma membrane). This membrane encloses the contents of the cell and acts as a barrier to hold nutrients, proteins and other essential components of the cytoplasm within the cell. As they are prokaryotes, bacteria do not usually have membrane-bound organelles in their cytoplasm, and thus contain few large intracellular structures. They lack a true nucleus, mitochondria, chloroplasts and the other organelles present in eukaryotic cells. Bacteria were once seen as simple bags of cytoplasm, but structures such as the prokaryotic cytoskeleton and the localisation of proteins to specific locations within the cytoplasm that give bacteria some complexity have been discovered. These subcellular levels of organisation have been called "bacterial hyperstructures".
Bacterial microcompartments, such as carboxysomes, provide a further level of organisation; they are compartments within bacteria that are surrounded by polyhedral protein shells, rather than by lipid membranes. These "polyhedral organelles" localise and compartmentalise bacterial metabolism, a function performed by the membrane-bound organelles in eukaryotes.
Many important biochemical reactions, such as energy generation, use concentration gradients across membranes. The general lack of internal membranes in bacteria means reactions such as electron transport occur across the cell membrane between the cytoplasm and the periplasmic space. However, in many photosynthetic bacteria the plasma membrane is highly folded and fills most of the cell with layers of light-gathering membrane. These light-gathering complexes may even form lipid-enclosed structures called chlorosomes in green sulfur bacteria. Other proteins import nutrients across the cell membrane, or expel undesired molecules from the cytoplasm.
thumb|left|450px|alt=protein-enclosed bacterial organelles with electron microscope image, and drawing of structure|Carboxysomes are protein-enclosed bacterial organelles. Top left is an electron microscope image of carboxysomes in Halothiobacillus neapolitanus, below is an image of purified carboxysomes. On the right is a model of their structure. Scale bars are 100 nm.
Bacteria do not have a membrane-bound nucleus, and their genetic material is typically a single circular DNA chromosome located in the cytoplasm in an irregularly shaped body called the nucleoid. The nucleoid contains the chromosome with its associated proteins and RNA. The phylum Planctomycetes and candidate phylum Poribacteria may be exceptions to the general absence of internal membranes in bacteria, because they appear to have a double membrane around their nucleoids and contain other membrane-bound cellular structures. Like all living organisms, bacteria contain ribosomes, often grouped in chains called polyribosomes, for the production of proteins, but the structure of the bacterial ribosome is different from that of eukaryotes and Archaea. Bacterial ribosomes have a sedimentation rate of 70S (measured in Svedberg units): their subunits have rates of 30S and 50S. Some antibiotics bind specifically to 70S ribosomes and inhibit bacterial protein synthesis. Those antibiotics kill bacteria without affecting the larger 80S ribosomes of eukaryotic cells and without harming the host.
Some bacteria produce intracellular nutrient storage granules for later use, such as glycogen, polyphosphate, sulfur or polyhydroxyalkanoates. Certain bacterial species, such as the photosynthetic Cyanobacteria, produce internal gas vesicles, which they use to regulate their buoyancy—allowing them to move up or down into water layers with different light intensities and nutrient levels. Intracellular membranes called chromatophores are also found in membranes of phototrophic bacteria. Used primarily for photosynthesis, they contain bacteriochlorophyll pigments and carotenoids. An early idea was that bacteria might contain membrane folds termed mesosomes, but these were later shown to be artefacts produced by the chemicals used to prepare the cells for electron microscopy. Inclusions are considered to be nonliving components of the cell that do not possess metabolic activity and are not bounded by membranes. The most common inclusions are glycogen, lipid droplets, crystals, and pigments. Volutin granules are cytoplasmic inclusions of complexed inorganic polyphosphate. These granules are called metachromatic granules due to their displaying the metachromatic effect; they appear red or blue when stained with the blue dyes methylene blue or toluidine blue. Gas vacuoles, which are freely permeable to gas, are membrane-bound vesicles present in some species of Cyanobacteria. They allow the bacteria to control their buoyancy. Microcompartments are widespread, membrane-bound organelles that are made of a protein shell that surrounds and encloses various enzymes. Carboxysomes are bacterial microcompartments that contain enzymes involved in carbon fixation. Magnetosomes are bacterial microcompartments, present in magnetotactic bacteria, that contain magnetic crystals.
Extracellular structures
In most bacteria, a cell wall is present on the outside of the cell membrane. The cell membrane and cell wall comprise the cell envelope. A common bacterial cell wall material is peptidoglycan (called "murein" in older sources), which is made from polysaccharide chains cross-linked by peptides containing D-amino acids. Bacterial cell walls are different from the cell walls of plants and fungi, which are made of cellulose and chitin, respectively. The cell wall of bacteria is also distinct from that of Archaea, which do not contain peptidoglycan. The cell wall is essential to the survival of many bacteria, and the antibiotic penicillin is able to kill bacteria by inhibiting a step in the synthesis of peptidoglycan.
There are broadly speaking two different types of cell wall in bacteria, a thick one in the gram-positives and a thinner one in the gram-negatives. The names originate from the reaction of cells to the Gram stain, a long-standing test for the classification of bacterial species.
Gram-positive bacteria possess a thick cell wall containing many layers of peptidoglycan and teichoic acids. In contrast, gram-negative bacteria have a relatively thin cell wall consisting of a few layers of peptidoglycan surrounded by a second lipid membrane containing lipopolysaccharides and lipoproteins. Lipopolysaccharides, also called endotoxins, are composed of polysaccharides and lipid A that is responsible for much of the toxicity of gram-negative bacteria. Most bacteria have the gram-negative cell wall, and only the Firmicutes and Actinobacteria have the alternative gram-positive arrangement. These two groups were previously known as the low G+C and high G+C gram-positive bacteria, respectively. These differences in structure can produce differences in antibiotic susceptibility; for instance, vancomycin can kill only gram-positive bacteria and is ineffective against gram-negative pathogens, such as Haemophilus influenzae or Pseudomonas aeruginosa. If the bacterial cell wall is entirely removed, it is called a protoplast, whereas if it is partially removed, it is called a spheroplast. β-Lactam antibiotics, such as penicillin, inhibit the formation of peptidoglycan cross-links in the bacterial cell wall. The enzyme lysozyme, found in human tears, also digests the cell wall of bacteria and is the body's main defence against eye infections.
Acid-fast bacteria, such as Mycobacteria, are resistant to decolorisation by acids during staining procedures. The high mycolic acid content of Mycobacteria is responsible for the staining pattern of poor absorption followed by high retention. The most common staining technique used to identify acid-fast bacteria is the Ziehl-Neelsen stain or acid-fast stain, in which the acid-fast bacilli are stained bright red and stand out clearly against a blue background. L-form bacteria are strains of bacteria that lack cell walls. The main pathogenic bacteria in this class are the Mycoplasma (not to be confused with Mycobacteria).
In many bacteria, an S-layer of rigidly arrayed protein molecules covers the outside of the cell. This layer provides chemical and physical protection for the cell surface and can act as a macromolecular diffusion barrier. S-layers have diverse but mostly poorly understood functions, but are known to act as virulence factors in Campylobacter and contain surface enzymes in Bacillus stearothermophilus.
thumb|left|alt=Helicobacter pylori electron micrograph, showing multiple flagella on the cell surface|Helicobacter pylori electron micrograph, showing multiple flagella on the cell surface
Flagella are rigid protein structures, about 20 nanometres in diameter and up to 20 micrometres in length, that are used for motility. Flagella are driven by the energy released by the transfer of ions down an electrochemical gradient across the cell membrane.
Fimbriae (sometimes called "attachment pili") are fine filaments of protein, usually 2–10 nanometres in diameter and up to several micrometres in length. They are distributed over the surface of the cell, and resemble fine hairs when seen under the electron microscope. Fimbriae are believed to be involved in attachment to solid surfaces or to other cells, and are essential for the virulence of some bacterial pathogens. Pili (sing. pilus) are cellular appendages, slightly larger than fimbriae, that can transfer genetic material between bacterial cells in a process called conjugation where they are called conjugation pili or "sex pili" (see bacterial genetics, below). They can also generate movement where they are called type IV pili (see movement, below).
A glycocalyx is produced by many bacteria to surround the cell, and varies in structural complexity, ranging from a disorganised slime layer of extracellular polymer to a highly structured capsule. These structures can protect cells from engulfment by eukaryotic cells such as macrophages (part of the human immune system). They can also act as antigens and be involved in cell recognition, as well as aiding attachment to surfaces and the formation of biofilms.
The assembly of these extracellular structures is dependent on bacterial secretion systems. These transfer proteins from the cytoplasm into the periplasm or into the environment around the cell. Many types of secretion systems are known and these structures are often essential for the virulence of pathogens, so are intensively studied.
Endospores
thumb|right|alt=Anthrax stained purple|Bacillus anthracis (stained purple) growing in cerebrospinal fluid
Certain genera of gram-positive bacteria, such as Bacillus, Clostridium, Sporohalobacter, Anaerobacter, and Heliobacterium, can form highly resistant, dormant structures called endospores. In almost all cases, one endospore is formed and this is not a reproductive process, although Anaerobacter can make up to seven endospores in a single cell. Endospores have a central core of cytoplasm containing DNA and ribosomes surrounded by a cortex layer and protected by an impermeable and rigid coat. Dipicolinic acid is a chemical compound that composes 5% to 15% of the dry weight of bacterial spores. It is implicated as responsible for the heat resistance of the endospore.
Endospores show no detectable metabolism and can survive extreme physical and chemical stresses, such as high levels of UV light, gamma radiation, detergents, disinfectants, heat, freezing, pressure, and desiccation. In this dormant state, these organisms may remain viable for millions of years, and endospores even allow bacteria to survive exposure to the vacuum and radiation in space. According to scientist Dr. Steinn Sigurdsson, "There are viable bacterial spores that have been found that are 40 million years old on Earth—and we know they're very hardened to radiation." Endospore-forming bacteria can also cause disease: for example, anthrax can be contracted by the inhalation of Bacillus anthracis endospores, and contamination of deep puncture wounds with Clostridium tetani endospores causes tetanus.
Metabolism
Bacteria exhibit an extremely wide variety of metabolic types. The distribution of metabolic traits within a group of bacteria has traditionally been used to define their taxonomy, but these traits often do not correspond with modern genetic classifications. Bacterial metabolism is classified into nutritional groups on the basis of three major criteria: the kind of energy used for growth, the source of carbon, and the electron donors used for growth. An additional criterion for respiratory microorganisms is the electron acceptor used for aerobic or anaerobic respiration.
Nutritional types in bacterial metabolism

Nutritional type | Source of energy | Source of carbon | Examples
Phototrophs | Sunlight | Organic compounds (photoheterotrophs) or carbon fixation (photoautotrophs) | Cyanobacteria, Green sulfur bacteria, Chloroflexi, or Purple bacteria
Lithotrophs | Inorganic compounds | Organic compounds (lithoheterotrophs) or carbon fixation (lithoautotrophs) | Thermodesulfobacteria, Hydrogenophilaceae, or Nitrospirae
Organotrophs | Organic compounds | Organic compounds (chemoheterotrophs) or carbon fixation (chemoautotrophs) | Bacillus, Clostridium or Enterobacteriaceae
Carbon metabolism in bacteria is either heterotrophic, where organic carbon compounds are used as carbon sources, or autotrophic, meaning that cellular carbon is obtained by fixing carbon dioxide. Heterotrophic bacteria include parasitic types. Typical autotrophic bacteria are phototrophic cyanobacteria, green sulfur-bacteria and some purple bacteria, but also many chemolithotrophic species, such as nitrifying or sulfur-oxidising bacteria. Energy metabolism of bacteria is either based on phototrophy, the use of light through photosynthesis, or based on chemotrophy, the use of chemical substances for energy, which are mostly oxidised at the expense of oxygen or alternative electron acceptors (aerobic/anaerobic respiration).
thumb|right|alt=blue green algae filaments|Filaments of photosynthetic cyanobacteria
Bacteria are further divided into lithotrophs that use inorganic electron donors and organotrophs that use organic compounds as electron donors. Chemotrophic organisms use the respective electron donors for energy conservation (by aerobic/anaerobic respiration or fermentation) and biosynthetic reactions (e.g., carbon dioxide fixation), whereas phototrophic organisms use them only for biosynthetic purposes. Respiratory organisms use chemical compounds as a source of energy by taking electrons from the reduced substrate and transferring them to a terminal electron acceptor in a redox reaction. This reaction releases energy that can be used to synthesise ATP and drive metabolism. In aerobic organisms, oxygen is used as the electron acceptor. In anaerobic organisms other inorganic compounds, such as nitrate, sulfate or carbon dioxide are used as electron acceptors. This leads to the ecologically important processes of denitrification, sulfate reduction, and acetogenesis, respectively.
Another way of life of chemotrophs in the absence of possible electron acceptors is fermentation, wherein the electrons taken from the reduced substrates are transferred to oxidised intermediates to generate reduced fermentation products (e.g., lactate, ethanol, hydrogen, butyric acid). Fermentation is possible, because the energy content of the substrates is higher than that of the products, which allows the organisms to synthesise ATP and drive their metabolism.
These processes are also important in biological responses to pollution; for example, sulfate-reducing bacteria are largely responsible for the production of the highly toxic forms of mercury (methyl- and dimethylmercury) in the environment. Non-respiratory anaerobes use fermentation to generate energy and reducing power, secreting metabolic by-products (such as ethanol in brewing) as waste. Facultative anaerobes can switch between fermentation and different terminal electron acceptors depending on the environmental conditions in which they find themselves.
Lithotrophic bacteria can use inorganic compounds as a source of energy. Common inorganic electron donors are hydrogen, carbon monoxide, ammonia (leading to nitrification), ferrous iron and other reduced metal ions, and several reduced sulfur compounds. In unusual circumstances, the gas methane can be used by methanotrophic bacteria as both a source of electrons and a substrate for carbon anabolism. In both aerobic phototrophy and chemolithotrophy, oxygen is used as a terminal electron acceptor, whereas under anaerobic conditions inorganic compounds are used instead. Most lithotrophic organisms are autotrophic, whereas organotrophic organisms are heterotrophic.
In addition to fixing carbon dioxide in photosynthesis, some bacteria also fix nitrogen gas (nitrogen fixation) using the enzyme nitrogenase. This environmentally important trait can be found in bacteria of nearly all the metabolic types listed above, but is not universal.
Regardless of the type of metabolic process they employ, the majority of bacteria are able to take in raw materials only in the form of relatively small molecules, which enter the cell by diffusion or through molecular channels in cell membranes. The Planctomycetes are the exception (just as they are in possessing membranes around their nuclear material): it has recently been shown that Gemmata obscuriglobus is able to take in large molecules via a process that in some ways resembles endocytosis, the process used by eukaryotic cells to engulf external items.
Growth and reproduction
thumb|250px|alt=drawing of showing the processes of binary fission, mitosis, and meiosis|Many bacteria reproduce through binary fission, which is compared to mitosis and meiosis in this image.
Unlike in multicellular organisms, increases in cell size (cell growth) and reproduction by cell division are tightly linked in unicellular organisms. Bacteria grow to a fixed size and then reproduce through binary fission, a form of asexual reproduction. Under optimal conditions, bacteria can grow and divide extremely rapidly, and bacterial populations can double as quickly as every 9.8 minutes. In cell division, two identical clone daughter cells are produced. Some bacteria, while still reproducing asexually, form more complex reproductive structures that help disperse the newly formed daughter cells. Examples include fruiting body formation by Myxobacteria and aerial hyphae formation by Streptomyces, or budding. Budding involves a cell forming a protrusion that breaks away and produces a daughter cell.
thumb|left|alt=E. coli colony |A colony of Escherichia coli
In the laboratory, bacteria are usually grown using solid or liquid media. Solid growth media, such as agar plates, are used to isolate pure cultures of a bacterial strain. However, liquid growth media are used when measurement of growth or large volumes of cells are required. Growth in stirred liquid media occurs as an even cell suspension, making the cultures easy to divide and transfer, although isolating single bacteria from liquid media is difficult. The use of selective media (media with specific nutrients added or deficient, or with antibiotics added) can help identify specific organisms.
Most laboratory techniques for growing bacteria use high levels of nutrients to produce large amounts of cells cheaply and quickly. However, in natural environments, nutrients are limited, meaning that bacteria cannot continue to reproduce indefinitely. This nutrient limitation has led to the evolution of different growth strategies (see r/K selection theory). Some organisms can grow extremely rapidly when nutrients become available, as in the formation of algal (and cyanobacterial) blooms that often occur in lakes during the summer. Other organisms have adaptations to harsh environments, such as the production of multiple antibiotics by Streptomyces that inhibit the growth of competing microorganisms. In nature, many organisms live in communities (e.g., biofilms) that may allow for increased supply of nutrients and protection from environmental stresses. These relationships can be essential for growth of a particular organism or group of organisms (syntrophy).
Bacterial growth follows four phases. When a population of bacteria first enters a high-nutrient environment that allows growth, the cells need to adapt to their new environment. The first phase of growth is the lag phase, a period of slow growth when the cells are adapting to the high-nutrient environment and preparing for fast growth. The lag phase has high biosynthesis rates, as proteins necessary for rapid growth are produced. The second phase of growth is the log phase, also known as the logarithmic or exponential phase. The log phase is marked by rapid exponential growth. The rate at which cells grow during this phase is known as the growth rate (k), and the time it takes the cells to double is known as the generation time (g). During the log phase, nutrients are metabolised at maximum speed until one of the nutrients is depleted and starts limiting growth. The third phase of growth is the stationary phase and is caused by depleted nutrients. The cells reduce their metabolic activity and consume non-essential cellular proteins. The stationary phase is a transition from rapid growth to a stress response state, and there is increased expression of genes involved in DNA repair, antioxidant metabolism and nutrient transport. The final phase is the death phase, in which the bacteria run out of nutrients and die.
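As a rough illustration of the exponential (log) phase described above, the sketch below assumes a hypothetical starting population and generation time (placeholder values, not measurements from this article) and uses the standard relation that the population doubles once per generation time, with growth rate k = ln(2)/g.

```python
import math

def population(n0, generation_time_min, elapsed_min):
    """Exponential-phase growth: the population doubles once per generation time."""
    return n0 * 2 ** (elapsed_min / generation_time_min)

# Hypothetical example values: 100 starting cells, 20-minute generation time.
n0, g = 100, 20.0
for t in (0, 20, 60, 120):
    print(f"t = {t:3d} min: {population(n0, g, t):,.0f} cells")

# The growth rate constant k relates to the generation time g by k = ln(2) / g.
k = math.log(2) / g
print(f"growth rate k = {k:.4f} per minute")
```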
Genomes
The genomes of thousands of bacterial species have been sequenced, with at least 9,000 sequences completed and more than 42,000 left as "permanent" drafts (as of Sep 2016).
Most bacteria have a single circular chromosome that can range in size from only 160,000 base pairs in the endosymbiotic bacterium Candidatus Carsonella ruddii, to 12,200,000 base pairs in the soil-dwelling bacterium Sorangium cellulosum. The genes in bacterial genomes are usually a single continuous stretch of DNA and although several different types of introns do exist in bacteria, these are much rarer than in eukaryotes. Some bacteria, including the Spirochaetes of the genus Borrelia, are a notable exception to this arrangement. Borrelia burgdorferi, the cause of Lyme disease, contains a single linear chromosome and several linear and circular plasmids.
Plasmids are small extra-chromosomal DNA molecules that may contain genes for antibiotic resistance or virulence factors. Because plasmids replicate independently of the chromosome, they can be lost during bacterial cell division; this risk is offset by the fact that a single bacterium can contain hundreds of copies of a single plasmid.
Genetics
Bacteria, as asexual organisms, inherit identical copies of their parent's genes (i.e., they are clonal). However, all bacteria can evolve by selection on changes to their genetic material (DNA) caused by genetic recombination or mutations. Mutations come from errors made during the replication of DNA or from exposure to mutagens. Mutation rates vary widely among different species of bacteria and even among different clones of a single species of bacteria. Genetic changes in bacterial genomes come from either random mutation during replication or "stress-directed mutation", where genes involved in a particular growth-limiting process have an increased mutation rate.
DNA transfer
Some bacteria also transfer genetic material between cells. This can occur in three main ways. First, bacteria can take up exogenous DNA from their environment, in a process called transformation. Genes can also be transferred by the process of transduction, when the integration of a bacteriophage introduces foreign DNA into the chromosome. The third method of gene transfer is conjugation, whereby DNA is transferred through direct cell contact.
Transduction of bacterial genes by bacteriophage appears to be a consequence of infrequent errors during intracellular assembly of virus particles, rather than a bacterial adaptation. Conjugation, in the much-studied E. coli system, is determined by plasmid genes, and is an adaptation for transferring copies of the plasmid from one bacterial host to another. Only rarely does a conjugative plasmid integrate into the host bacterial chromosome and subsequently transfer part of the host bacterial DNA to another bacterium. Plasmid-mediated transfer of host bacterial DNA also appears to be an accidental process rather than a bacterial adaptation.
Transformation, unlike transduction or conjugation, depends on numerous bacterial gene products that specifically interact to perform this complex process, and thus transformation is clearly a bacterial adaptation for DNA transfer. In order for a bacterium to bind, take up and recombine donor DNA into its own chromosome, it must first enter a special physiological state termed competence (see Natural competence). In Bacillus subtilis, about 40 genes are required for the development of competence. The length of DNA transferred during B. subtilis transformation can be between a third of a chromosome up to the whole chromosome. Transformation appears to be common among bacterial species, and thus far at least 60 species are known to have the natural ability to become competent for transformation. The development of competence in nature is usually associated with stressful environmental conditions, and seems to be an adaptation for facilitating repair of DNA damage in recipient cells.Bernstein H, Bernstein C, Michod RE (2012). "DNA repair as the primary adaptive function of sex in bacteria and eukaryotes". Chapter 1: pp. 1–49 in: DNA Repair: New Research, Sakura Kimura and Sora Shimizu (eds.). Nova Sci. Publ., Hauppauge, N.Y. ISBN 978-1-62100-808-8.
In ordinary circumstances, transduction, conjugation, and transformation involve transfer of DNA between individual bacteria of the same species, but occasionally transfer may occur between individuals of different bacterial species and this may have significant consequences, such as the transfer of antibiotic resistance. In such cases, gene acquisition from other bacteria or the environment is called horizontal gene transfer and may be common under natural conditions. Gene transfer is particularly important in antibiotic resistance as it allows the rapid transfer of resistance genes between different pathogens.
Bacteriophages
Bacteriophages are viruses that infect bacteria. Many types of bacteriophage exist; some simply infect and lyse their host bacteria, while others insert into the bacterial chromosome. A bacteriophage can contain genes that contribute to its host's phenotype: for example, in the evolution of Escherichia coli O157:H7 and Clostridium botulinum, the toxin genes in an integrated phage converted a harmless ancestral bacterium into a lethal pathogen. Bacteria resist phage infection through restriction modification systems that degrade foreign DNA, and a system that uses CRISPR sequences to retain fragments of the genomes of phages that the bacteria have come into contact with in the past, which allows them to block virus replication through a form of RNA interference. This CRISPR system provides bacteria with acquired immunity to infection.
Behaviour
Secretion
Bacteria frequently secrete chemicals into their environment in order to modify it favourably. The secretions are often proteins and may act as enzymes that digest some form of food in the environment.
Bioluminescence
A few bacteria have chemical systems that generate light. This bioluminescence often occurs in bacteria that live in association with fish, and the light probably serves to attract fish or other large animals.Dusenbery, David B. (1996). Life at Small Scale. Scientific American Library. ISBN 0-7167-5060-0.
Multicellularity
Bacteria often function as multicellular aggregates known as biofilms, exchanging a variety of molecular signals for inter-cell communication, and engaging in coordinated multicellular behaviour.
The communal benefits of multicellular cooperation include a cellular division of labour, accessing resources that cannot effectively be used by single cells, collectively defending against antagonists, and optimising population survival by differentiating into distinct cell types. For example, bacteria in biofilms can have more than 500 times increased resistance to antibacterial agents than individual "planktonic" bacteria of the same species.
One type of inter-cellular communication by a molecular signal is called quorum sensing, which allows bacteria to determine whether the local population density is high enough to make it worthwhile to invest in processes that succeed only when large numbers of similar organisms behave similarly, as in excreting digestive enzymes or emitting light.
Quorum sensing allows bacteria to coordinate gene expression, and enables them to produce, release and detect autoinducers or pheromones which accumulate with the growth in cell population.
Movement
thumb|350px|alt=diagram of Flagellum bacteria|Flagellum of gram-negative bacteria. The base drives the rotation of the hook and filament.
Many bacteria can move using a variety of mechanisms: flagella are used for swimming through fluids; bacterial gliding and twitching motility move bacteria across surfaces; and changes of buoyancy allow vertical motion.
Swimming bacteria frequently move near 10 body lengths per second and a few as fast as 100. This makes them at least as fast as fish, on a relative scale.Dusenbery, David B. (2009). Living at Micro Scale, p. 136. Harvard University Press, Cambridge, Mass. ISBN 978-0-674-03116-6.
In bacterial gliding and twitching motility, bacteria use their type IV pili as a grappling hook, repeatedly extending it, anchoring it and then retracting it with remarkable force (>80 pN).
Flagella are semi-rigid cylindrical structures that are rotated and function much like the propeller on a ship. Objects as small as bacteria operate at a low Reynolds number, and cylindrical forms are more efficient than the flat, paddle-like forms appropriate at human-size scale.Dusenbery, David B. (2009). Living at Micro Scale, Chapter 13. Harvard University Press, Cambridge, Mass. ISBN 978-0-674-03116-6.
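The low-Reynolds-number claim can be checked with a back-of-the-envelope estimate. The sketch below uses the standard definition Re = ρvL/μ with assumed, typical values for cell size and swimming speed; these are order-of-magnitude assumptions, not figures from this article.

```python
# Rough order-of-magnitude estimate of the Reynolds number for a swimming bacterium,
# Re = rho * v * L / mu, using assumed typical values (not figures from this article).
rho = 1000.0     # density of water, kg/m^3
mu = 1.0e-3      # dynamic viscosity of water, Pa*s
L = 2.0e-6       # assumed cell length, ~2 micrometres
v = 30.0e-6      # assumed swimming speed, ~30 micrometres per second

reynolds = rho * v * L / mu
print(f"Re ~ {reynolds:.1e}")   # on the order of 1e-5: viscous forces dominate over inertia
```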
Bacterial species differ in the number and arrangement of flagella on their surface; some have a single flagellum (monotrichous), a flagellum at each end (amphitrichous), clusters of flagella at the poles of the cell (lophotrichous), while others have flagella distributed over the entire surface of the cell (peritrichous). The bacterial flagellum is the best-understood motility structure in any organism and is made of about 20 proteins, with approximately another 30 proteins required for its regulation and assembly. The flagellum is a rotating structure driven by a reversible motor at the base that uses the electrochemical gradient across the membrane for power. This motor drives the motion of the filament, which acts as a propeller.
Many bacteria (such as E. coli) have two distinct modes of movement: forward movement (swimming) and tumbling. The tumbling allows them to reorient and makes their movement a three-dimensional random walk. (See external links below for link to videos.) The flagella of a unique group of bacteria, the spirochaetes, are found between two membranes in the periplasmic space. They have a distinctive helical body that twists about as it moves.
Motile bacteria are attracted or repelled by certain stimuli in behaviours called taxes: these include chemotaxis, phototaxis, energy taxis, and magnetotaxis. In one peculiar group, the myxobacteria, individual bacteria move together to form waves of cells that then differentiate to form fruiting bodies containing spores. The myxobacteria move only when on solid surfaces, unlike E. coli, which is motile in liquid or solid media.
Several Listeria and Shigella species move inside host cells by usurping the cytoskeleton, which is normally used to move organelles inside the cell. By promoting actin polymerisation at one pole of their cells, they can form a kind of tail that pushes them through the host cell's cytoplasm.
Classification and identification
thumb|alt=blue stain of Streptococcus mutans|Streptococcus mutans visualised with a Gram stain
Classification seeks to describe the diversity of bacterial species by naming and grouping organisms based on similarities. Bacteria can be classified on the basis of cell structure, cellular metabolism or on differences in cell components, such as DNA, fatty acids, pigments, antigens and quinones. While these schemes allowed the identification and classification of bacterial strains, it was unclear whether these differences represented variation between distinct species or between strains of the same species. This uncertainty was due to the lack of distinctive structures in most bacteria, as well as lateral gene transfer between unrelated species. Due to lateral gene transfer, some closely related bacteria can have very different morphologies and metabolisms. To overcome this uncertainty, modern bacterial classification emphasises molecular systematics, using genetic techniques such as guanine cytosine ratio determination, genome-genome hybridisation, as well as sequencing genes that have not undergone extensive lateral gene transfer, such as the rRNA gene. Classification of bacteria is determined by publication in the International Journal of Systematic Bacteriology, and Bergey's Manual of Systematic Bacteriology. The International Committee on Systematic Bacteriology (ICSB) maintains international rules for the naming of bacteria and taxonomic categories and for the ranking of them in the International Code of Nomenclature of Bacteria.
The term "bacteria" was traditionally applied to all microscopic, single-cell prokaryotes. However, molecular systematics showed prokaryotic life to consist of two separate domains, originally called Eubacteria and Archaebacteria, but now called Bacteria and Archaea that evolved independently from an ancient common ancestor. The archaea and eukaryotes are more closely related to each other than either is to the bacteria. These two domains, along with Eukarya, are the basis of the three-domain system, which is currently the most widely used classification system in microbiolology. However, due to the relatively recent introduction of molecular systematics and a rapid increase in the number of genome sequences that are available, bacterial classification remains a changing and expanding field. For example, a few biologists argue that the Archaea and Eukaryotes evolved from gram-positive bacteria.
The identification of bacteria in the laboratory is particularly relevant in medicine, where the correct treatment is determined by the bacterial species causing an infection. Consequently, the need to identify human pathogens was a major impetus for the development of techniques to identify bacteria.
The Gram stain, developed in 1884 by Hans Christian Gram, characterises bacteria based on the structural characteristics of their cell walls. The thick layers of peptidoglycan in the "gram-positive" cell wall stain purple, while the thin "gram-negative" cell wall appears pink. By combining morphology and Gram-staining, most bacteria can be classified as belonging to one of four groups (gram-positive cocci, gram-positive bacilli, gram-negative cocci and gram-negative bacilli). Some organisms are best identified by stains other than the Gram stain, particularly mycobacteria or Nocardia, which show acid-fastness on Ziehl–Neelsen or similar stains. Other organisms may need to be identified by their growth in special media, or by other techniques, such as serology.
Culture techniques are designed to promote the growth of particular bacteria and allow their identification, while restricting the growth of the other bacteria in the sample. Often these techniques are designed for specific specimens; for example, a sputum sample will be treated to identify organisms that cause pneumonia, while stool specimens are cultured on selective media to identify organisms that cause diarrhoea and prevent growth of non-pathogenic bacteria. Specimens that are normally sterile, such as blood, urine or spinal fluid, are cultured under conditions designed to grow all possible organisms. Once a pathogenic organism has been isolated, it can be further characterised by its morphology, growth patterns (such as aerobic or anaerobic growth), patterns of hemolysis, and staining.
As with bacterial classification, identification of bacteria is increasingly using molecular methods. Diagnostics using DNA-based tools, such as polymerase chain reaction, are increasingly popular due to their specificity and speed, compared to culture-based methods. These methods also allow the detection and identification of "viable but nonculturable" cells that are metabolically active but non-dividing. However, even using these improved methods, the total number of bacterial species is not known and cannot even be estimated with any certainty. Following present classification, there are slightly fewer than 9,300 known species of prokaryotes, which includes bacteria and archaea; but attempts to estimate the true extent of bacterial diversity have ranged from 10⁷ to 10⁹ total species—and even these diverse estimates may be off by many orders of magnitude.
Interactions with other organisms
thumb|300px|alt=chart showing bacterial infections upon various parts of human body|Overview of bacterial infections and main species involved.LEF.org > Bacterial Infections Updated: 19 January 2006. Retrieved on 11 April 2009
Despite their apparent simplicity, bacteria can form complex associations with other organisms. These symbiotic associations can be divided into parasitism, mutualism and commensalism. Due to their small size, commensal bacteria are ubiquitous and grow on animals and plants exactly as they will grow on any other surface. However, their growth can be increased by warmth and sweat, and large populations of these organisms in humans are the cause of body odour.
Predators
Some species of bacteria kill and then consume other microorganisms; these species are called predatory bacteria. These include organisms such as Myxococcus xanthus, which forms swarms of cells that kill and digest any bacteria they encounter. Other bacterial predators either attach to their prey in order to digest them and absorb nutrients, such as Vampirovibrio chlorellavorus, or invade another cell and multiply inside the cytosol, such as Daptobacter. These predatory bacteria are thought to have evolved from saprophages that consumed dead microorganisms, through adaptations that allowed them to entrap and kill other organisms.
Mutualists
Certain bacteria form close spatial associations that are essential for their survival. One such mutualistic association, called interspecies hydrogen transfer, occurs between clusters of anaerobic bacteria that consume organic acids, such as butyric acid or propionic acid, and produce hydrogen, and methanogenic Archaea that consume hydrogen. The bacteria in this association are unable to consume the organic acids as this reaction produces hydrogen that accumulates in their surroundings. Only the intimate association with the hydrogen-consuming Archaea keeps the hydrogen concentration low enough to allow the bacteria to grow.
In soil, microorganisms that reside in the rhizosphere (a zone that includes the root surface and the soil that adheres to the root after gentle shaking) carry out nitrogen fixation, converting nitrogen gas to nitrogenous compounds. This serves to provide an easily absorbable form of nitrogen for many plants, which cannot fix nitrogen themselves. Many other bacteria are found as symbionts in humans and other organisms. For example, the presence of over 1,000 bacterial species in the normal human gut flora of the intestines can contribute to gut immunity, synthesise vitamins, such as folic acid, vitamin K and biotin, convert sugars to lactic acid (see Lactobacillus), as well as ferment complex indigestible carbohydrates. The presence of this gut flora also inhibits the growth of potentially pathogenic bacteria (usually through competitive exclusion) and these beneficial bacteria are consequently sold as probiotic dietary supplements.
Pathogens
thumb|alt=Color-enhanced scanning electron micrograph of red Salmonella typhimurium in yellow human cells|Colour-enhanced scanning electron micrograph showing Salmonella typhimurium (red) invading cultured human cells
If bacteria form a parasitic association with other organisms, they are classed as pathogens. Pathogenic bacteria are a major cause of human death and disease and cause infections such as tetanus, typhoid fever, diphtheria, syphilis, cholera, foodborne illness, leprosy and tuberculosis. A pathogenic cause for a known medical disease may only be discovered many years later, as was the case with Helicobacter pylori and peptic ulcer disease. Bacterial diseases are also important in agriculture, with bacteria causing leaf spot, fire blight and wilts in plants, as well as Johne's disease, mastitis, salmonella and anthrax in farm animals.
Each species of pathogen has a characteristic spectrum of interactions with its human hosts. Some organisms, such as Staphylococcus or Streptococcus, can cause skin infections, pneumonia, meningitis and even overwhelming sepsis, a systemic inflammatory response producing shock, massive vasodilation and death. Yet these organisms are also part of the normal human flora and usually exist on the skin or in the nose without causing any disease at all. Other organisms invariably cause disease in humans, such as the Rickettsia, which are obligate intracellular parasites able to grow and reproduce only within the cells of other organisms. One species of Rickettsia causes typhus, while another causes Rocky Mountain spotted fever. Chlamydia, another phylum of obligate intracellular parasites, contains species that can cause pneumonia, or urinary tract infection and may be involved in coronary heart disease. Finally, some species, such as Pseudomonas aeruginosa, Burkholderia cenocepacia, and Mycobacterium avium, are opportunistic pathogens and cause disease mainly in people suffering from immunosuppression or cystic fibrosis.
Bacterial infections may be treated with antibiotics, which are classified as bactericidal if they kill bacteria, or bacteriostatic if they just prevent bacterial growth. There are many types of antibiotics and each class inhibits a process that is different in the pathogen from that found in the host. Examples of antibiotics that produce such selective toxicity are chloramphenicol and puromycin, which inhibit the bacterial ribosome but not the structurally different eukaryotic ribosome. Antibiotics are used both in treating human disease and in intensive farming to promote animal growth, where they may be contributing to the rapid development of antibiotic resistance in bacterial populations. Infections can be prevented by antiseptic measures such as sterilising the skin prior to piercing it with the needle of a syringe, and by proper care of indwelling catheters. Surgical and dental instruments are also sterilised to prevent contamination by bacteria. Disinfectants such as bleach are used to kill bacteria or other pathogens on surfaces to prevent contamination and further reduce the risk of infection.
Significance in technology and industry
Bacteria, often lactic acid bacteria, such as Lactobacillus and Lactococcus, in combination with yeasts and moulds, have been used for thousands of years in the preparation of fermented foods, such as cheese, pickles, soy sauce, sauerkraut, vinegar, wine and yogurt.
The ability of bacteria to degrade a variety of organic compounds is remarkable and has been used in waste processing and bioremediation. Bacteria capable of digesting the hydrocarbons in petroleum are often used to clean up oil spills. Fertiliser was added to some of the beaches in Prince William Sound in an attempt to promote the growth of these naturally occurring bacteria after the 1989 Exxon Valdez oil spill. These efforts were effective on beaches that were not too thickly covered in oil. Bacteria are also used for the bioremediation of industrial toxic wastes. In the chemical industry, bacteria are most important in the production of enantiomerically pure chemicals for use as pharmaceuticals or agrichemicals.
Bacteria can also be used in the place of pesticides in biological pest control. This commonly involves Bacillus thuringiensis (also called BT), a gram-positive, soil-dwelling bacterium. Subspecies of this bacterium are used as Lepidopteran-specific insecticides under trade names such as Dipel and Thuricide. Because of their specificity, these pesticides are regarded as environmentally friendly, with little or no effect on humans, wildlife, pollinators and most other beneficial insects.
Because of their ability to quickly grow and the relative ease with which they can be manipulated, bacteria are the workhorses for the fields of molecular biology, genetics and biochemistry. By making mutations in bacterial DNA and examining the resulting phenotypes, scientists can determine the function of genes, enzymes and metabolic pathways in bacteria, then apply this knowledge to more complex organisms. This aim of understanding the biochemistry of a cell reaches its most complex expression in the synthesis of huge amounts of enzyme kinetic and gene expression data into mathematical models of entire organisms. This is achievable in some well-studied bacteria, with models of Escherichia coli metabolism now being produced and tested. This understanding of bacterial metabolism and genetics allows the use of biotechnology to bioengineer bacteria for the production of therapeutic proteins, such as insulin, growth factors, or antibodies.
Because of their importance for research in general, samples of bacterial strains are isolated and preserved in Biological Resource Centers. This ensures the availability of the strain to scientists worldwide.
History of bacteriology
thumb|right|alt=painting of Antonie van Leeuwenhoek, in robe and frilled shirt, with ink pen and paper|Antonie van Leeuwenhoek, the first microbiologist and the first person to observe bacteria using a microscope.
Bacteria were first observed by the Dutch microscopist Antonie van Leeuwenhoek in 1676, using a single-lens microscope of his own design. He then published his observations in a series of letters to the Royal Society of London. Bacteria were Leeuwenhoek's most remarkable microscopic discovery. They were just at the limit of what his simple lenses could make out and, in one of the most striking hiatuses in the history of science, no one else would see them again for over a century.Asimov, Isaac (1982), Asimov's Biographical Encyclopedia of Science and Technology, 2nd edition, Garden City, New York: Doubleday and Company, pg 143. Only then were his by-then-largely-forgotten observations of bacteria—as opposed to his famous "animalcules" (spermatozoa)—taken seriously.
Christian Gottfried Ehrenberg introduced the word "bacterium" in 1828.Ehrenberg's Symbolae Physioe. Animalia evertebrata. Decas prima. Berlin, 1828. In fact, his Bacterium was a genus that contained non-spore-forming rod-shaped bacteria, as opposed to Bacillus, a genus of spore-forming rod-shaped bacteria defined by Ehrenberg in 1835.EHRENBERG (C.G.): Dritter Beitrag zur Erkenntniss grosser Organisation in der Richtung des kleinsten Raumes. Physikalische Abhandlungen der Koeniglichen Akademie der Wissenschaften zu Berlin aus den Jahren 1833–1835, 1835, pp. 143–336.
Louis Pasteur demonstrated in 1859 that the growth of microorganisms causes the fermentation process, and that this growth is not due to spontaneous generation. (Yeasts and moulds, commonly associated with fermentation, are not bacteria, but rather fungi.) Along with his contemporary Robert Koch, Pasteur was an early advocate of the germ theory of disease.
Robert Koch, a pioneer in medical microbiology, worked on cholera, anthrax and tuberculosis. In his research into tuberculosis Koch finally proved the germ theory, for which he received a Nobel Prize in 1905. In Koch's postulates, he set out criteria to test if an organism is the cause of a disease, and these postulates are still used today.
Though it was known in the nineteenth century that bacteria are the cause of many diseases, no effective antibacterial treatments were available. In 1910, Paul Ehrlich developed the first antibiotic, by changing dyes that selectively stained Treponema pallidum—the spirochaete that causes syphilis—into compounds that selectively killed the pathogen. Ehrlich had been awarded a 1908 Nobel Prize for his work on immunology, and pioneered the use of stains to detect and identify bacteria, with his work being the basis of the Gram stain and the Ziehl–Neelsen stain.
A major step forward in the study of bacteria came in 1977 when Carl Woese recognised that archaea have a separate line of evolutionary descent from bacteria. This new phylogenetic taxonomy depended on the sequencing of 16S ribosomal RNA, and divided prokaryotes into two evolutionary domains, as part of the three-domain system.
See also
Bacteriotherapy
Extremophile
Genetically modified bacteria
List of bacterial orders
Panspermia
Polysaccharide encapsulated bacteria
Psychrotrophic bacteria
References
Further reading
External links
MicrobeWiki, an extensive wiki about bacteria and viruses
Bacteria that affect crops and other plants
Bacterial Nomenclature Up-To-Date from DSMZ
Genera of the domain Bacteria—list of Prokaryotic names with Standing in Nomenclature
The largest bacteria
Tree of Life: Eubacteria
Videos of bacteria swimming and tumbling, use of optical tweezers and other videos.
Planet of the Bacteria by Stephen Jay Gould
On-line text book on bacteriology
Animated guide to bacterial cell structure.
Bacteria Make Major Evolutionary Shift in the Lab
Online collaboration for bacterial taxonomy.
PATRIC, a Bioinformatics Resource Center for bacterial pathogens, funded by NIAID
Bacterial Chemotaxis Interactive Simulator—A web-app that uses several simple algorithms to simulate bacterial chemotaxis.
Cell-Cell Communication in Bacteria on-line lecture by Bonnie Bassler, and TED: Discovering bacteria's amazing communication system
Sulfur-cycling fossil bacteria from the 1.8-Ga Duck Creek Formation provide promising evidence of evolution's null hypothesis, Proceedings of the National Academy of Sciences of the United States of America. Summarised in: Scientists discover bacteria that haven't evolved in more than 2 billion years, LiveScience and BusinessInsider
Category:Bacteriology
Category:Prokaryotes
Bacteria
Category:Microscopic organisms described by Antonie van Leeuwenhoek
Matter
In the classical physics observed in everyday life, if something has mass and takes up space, it is said to be composed of matter; this includes atoms (and thus molecules) and anything made up of these, but not other energy phenomena or waves such as light or sound. More generally, however, in (modern) physics, matter is not a fundamental concept because a universal definition of it is elusive: elementary constituents of atoms may not take up space individually, and massless particles may be composed to form objects that have mass (even when at rest).
All the everyday objects that we can bump into, touch or squeeze are ultimately composed of atoms. This ordinary atomic matter is in turn made up of interacting subatomic particles—usually a nucleus of protons and neutrons, and a cloud of orbiting electrons.
Typically, science considers these composite particles matter because they have both rest mass and volume. By contrast, massless particles, such as photons, are not considered matter, because they have neither rest mass nor volume. However, not all particles with rest mass have a classical volume, since fundamental particles such as quarks and leptons (sometimes equated with matter) are considered "point particles" with no effective size or volume. Nevertheless, quarks and leptons together make up "ordinary matter", and their interactions contribute to the effective volume of the composite particles that make up ordinary matter.
Matter exists in states (or phases): the classical solid, liquid, and gas; as well as the more exotic plasma, Bose–Einstein condensates, fermionic condensates, and quark–gluon plasma.
For much of the history of the natural sciences people have contemplated the exact nature of matter. The idea that matter was built of discrete building blocks, the so-called particulate theory of matter, was first put forward by the Greek philosophers Leucippus (~490 BC) and Democritus (~470–380 BC).
Comparison with mass
Matter should not be confused with mass, as the two are not quite the same in modern physics. For example, mass is a conserved quantity, which means that its value is unchanging through time, within closed systems. However, matter is not conserved in such systems, although this is not obvious in ordinary conditions on Earth, where matter is approximately conserved. Still, special relativity shows that matter may disappear by conversion into energy, even inside closed systems, and it can also be created from energy, within such systems. However, because mass (like energy) can neither be created nor destroyed, the quantity of mass and the quantity of energy remain the same during a transformation of matter (which represents a certain amount of energy) into non-material (i.e., non-matter) energy. This is also true in the reverse transformation of energy into matter.
Different fields of science use the term matter in different, and sometimes incompatible, ways. Some of these ways are based on loose historical meanings, from a time when there was no reason to distinguish mass and matter. As such, there is no single universally agreed scientific meaning of the word "matter". Scientifically, the term "mass" is well-defined, but "matter" is not. Sometimes in the field of physics "matter" is simply equated with particles that exhibit rest mass (i.e., that cannot travel at the speed of light), such as quarks and leptons. However, in both physics and chemistry, matter exhibits both wave-like and particle-like properties, the so-called wave–particle duality.
Definition
Based on mass, volume, and space
The common definition of matter is anything that has mass and volume (occupies space). For example, a car would be said to be made of matter, as it has mass and volume (occupies space).
The observation that matter occupies space goes back to antiquity. However, an explanation for why matter occupies space is recent, and is argued to be a result of the phenomenon described in the Pauli exclusion principle. Two particular examples where the exclusion principle clearly relates matter to the occupation of space are white dwarf stars and neutron stars, discussed further below.
Based on atoms
A definition of "matter" based on its physical and chemical structure is: matter is made up of atoms. As an example, deoxyribonucleic acid molecules (DNA) are matter under this definition because they are made of atoms. This definition can extend to include charged atoms and molecules, so as to include plasmas (gases of ions) and electrolytes (ionic solutions), which are not obviously included in the atoms definition. Alternatively, one can adopt the protons, neutrons, and electrons definition.
Based on protons, neutrons and electrons
A definition of "matter" more fine-scale than the atoms and molecules definition is: matter is made up of what atoms and molecules are made of, meaning anything made of positively charged protons, neutral neutrons, and negatively charged electrons. This definition goes beyond atoms and molecules, however, to include substances made from these building blocks that are not simply atoms or molecules, for example white dwarf matter—typically, carbon and oxygen nuclei in a sea of degenerate electrons. At a microscopic level, the constituent "particles" of matter such as protons, neutrons, and electrons obey the laws of quantum mechanics and exhibit wave–particle duality. At an even deeper level, protons and neutrons are made up of quarks and the force fields (gluons) that bind them together (see Quarks and leptons definition below).
Based on quarks and leptons
thumb|325px|Under the "quarks and leptons" definition, the elementary and composite particles made of the quarks (in purple) and leptons (in green) would be matter—while the gauge bosons (in red) would not be matter. However, interaction energy inherent to composite particles (for example, gluons involved in neutrons and protons) contribute to the mass of ordinary matter.
As seen in the above discussion, many early definitions of what can be called ordinary matter were based upon its structure or building blocks. On the scale of elementary particles, a definition that follows this tradition can be stated as: ordinary matter is everything that is composed of elementary fermions, namely quarks and leptons. The connection between these formulations follows.
Leptons (the most famous being the electron) and quarks (of which baryons, such as protons and neutrons, are made) combine to form atoms, which in turn form molecules. Because atoms and molecules are said to be matter, it is natural to phrase the definition as: ordinary matter is anything that is made of the same things that atoms and molecules are made of. (However, notice that one also can make from these building blocks matter that is not atoms or molecules.) Then, because electrons are leptons, and protons and neutrons are made of quarks, this definition in turn leads to the definition of matter as being quarks and leptons, which are the two types of elementary fermions. Carithers and Grannis state: Ordinary matter is composed entirely of first-generation particles, namely the [up] and [down] quarks, plus the electron and its neutrino. (Higher-generation particles quickly decay into first-generation particles, and thus are not commonly encountered.)
This definition of ordinary matter is more subtle than it first appears. All the particles that make up ordinary matter (leptons and quarks) are elementary fermions, while all the force carriers are elementary bosons. The W and Z bosons that mediate the weak force are not made of quarks or leptons, and so are not ordinary matter, even if they have mass (the W boson mass is 80.398 GeV/c²). In other words, mass is not something that is exclusive to ordinary matter.
The quark–lepton definition of ordinary matter, however, identifies not only the elementary building blocks of matter, but also includes composites made from the constituents (atoms and molecules, for example). Such composites contain an interaction energy that holds the constituents together, and may constitute the bulk of the mass of the composite. As an example, to a great extent, the mass of an atom is simply the sum of the masses of its constituent protons, neutrons and electrons. However, digging deeper, the protons and neutrons are made up of quarks bound together by gluon fields (see dynamics of quantum chromodynamics) and these gluon fields contribute significantly to the mass of hadrons. In other words, most of what composes the "mass" of ordinary matter is due to the binding energy of quarks within protons and neutrons. For example, the sum of the masses of the three quarks in a nucleon is approximately 12.5 MeV/c², which is low compared to the mass of a nucleon (approximately 938 MeV/c²). The bottom line is that most of the mass of everyday objects comes from the interaction energy of its elementary components.
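A quick arithmetic check of the statement above, using only the approximate figures already quoted (about 12.5 MeV/c² for the three quark masses and about 938 MeV/c² for the nucleon):

```python
# Rough arithmetic behind the statement above: almost all of a nucleon's mass
# comes from interaction (binding) energy rather than the quarks' rest masses.
quark_mass_sum = 12.5   # approximate sum of the three quark masses, MeV/c^2 (as quoted above)
nucleon_mass = 938.0    # approximate nucleon mass, MeV/c^2

fraction_from_binding = 1 - quark_mass_sum / nucleon_mass
print(f"fraction of nucleon mass from interaction energy ~ {fraction_from_binding:.1%}")
# prints roughly 98.7%
```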
The Standard Model groups matter particles into three generations, where each generation consists of two quarks and two leptons. The first generation is the up and down quarks, the electron and the electron neutrino; the second includes the charm and strange quarks, the muon and the muon neutrino; the third generation consists of the top and bottom quarks and the tau and tau neutrino. The most natural explanation for this would be that quarks and leptons of higher generations are excited states of the first generations. If this turns out to be the case, it would imply that quarks and leptons are composite particles, rather than elementary particles.
Based on theories of relativity
In the context of relativity, mass is not an additive quantity, in the sense that one can add the rest masses of particles in a system to get the total rest mass of the system. Thus, in relativity usually a more general view is that it is not the sum of rest masses, but the energy–momentum tensor that quantifies the amount of matter. This tensor gives the rest mass for the entire system. "Matter" therefore is sometimes considered as anything that contributes to the energy–momentum of a system, that is, anything that is not purely gravity. This view is commonly held in fields that deal with general relativity such as cosmology. In this view, light and other massless particles and fields are part of matter.
The reason for this is that in this definition, electromagnetic radiation (such as light) as well as the energy of electromagnetic fields contributes to the mass of systems, and therefore appears to add matter to them. For example, light radiation (or thermal radiation) trapped inside a box would contribute to the mass of the box, as would any kind of energy inside the box, including the kinetic energy of particles held by the box. Nevertheless, isolated individual particles of light (photons) and the isolated kinetic energy of massive particles, are normally not considered to be matter.
A difference between matter and mass therefore may seem to arise when single particles are examined. In such cases, the mass of single photons is zero. For particles with rest mass, such as leptons and quarks, isolation of the particle in a frame where it is not moving, removes its kinetic energy.
A source of definition difficulty in relativity arises from two definitions of mass in common use, one of which is formally equivalent to total energy (and is thus observer dependent), and the other of which is referred to as rest mass or invariant mass and is independent of the observer. Only "rest mass" is loosely equated with matter (since it can be weighed). Invariant mass is usually applied in physics to unbound systems of particles. However, energies which contribute to the "invariant mass" may be weighed also in special circumstances, such as when a system that has invariant mass is confined and has no net momentum (as in the box example above). Thus, a photon with no mass may (confusingly) still add mass to a system in which it is trapped. The same is true of the kinetic energy of particles, which by definition is not part of their rest mass, but which does add rest mass to systems in which these particles reside (an example is the mass added by the motion of gas molecules of a bottle of gas, or by the thermal energy of any hot object).
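The point that trapped, individually massless radiation adds rest mass to a system can be illustrated with the invariant-mass relation m²c⁴ = E² − (pc)². The sketch below works in units where c = 1 and uses arbitrary photon energies chosen purely for illustration.

```python
import math

def invariant_mass(total_energy, total_momentum):
    """Invariant mass from m^2 c^4 = E^2 - (p c)^2, in natural units where c = 1."""
    return math.sqrt(max(total_energy**2 - total_momentum**2, 0.0))

# Two photons of 1 MeV each travelling in opposite directions (momenta +1 and -1 MeV/c):
E_total = 1.0 + 1.0
p_total = 1.0 + (-1.0)
print(invariant_mass(E_total, p_total))   # 2.0 MeV/c^2: the confined-light system has rest mass

# A single free photon: E = 1 MeV, p = 1 MeV/c, so its invariant mass is zero.
print(invariant_mass(1.0, 1.0))           # 0.0
```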
Since such mass (kinetic energies of particles, the energy of trapped electromagnetic radiation and stored potential energy of repulsive fields) is measured as part of the mass of ordinary matter in complex systems, the "matter" status of "massless particles" and fields of force becomes unclear in such systems. These problems contribute to the lack of a rigorous definition of matter in science, although mass is easier to define as the total stress–energy above (this is also what is weighed on a scale, and what is the source of gravity).
Structure
In particle physics, fermions are particles that obey Fermi–Dirac statistics. Fermions can be elementary, like the electron—or composite, like the proton and neutron. In the Standard Model, there are two types of elementary fermions: quarks and leptons, which are discussed next.
Quarks
Quarks are particles of spin-1/2, implying that they are fermions. They carry an electric charge of −1/3 e (down-type quarks) or +2/3 e (up-type quarks). For comparison, an electron has a charge of −1 e. They also carry colour charge, which is the equivalent of the electric charge for the strong interaction. Quarks also undergo radioactive decay, meaning that they are subject to the weak interaction. Quarks are massive particles, and therefore are also subject to gravity.
Quark properties
Name (symbol) | Spin | Electric charge (e) | Mass (MeV/c²) | Mass comparable to | Antiparticle
Up-type quarks:
up (u) | 1/2 | +2/3 | 1.5 to 3.3 | ~5 electrons | antiup
charm (c) | 1/2 | +2/3 | 1160 to 1340 | ~1 proton | anticharm
top (t) | 1/2 | +2/3 | 169,100 to 173,300 | ~180 protons or ~1 tungsten atom | antitop
Down-type quarks:
down (d) | 1/2 | −1/3 | 3.5 to 6.0 | ~10 electrons | antidown
strange (s) | 1/2 | −1/3 | 70 to 130 | ~200 electrons | antistrange
bottom (b) | 1/2 | −1/3 | 4130 to 4370 | ~5 protons | antibottom
thumb|120px|Quark structure of a proton: 2 up quarks and 1 down quark.
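As a quick check of how the fractional quark charges listed above combine into the familiar charges of composite particles (proton = uud, neutron = udd):

```python
# Quick check that the fractional quark charges combine into the familiar
# integer charges of the proton (uud) and the neutron (udd).
from fractions import Fraction

charge = {"u": Fraction(2, 3), "d": Fraction(-1, 3)}

def total_charge(quarks):
    return sum(charge[q] for q in quarks)

print(total_charge("uud"))  # 1  -> proton charge +1 e
print(total_charge("udd"))  # 0  -> neutron charge 0
```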
Baryonic matter
Baryons are strongly interacting fermions, and so are subject to Fermi–Dirac statistics. Amongst the baryons are the protons and neutrons, which occur in atomic nuclei, but many other unstable baryons exist as well. The term baryon usually refers to triquarks—particles made of three quarks. "Exotic" baryons made of four quarks and one antiquark are known as the pentaquarks, but their existence is not generally accepted.
Baryonic matter is the part of the universe that is made of baryons (including all atoms). This part of the universe does not include dark energy, dark matter, black holes or various forms of degenerate matter, such as those that compose white dwarf stars and neutron stars. Microwave light seen by the Wilkinson Microwave Anisotropy Probe (WMAP) suggests that only about 4.6% of that part of the universe within range of the best telescopes (that is, matter that may be visible because light could reach us from it) is made of baryonic matter. About 23% is dark matter, and about 72% is dark energy.
As a matter of fact, the great majority of ordinary matter in the universe is unseen, since visible stars and gas inside galaxies and clusters account for less than 10 per cent of the ordinary matter contribution to the mass-energy density of the universe.
250px|thumb|A comparison between the white dwarf IK Pegasi B (center), its A-class companion IK Pegasi A (left) and the Sun (right). This white dwarf has a surface temperature of 35,500 K.
Degenerate matter
In physics, degenerate matter refers to the ground state of a gas of fermions at a temperature near absolute zero. The Pauli exclusion principle requires that only two fermions can occupy a quantum state, one spin-up and the other spin-down. Hence, at zero temperature, the fermions fill up sufficient levels to accommodate all the available fermions—and in the case of many fermions, the maximum kinetic energy (called the Fermi energy) and the pressure of the gas becomes very large, and depends on the number of fermions rather than the temperature, unlike normal states of matter.
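As a rough, hedged illustration of the energy scale involved, the sketch below evaluates the non-relativistic free-electron Fermi energy E_F = (ħ²/2mₑ)(3π²n)^(2/3) for an assumed, white-dwarf-like electron density. The density is an illustrative order-of-magnitude value, not a figure from this article, and at such densities the non-relativistic formula is only approximate, since the electrons begin to become relativistic.

```python
import math

# Order-of-magnitude sketch of the Fermi energy of a degenerate electron gas,
# E_F = (hbar^2 / 2 m_e) * (3 * pi^2 * n)^(2/3)  (non-relativistic free-electron formula).
hbar = 1.054_571_8e-34   # J*s
m_e = 9.109_383_7e-31    # kg
eV = 1.602_176_6e-19     # J per electronvolt

n = 1e36                 # assumed electron number density, m^-3 (white-dwarf-like, illustrative)
E_F = (hbar**2 / (2 * m_e)) * (3 * math.pi**2 * n) ** (2 / 3)
print(f"E_F ~ {E_F / eV / 1e6:.2f} MeV")   # roughly 0.4 MeV, far above everyday thermal energies
```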
Degenerate matter is thought to occur during the evolution of heavy stars. The demonstration by Subrahmanyan Chandrasekhar that white dwarf stars have a maximum allowed mass because of the exclusion principle caused a revolution in the theory of star evolution.
Degenerate matter includes the part of the universe that is made up of neutron stars and white dwarfs.
Strange matter
Strange matter is a particular form of quark matter, usually thought of as a liquid of up, down, and strange quarks. It is contrasted with nuclear matter, which is a liquid of neutrons and protons (which themselves are built out of up and down quarks), and with non-strange quark matter, which is a quark liquid that contains only up and down quarks. At high enough density, strange matter is expected to be color superconducting. Strange matter is hypothesized to occur in the core of neutron stars, or, more speculatively, as isolated droplets that may vary in size from femtometers (strangelets) to kilometers (quark stars).
Two meanings of the term "strange matter"
In particle physics and astrophysics, the term is used in two ways, one broader and the other more specific.
The broader meaning is just quark matter that contains three flavors of quarks: up, down, and strange. In this definition, there is a critical pressure and an associated critical density, and when nuclear matter (made of protons and neutrons) is compressed beyond this density, the protons and neutrons dissociate into quarks, yielding quark matter (probably strange matter).
The narrower meaning is quark matter that is more stable than nuclear matter. The idea that this could happen is the "strange matter hypothesis" of Bodmer and Witten. In this definition, the critical pressure is zero: the true ground state of matter is always quark matter. The nuclei that we see in the matter around us, which are droplets of nuclear matter, are actually metastable, and given enough time (or the right external stimulus) would decay into droplets of strange matter, i.e. strangelets.
Leptons
Leptons are particles of spin-1/2, meaning that they are fermions. They carry an electric charge of −1 e (charged leptons) or 0 e (neutrinos). Unlike quarks, leptons do not carry colour charge, meaning that they do not experience the strong interaction. Leptons also undergo radioactive decay, meaning that they are subject to the weak interaction. Leptons are massive particles and are therefore subject to gravity.
Lepton properties
Name (symbol) | Spin | Electric charge (e) | Mass (MeV/c²) | Mass comparable to | Antiparticle
Charged leptons:
electron (e−) | 1/2 | −1 | 0.5110 | 1 electron | antielectron
muon (μ−) | 1/2 | −1 | 105.7 | ~200 electrons | antimuon
tau (τ−) | 1/2 | −1 | 1,777 | ~2 protons | antitau
Neutrinos:
electron neutrino (νe) | 1/2 | 0 | < 0.000460 | < electron | electron antineutrino
muon neutrino (νμ) | 1/2 | 0 | < 0.19 | < electron | muon antineutrino
tau neutrino (ντ) | 1/2 | 0 | < 18.2 | < 40 electrons | tau antineutrino
Phases
thumb|250px||Phase diagram for a typical substance at a fixed volume. Vertical axis is pressure, horizontal axis is temperature. The green line marks the freezing point (above the green line is solid, below it is liquid) and the blue line the boiling point (above it is liquid and below it is gas). So, for example, at higher T, a higher P is necessary to maintain the substance in liquid phase. At the triple point the three phases (liquid, gas and solid) can coexist. Above the critical point there is no detectable difference between the phases. The dotted line shows the anomalous behavior of water: ice melts at constant temperature with increasing pressure.
In bulk, matter can exist in several different forms, or states of aggregation, known as phases, depending on ambient pressure, temperature and volume. A phase is a form of matter that has a relatively uniform chemical composition and physical properties (such as density, specific heat, refractive index, and so forth). These phases include the three familiar ones (solids, liquids, and gases), as well as more exotic states of matter (such as plasmas, superfluids, supersolids, Bose–Einstein condensates, ...). A fluid may be a liquid, gas or plasma. There are also paramagnetic and ferromagnetic phases of magnetic materials. As conditions change, matter may change from one phase into another. These phenomena are called phase transitions, and are studied in the field of thermodynamics. In nanomaterials, the vastly increased ratio of surface area to volume results in matter that can exhibit properties entirely different from those of bulk material, and not well described by any bulk phase (see nanomaterials for more details).
Phases are sometimes called states of matter, but this term can lead to confusion with thermodynamic states. For example, two gases maintained at different pressures are in different thermodynamic states (different pressures), but in the same phase (both are gases).
Antimatter
In particle physics and quantum chemistry, antimatter is matter that is composed of the antiparticles of those that constitute ordinary matter. If a particle and its antiparticle come into contact with each other, the two annihilate; that is, they may both be converted into other particles with equal energy in accordance with Einstein's equation E = mc². These new particles may be high-energy photons (gamma rays) or other particle–antiparticle pairs. The resulting particles are endowed with an amount of kinetic energy equal to the difference between the rest mass of the products of the annihilation and the rest mass of the original particle–antiparticle pair, which is often quite large.
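As a worked example of the energy scale involved, the sketch below applies E = mc² to electron–positron annihilation; the constants are standard physical values and the example itself is purely illustrative.

```python
# Energy released when an electron and its antiparticle (a positron) annihilate,
# using E = m c^2 with both rest masses converted entirely to photon energy.
m_e = 9.109_383_7e-31   # electron (and positron) rest mass, kg
c = 2.997_924_58e8      # speed of light, m/s
eV = 1.602_176_6e-19    # J per electronvolt

E = 2 * m_e * c**2      # both particles' rest energy is released
print(f"{E:.3e} J  =  {E / eV / 1e6:.3f} MeV")   # about 1.022 MeV (two 511 keV photons)
```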
Antimatter is not found naturally on Earth, except very briefly and in vanishingly small quantities (as the result of radioactive decay, lightning or cosmic rays). This is because antimatter that came to exist on Earth outside the confines of a suitable physics laboratory would almost instantly meet the ordinary matter that Earth is made of, and be annihilated. Antiparticles and some stable antimatter (such as antihydrogen) can be made in tiny amounts, but not in enough quantity to do more than test a few of its theoretical properties.
There is considerable speculation both in science and science fiction as to why the observable universe is apparently almost entirely matter, and whether other places are almost entirely antimatter instead. In the early universe, it is thought that matter and antimatter were equally represented, and the disappearance of antimatter requires an asymmetry in physical laws called the charge parity (or CP symmetry) violation. CP symmetry violation can be obtained from the Standard Model, but at this time the apparent asymmetry of matter and antimatter in the visible universe is one of the great unsolved problems in physics. Possible processes by which it came about are explored in more detail under baryogenesis.
Other types
[Image: Pie chart showing the fractions of energy in the universe contributed by different sources. Ordinary matter is divided into luminous matter (the stars and luminous gases and 0.005% radiation) and nonluminous matter (intergalactic gas and about 0.1% neutrinos and 0.04% supermassive black holes). Ordinary matter is uncommon. Modeled after Ostriker and Steinhardt. For more information, see NASA.]
Ordinary matter, in the quarks and leptons definition, constitutes about 4% of the energy of the observable universe. The remaining energy is theorized to be due to exotic forms, of which 23% is dark matter and 73% is dark energy.
[Image: Galaxy rotation curve for the Milky Way. The vertical axis is speed of rotation about the galactic center; the horizontal axis is distance from the galactic center. The sun is marked with a yellow ball. The observed curve of speed of rotation is blue; the predicted curve based upon stellar mass and gas in the Milky Way is red. The difference is due to dark matter or perhaps a modification of the law of gravity. Scatter in observations is indicated roughly by gray bars.]
Dark matter
In astrophysics and cosmology, dark matter is matter of unknown composition that does not emit or reflect enough electromagnetic radiation to be observed directly, but whose presence can be inferred from gravitational effects on visible matter. Observational evidence of the early universe and the big bang theory requires that this matter have energy and mass, but that it not be composed of either elementary fermions (as above) or gauge bosons. The commonly accepted view is that most of the dark matter is non-baryonic in nature. As such, it is composed of particles as yet unobserved in the laboratory. Perhaps they are supersymmetric particles, which are not Standard Model particles, but relics formed at very high energies in the early phase of the universe and still floating about.
Dark energy
In cosmology, dark energy is the name given to the antigravitating influence that is accelerating the rate of expansion of the universe. It is known not to be composed of known particles like protons, neutrons or electrons, nor of the particles of dark matter, because these all gravitate.
Exotic matter
Exotic matter is a hypothetical concept of particle physics. It covers any material that violates one or more of the classical energy conditions or is not made of known baryonic particles. Such materials would possess qualities like negative mass or being repelled rather than attracted by gravity.
Historical development
Antiquity (c. 610 BC–c. 322 BC)
The pre-Socratics were among the first recorded speculators about the underlying nature of the visible world. Thales (c. 624 BC–c. 546 BC) regarded water as the fundamental material of the world. Anaximander (c. 610 BC–c. 546 BC) posited that the basic material was wholly characterless or limitless: the Infinite (apeiron). Anaximenes (flourished 585 BC, d. 528 BC) posited that the basic stuff was pneuma or air. Heraclitus (c. 535–c. 475 BC) seems to say the basic element is fire, though perhaps he means that all is change. Empedocles (c. 490–430 BC) spoke of four elements of which everything was made: earth, water, air, and fire. Meanwhile, Parmenides argued that change does not exist, and Democritus argued that everything is composed of minuscule, inert bodies of all shapes called atoms, a philosophy called atomism. All of these notions had deep philosophical problems.Discussed by Aristotle in Physics, esp. book I, but also later; as well as Metaphysics I–II.
Aristotle (384 BC – 322 BC) was the first to put the conception on a sound philosophical basis, which he did in his natural philosophy, especially in Physics book I. He adopted as reasonable suppositions the four Empedoclean elements, but added a fifth, aether. Nevertheless, these elements are not basic in Aristotle's mind. Rather they, like everything else in the visible world, are composed of the basic principles matter and form.
The word Aristotle uses for matter, ὕλη (hyle or hule), can be literally translated as wood or timber, that is, "raw material" for building. Indeed, Aristotle's conception of matter is intrinsically linked to something being made or composed. In other words, in contrast to the early modern conception of matter as simply occupying space, matter for Aristotle is definitionally linked to process or change: matter is what underlies a change of substance. For example, a horse eats grass: the horse changes the grass into itself; the grass as such does not persist in the horse, but some aspect of it—its matter—does. The matter is not specifically described (e.g., as atoms), but consists of whatever persists in the change of substance from grass to horse. Matter in this understanding does not exist independently (i.e., as a substance), but exists interdependently (i.e., as a "principle") with form and only insofar as it underlies change. It can be helpful to conceive of the relationship of matter and form as very similar to that between parts and whole. For Aristotle, matter as such can only receive actuality from form; it has no activity or actuality in itself, similar to the way that parts as such only have their existence in a whole (otherwise they would be independent wholes).
Seventeenth and eighteenth centuries
René Descartes (1596–1650) originated the modern conception of matter. He was primarily a geometer. Instead of deducing the existence of matter from the physical reality of change, as Aristotle had, Descartes arbitrarily postulated matter to be an abstract, mathematical substance that occupies space.
For Descartes, matter has only the property of extension, so its only activity aside from locomotion is to exclude other bodies; this is the mechanical philosophy. (Though even this property seems to be non-essential: René Descartes, Principles of Philosophy II [1644], "On the Principles of Material Things", no. 4.) Descartes makes an absolute distinction between mind, which he defines as unextended, thinking substance, and matter, which he defines as unthinking, extended substance. They are independent things. In contrast, Aristotle defines matter and the formal/forming principle as complementary principles that together compose one independent thing (substance). In short, Aristotle defines matter (roughly speaking) as what things are actually made of (with a potential independent existence), but Descartes elevates matter to an actual independent thing in itself.
The continuity and difference between Descartes' and Aristotle's conceptions is noteworthy. In both conceptions, matter is passive or inert. In the respective conceptions matter has different relationships to intelligence. For Aristotle, matter and intelligence (form) exist together in an interdependent relationship, whereas for Descartes, matter and intelligence (mind) are definitionally opposed, independent substances.
Descartes' justification for restricting the inherent qualities of matter to extension is its permanence, but his real criterion is not permanence (which equally applied to color and resistance), but his desire to use geometry to explain all material properties.E.A. Burtt, Metaphysical Foundations of Modern Science (Garden City, New York: Doubleday and Company, 1954), 117–118. Like Descartes, Hobbes, Boyle, and Locke argued that the inherent properties of bodies were limited to extension, and that so-called secondary qualities, like color, were only products of human perception.J.E. McGuire and P.M. Heimann, "The Rejection of Newton's Concept of Matter in the Eighteenth Century", The Concept of Matter in Modern Philosophy ed. Ernan McMullin (Notre Dame: University of Notre Dame Press, 1978), 104–118 (105).
Isaac Newton (1643–1727) inherited Descartes' mechanical conception of matter. In the third of his "Rules of Reasoning in Philosophy", Newton lists the universal qualities of matter as "extension, hardness, impenetrability, mobility, and inertia".Isaac Newton, Mathematical Principles of Natural Philosophy, trans. A. Motte, revised by F. Cajori (Berkeley: University of California Press, 1934), pp. 398–400. Further analyzed by Maurice A. Finocchiaro, "Newton's Third Rule of Philosophizing: A Role for Logic in Historiography", Isis 65:1 (Mar. 1974), pp. 66–73. Similarly in Optics he conjectures that God created matter as "solid, massy, hard, impenetrable, movable particles", which were "...even so very hard as never to wear or break in pieces".Isaac Newton, Optics, Book III, pt. 1, query 31. The "primary" properties of matter were amenable to mathematical description, unlike "secondary" qualities such as color or taste. Like Descartes, Newton rejected the essential nature of secondary qualities.McGuire and Heimann, 104.
Newton developed Descartes' notion of matter by restoring to matter intrinsic properties in addition to extension (at least on a limited basis), such as mass. Newton's use of gravitational force, which worked "at a distance", effectively repudiated Descartes' mechanics, in which interactions happened exclusively by contact.
Though Newton's gravity would seem to be a power of bodies, Newton himself did not admit it to be an essential property of matter. Carrying the logic forward more consistently, Joseph Priestley (1733-1804) argued that corporeal properties transcend contact mechanics: chemical properties require the capacity for attraction. He argued matter has other inherent powers besides the so-called primary qualities of Descartes, et al.McGuire and Heimann, 113.
Nineteenth and twentieth centuries
Since Priestley's time, there has been a massive expansion in knowledge of the constituents of the material world (viz., molecules, atoms, subatomic particles), but there has been no further development in the definition of matter. Rather the question has been set aside. Noam Chomsky (born 1928) summarizes the situation that has prevailed since that time:
So matter is whatever physics studies and the object of study of physics is matter: there is no independent general definition of matter, apart from its fitting into the methodology of measurement and controlled experimentation. In sum, the boundaries between what constitutes matter and everything else remain as vague as the demarcation problem of delimiting science from everything else. (Nevertheless, it remains true that the mathematization regarded as requisite for a modern physical theory carries its own implicit notion of matter, which is very like Descartes', despite the demonstrated vacuity of the latter's notions.)
In the 19th century, following the development of the periodic table, and of atomic theory, atoms were seen as being the fundamental constituents of matter; atoms formed molecules and compounds.
The common definition in terms of occupying space and having mass is in contrast with most physical and chemical definitions of matter, which rely instead upon its structure and upon attributes not necessarily related to volume and mass. At the turn of the nineteenth century, the knowledge of matter began a rapid evolution.
Aspects of the Newtonian view still held sway. James Clerk Maxwell discussed matter in his work Matter and Motion. He carefully separates "matter" from space and time, and defines it in terms of the object referred to in Newton's first law of motion.
However, the Newtonian picture was not the whole story. In the 19th century, the term "matter" was actively discussed by a host of scientists and philosophers, and a brief outline can be found in Levere. A textbook discussion from 1870 suggests matter is what is made up of atoms: "Three divisions of matter are recognized in science: masses, molecules and atoms. A Mass of matter is any portion of matter appreciable by the senses. A Molecule is the smallest particle of matter into which a body can be divided without losing its identity. An Atom is a still smaller particle produced by division of a molecule."
Rather than simply having the attributes of mass and occupying space, matter was held to have chemical and electrical properties. In 1909 the famous physicist J. J. Thomson (1856-1940) wrote about the "constitution of matter" and was concerned with the possible connection between matter and electrical charge.
There is an entire literature concerning the "structure of matter", ranging from the "electrical structure" in the early 20th century, to the more recent "quark structure of matter", introduced today with the remark: "Understanding the quark structure of matter has been one of the most important advances in contemporary physics." In this connection, physicists speak of matter fields, and speak of particles as "quantum excitations of a mode of the matter field". And here is a quote from de Sabbata and Gasperini: "With the word "matter" we denote, in this context, the sources of the interactions, that is spinor fields (like quarks and leptons), which are believed to be the fundamental components of matter, or scalar fields, like the Higgs particles, which are used to introduce mass in a gauge theory (and that, however, could be composed of more fundamental fermion fields)."
In the late 19th century with the discovery of the electron, and in the early 20th century, with the discovery of the atomic nucleus, and the birth of particle physics, matter was seen as made up of electrons, protons and neutrons interacting to form atoms. Today, we know that even protons and neutrons are not indivisible; they can be divided into quarks, while electrons are part of a particle family called leptons. Both quarks and leptons are elementary particles, and are currently seen as being the fundamental constituents of matter.
These quarks and leptons interact through four fundamental forces: gravity, electromagnetism, weak interactions, and strong interactions. The Standard Model of particle physics is currently the best available description of these constituents and of the forces other than gravity; despite decades of effort, gravity cannot yet be accounted for at the quantum level and is only described by classical physics (see quantum gravity and graviton). Interactions between quarks and leptons are the result of an exchange of force-carrying particles (such as photons) between them. The force-carrying particles are not themselves building blocks. As one consequence, mass and energy (which cannot be created or destroyed) cannot always be related to matter (which can be created out of non-matter particles such as photons, or even out of pure energy, such as kinetic energy). Force carriers are usually not considered matter: the carriers of the electric force (photons) possess energy (see Planck relation) and the carriers of the weak force (W and Z bosons) are massive, yet neither is considered matter. However, while these particles are not considered matter, they do contribute to the total mass of atoms, subatomic particles, and all systems that contain them.
Summary
The modern conception of matter has been refined many times in history, in light of the improvement in knowledge of just what the basic building blocks are, and in how they interact.
The term "matter" is used throughout physics in a bewildering variety of contexts: for example, one refers to "condensed matter physics", "elementary matter", "partonic" matter, "dark" matter, "anti"-matter, "strange" matter, and "nuclear" matter. In discussions of matter and antimatter, normal matter has been referred to by Alfvén as koinomatter (Gk. common matter). It is fair to say that in physics, there is no broad consensus as to a general definition of matter, and the term "matter" usually is used in conjunction with a specifying modifier.
The history of the concept of matter is a history of the fundamental length scales used to define matter. Different building blocks apply depending upon whether one defines matter on an atomic or elementary particle level. One may use a definition that matter is atoms, or that matter is hadrons, or that matter is leptons and quarks depending upon the scale at which one wishes to define matter.
These quarks and leptons interact through four fundamental forces: gravity, electromagnetism, weak interactions, and strong interactions. The Standard Model of particle physics is currently the best available description of these constituents and of the forces other than gravity; despite decades of effort, gravity cannot yet be accounted for at the quantum level and is only described by classical physics (see quantum gravity and graviton).
See also
Antimatter
Ambiplasma
Antihydrogen
Antiparticle
Particle accelerator
Cosmology
Cosmological constant
Friedmann equations
Physical ontology
Dark matter
Axion
Minimal Supersymmetric Standard Model
Neutralino
Nonbaryonic dark matter
Scalar field dark matter
Philosophy
Atomism
Materialism
Physicalism
Substance theory
Other
Mass–energy equivalence
Mattergy
Pattern formation
Periodic Systems of Small Molecules
References
Further reading
Stephen Toulmin and June Goodfield, The Architecture of Matter (Chicago: University of Chicago Press, 1962).
Richard J. Connell, Matter and Becoming (Chicago: The Priory Press, 1966).
Ernan McMullin, The Concept of Matter in Greek and Medieval Philosophy (Notre Dame, Indiana: Univ. of Notre Dame Press, 1965).
Ernan McMullin, The Concept of Matter in Modern Philosophy (Notre Dame, Indiana: University of Notre Dame Press, 1978).
External links
Visionlearning Module on Matter
Matter in the universe How much Matter is in the Universe?
NASA on superfluid core of neutron star
Matter and Energy: A False Dichotomy – Conversations About Science with Theoretical Physicist Matt Strassler
Poultry
[Image: Poultry of the World]
Poultry () are domesticated birds kept by humans for the eggs they produce, their meat, their feathers, or sometimes as pets. These birds are most typically members of the superorder Galloanserae (fowl), especially the order Galliformes (which includes chickens, quails and turkeys) and the family Anatidae, in order Anseriformes, commonly known as "waterfowl" and including domestic ducks and domestic geese. Poultry also includes other birds that are killed for their meat, such as the young of pigeons (known as squabs) but does not include similar wild birds hunted for sport or food and known as game. The word "poultry" comes from the French/Norman word poule, itself derived from the Latin word pullus, which means small animal.
The domestication of poultry took place several thousand years ago. This may have originally been as a result of people hatching and rearing young birds from eggs collected from the wild, but later involved keeping the birds permanently in captivity. Domesticated chickens may have been used for cockfighting at first and quail kept for their songs, but soon it was realised how useful it was having a captive-bred source of food. Selective breeding for fast growth, egg-laying ability, conformation, plumage and docility took place over the centuries, and modern breeds often look very different from their wild ancestors. Although some birds are still kept in small flocks in extensive systems, most birds available in the market today are reared in intensive commercial enterprises. Poultry is the second most widely eaten type of meat globally and, along with eggs, provides nutritionally beneficial food containing high-quality protein accompanied by a low proportion of fat. All poultry meat should be properly handled and sufficiently cooked in order to reduce the risk of food poisoning.
Etymology
The word "poultry" comes from the Middle English "pultrie", from Old French pouletrie, from pouletier, poultry dealer, from poulet, pullet. The word "pullet" itself comes from Middle English pulet, from Old French polet, both from Latin pullus, a young fowl, young animal or chicken. The word "fowl" is of Germanic origin (cf. Old English Fugol, German Vogel, Danish Fugl).
Definition
"Poultry" is a term used for any kind of domesticated bird, captive-raised for its utility, and traditionally the word has been used to refer to wildfowl (Galliformes) and waterfowl (Anseriformes). "Poultry" can be defined as domestic fowls, including chickens, turkeys, geese and ducks, raised for the production of meat or eggs and the word is also used for the flesh of these birds used as food. The Encyclopædia Britannica lists the same bird groups but also includes guinea fowl and squabs (young pigeons). In R. D. Crawford's Poultry breeding and genetics, squabs are omitted but Japanese quail and common pheasant are added to the list, the latter frequently being bred in captivity and released into the wild. In his 1848 classic book on poultry, Ornamental and Domestic Poultry: Their History, and Management, Edmund Dixon included chapters on the peafowl, guinea fowl, mute swan, turkey, various types of geese, the muscovy duck, other ducks and all types of chickens including bantams. In colloquial speech, the term "fowl" is often used near-synonymously with "domesticated chicken" (Gallus gallus), or with "poultry" or even just "bird", and many languages do not distinguish between "poultry" and "fowl". Both words are also used for the flesh of these birds. Poultry can be distinguished from "game", defined as wild birds or mammals hunted for food or sport, a word also used to describe the flesh of these when eaten.
Examples
Bird | Wild ancestor | Domestication | Utilization
Chicken | Red junglefowl | Southeast Asia | meat, feathers, eggs, ornamentation, leather
Duck | Muscovy duck / Mallard | various | meat, feathers, eggs
Emu | Emu | various, 20th century | meat, leather, oil
Egyptian goose | Egyptian goose | Egypt | meat, feathers, eggs, ornamentation
Goose | Greylag goose / Swan goose | various | meat, feathers, eggs
Indian peafowl | Indian peafowl | various | meat, feathers, ornamentation, landscaping
Mute swan | Mute swan | various | feathers, eggs, landscaping
Ostrich | Ostrich | various, 20th century | meat, eggs, feathers, leather
Pigeon | Rock dove | Middle East | meat, feathers, ornamentation
Quail | Japanese quail, Northern bobwhite | Japan, Virginia | meat, eggs, feathers, pets
Turkey | Wild turkey | Mexico | meat, feathers
Grey francolin | Grey francolin | Pakistan, North India | meat, fighting, pets
Guineafowl | Helmeted guineafowl | Africa | meat, pest consumption, and alarm calling
Common pheasant | Common pheasant | Eurasia | meat
Golden pheasant | Golden pheasant | Eurasia | meat, mainly ornamental
Greater rhea | Greater rhea | various, 20th century | meat, leather, oil, eggs
Chickens
[Image: Cock with comb and wattles]
Chickens are medium-sized, chunky birds with an upright stance and characterised by fleshy red combs and wattles on their heads. Males, known as cocks, are usually larger, more boldly coloured, and have more exaggerated plumage than females (hens). Chickens are gregarious, omnivorous, ground-dwelling birds that in their natural surroundings search among the leaf litter for seeds, invertebrates, and other small animals. They seldom fly except as a result of perceived danger, preferring to run into the undergrowth if approached. Today's domestic chicken (Gallus gallus domesticus) is mainly descended from the wild red junglefowl of Asia, with some additional input from grey junglefowl. Domestication is believed to have taken place between 7,000 and 10,000 years ago, and what are thought to be fossilized chicken bones have been found in northeastern China dated to around 5,400 BC. Archaeologists believe domestication was originally for the purpose of cockfighting, the male bird being a doughty fighter. By 4,000 years ago, chickens seem to have reached the Indus Valley and 250 years later, they arrived in Egypt. They were still used for fighting and were regarded as symbols of fertility. The Romans used them in divination, and the Egyptians made a breakthrough when they learned the difficult technique of artificial incubation. Since then, the keeping of chickens has spread around the world for the production of food with the domestic fowl being a valuable source of both eggs and meat.
Since their domestication, a large number of breeds of chickens have been established, but with the exception of the white Leghorn, most commercial birds are of hybrid origin. In about 1800, chickens began to be kept on a larger scale, and modern high-output poultry farms were present in the United Kingdom from around 1920 and became established in the United States soon after the Second World War. By the mid-20th century, the poultry meat-producing industry was of greater importance than the egg-laying industry. Poultry breeding has produced breeds and strains to fulfil different needs; light-framed, egg-laying birds that can produce 300 eggs a year; fast-growing, fleshy birds destined for consumption at a young age, and utility birds which produce both an acceptable number of eggs and a well-fleshed carcase. Male birds are unwanted in the egg-laying industry and can often be identified as soon as they hatch for subsequent culling. In meat breeds, these birds are sometimes castrated (often chemically) to prevent aggression. The resulting bird, called a capon, also has more tender and flavourful meat.
[Image: Roman mosaic depicting a cockfight]
A bantam is a small variety of domestic chicken, either a miniature version of a member of a standard breed, or a "true bantam" with no larger counterpart. The name derives from the town of Bantam in Java where European sailors bought the local small chickens for their shipboard supplies. Bantams may be a quarter to a third of the size of standard birds and lay similarly small eggs. They are kept by small-holders and hobbyists for egg production, use as broody hens, ornamental purposes, and showing.
Cockfighting
Cockfighting is said to be the world's oldest spectator sport and may have originated in Persia 6,000 years ago. Two mature males (cocks or roosters) are set to fight each other, and will do so with great vigour until one is critically injured or killed. Breeds such as the Aseel were developed in the Indian subcontinent for their aggressive behaviour. The sport formed part of the culture of the ancient Indians, Chinese, Greeks, and Romans, and large sums were won or lost depending on the outcome of an encounter. Cockfighting has been banned in many countries during the last century on the grounds of cruelty to animals.
Ducks
Ducks are medium-sized aquatic birds with broad bills, eyes on the side of the head, fairly long necks, short legs set far back on the body, and webbed feet. Males, known as drakes, are often larger than females (simply known as ducks) and are differently coloured in some breeds. Domestic ducks are omnivores, eating a variety of animal and plant materials such as aquatic insects, molluscs, worms, small amphibians, waterweeds, and grasses. They feed in shallow water by dabbling, with their heads underwater and their tails upended. Most domestic ducks are too heavy to fly, and they are social birds, preferring to live and move around together in groups. They keep their plumage waterproof by preening, a process that spreads the secretions of the preen gland over their feathers.
[Image: Pekin ducks]
Clay models of ducks found in China dating back to 4000 BC may indicate the domestication of ducks took place there during the Yangshao culture. Even if this is not the case, domestication of the duck took place in the Far East at least 1500 years earlier than in the West. Lucius Columella, writing in the first century AD, advised those who sought to rear ducks to collect wildfowl eggs and put them under a broody hen, because when raised in this way, the ducks "lay aside their wild nature and without hesitation breed when shut up in the bird pen". Despite this, ducks did not appear in agricultural texts in Western Europe until about 810 AD, when they began to be mentioned alongside geese, chickens, and peafowl as being used for rental payments made by tenants to landowners.
It is widely agreed that the mallard (Anas platyrhynchos) is the ancestor of all breeds of domestic duck (with the exception of the Muscovy duck (Cairina moschata), which is not closely related to other ducks). Ducks are farmed mainly for their meat, eggs, and down. As is the case with chickens, various breeds have been developed, selected for egg-laying ability, fast growth, and a well-covered carcase. The most common commercial breed in the United Kingdom and the United States is the Pekin duck, which can lay 200 eggs a year and reaches slaughter weight in about 44 days. In the Western world, ducks are not as popular as chickens, because the latter produce larger quantities of white, lean meat and are easier to keep intensively, making the price of chicken meat lower than that of duck meat. While popular in haute cuisine, duck appears less frequently in the mass-market food industry. However, things are different in the East. Ducks are more popular there than chickens and are mostly still herded in the traditional way and selected for their ability to find sufficient food in harvested rice fields and other wet environments.
Geese
[Image: An Emden goose, a descendant of the wild greylag goose]
The greylag goose (Anser anser) was domesticated by the Egyptians at least 3000 years ago, and a different wild species, the swan goose (Anser cygnoides), domesticated in Siberia about a thousand years later, is known as a Chinese goose. The two hybridise with each other and the large knob at the base of the beak, a noticeable feature of the Chinese goose, is present to a varying extent in these hybrids. The hybrids are fertile and have resulted in several of the modern breeds. Despite their early domestication, geese have never gained the commercial importance of chickens and ducks.
Domestic geese are much larger than their wild counterparts and tend to have thick necks, an upright posture, and large bodies with broad rear ends. The greylag-derived birds are large and fleshy and used for meat, while the Chinese geese have smaller frames and are mainly used for egg production. The fine down of both is valued for use in pillows and padded garments. They forage on grass and weeds, supplementing this with small invertebrates, and one of the attractions of rearing geese is their ability to grow and thrive on a grass-based system. They are very gregarious and have good memories and can be allowed to roam widely in the knowledge that they will return home by dusk. The Chinese goose is more aggressive and noisy than other geese and can be used as a guard animal to warn of intruders. The flesh of meat geese is dark-coloured and high in protein, but they deposit fat subcutaneously, although this fat contains mostly monounsaturated fatty acids. The birds are killed either at around 10 weeks or at about 24 weeks of age; between these ages, the developing pin feathers make the carcase difficult to dress.
In some countries, geese and ducks are force-fed to produce livers with an exceptionally high fat content for the production of foie gras. Over 75% of world production of this product occurs in France, with lesser industries in Hungary and Bulgaria and a growing production in China. Foie gras is considered a luxury in many parts of the world, but the process of feeding the birds in this way is banned in many countries on animal welfare grounds.
Turkeys
[Image: Male domesticated turkey sexually displaying by showing the snood hanging over the beak, the caruncles hanging from the throat, and the 'beard' of small, black, stiff feathers on the chest]
Turkeys are large birds, their nearest relatives being the pheasant and the guineafowl. Males are larger than females and have spreading, fan-shaped tails and distinctive, fleshy wattles, called a snood, that hang from the top of the beak and are used in courtship display. Wild turkeys can fly, but seldom do so, preferring to run with a long, straddling gait. They roost in trees and forage on the ground, feeding on seeds, nuts, berries, grass, foliage, invertebrates, lizards, and small snakes.
The modern domesticated turkey is descended from one of six subspecies of wild turkey (Meleagris gallopavo) found in the present Mexican states of Jalisco, Guerrero and Veracruz.C. Michael Hogan. 2008. Wild turkey: Meleagris gallopavo, GlobalTwitcher.com, ed. N. L Stromberg Pre-Aztec tribes in south-central Mexico first domesticated the bird around 800 BC, and Pueblo Indians inhabiting the Colorado Plateau in the United States did likewise around 200 BC. They used the feathers for robes, blankets, and ceremonial purposes. More than 1,000 years later, they became an important food source. The first Europeans to encounter the bird misidentified it as a guineafowl, a bird known as a "turkey fowl" at that time because it had been introduced into Europe via Turkey.
Commercial turkeys are usually reared indoors under controlled conditions. These are often large buildings, purpose-built to provide ventilation and low light intensities (this reduces the birds' activity and thereby increases the rate of weight gain). The lights may be kept on for 24 hours a day, or a range of step-wise lighting regimens may be used, to encourage the birds to feed often and therefore grow rapidly. Females achieve slaughter weight at about 15 weeks of age and males at about 19. Mature commercial birds may be twice as heavy as their wild counterparts. Many different breeds have been developed, but the majority of commercial birds are white, as this improves the appearance of the dressed carcass, the pin feathers being less visible. Turkeys were at one time mainly consumed on special occasions such as Christmas (10 million birds in the United Kingdom) or Thanksgiving (60 million birds in the United States). However, they are increasingly becoming part of the everyday diet in many parts of the world.
Quail
[Image: Japanese quail]
The quail is a small to medium-sized, cryptically coloured bird. In its natural environment, it is found in bushy places, in rough grassland, among agricultural crops, and in other places with dense cover. It feeds on seeds, insects, and other small invertebrates. Being a largely ground-dwelling, gregarious bird, domestication of the quail was not difficult, although many of its wild instincts are retained in captivity. It was known to the Egyptians long before the arrival of chickens and was depicted in hieroglyphs from 2575 BC. It migrated across Egypt in vast flocks and the birds could sometimes be picked up off the ground by hand. These were the common quail (Coturnix coturnix), but modern domesticated flocks are mostly of Japanese quail (Coturnix japonica) which was probably domesticated as early as the 11th century AD in Japan. They were originally kept as songbirds, and they are thought to have been regularly used in song contests.
In the early 20th century, Japanese breeders began to selectively breed for increased egg production. By 1940, the quail egg industry was flourishing, but the events of World War II led to the complete loss of quail lines bred for their song type, as well as almost all of those bred for egg production. After the war, the few surviving domesticated quail were used to rebuild the industry, and all current commercial and laboratory lines are considered to have originated from this population. Modern birds can lay upward of 300 eggs a year and countries such as Japan, India, China, Italy, Russia, and the United States have established commercial Japanese quail farming industries. Japanese quail are also used in biomedical research in fields such as genetics, embryology, nutrition, physiology, pathology, and toxicity studies. These quail are closely related to the common quail, and many young hybrid birds are released into the wild each year to replenish dwindling wild populations.
Other poultry
Guinea fowl originated in southern Africa, and the species most often kept as poultry is the helmeted guineafowl (Numida meleagris). It is a medium-sized grey or speckled bird with a small naked head with colourful wattles and a knob on top, and was domesticated by the time of the ancient Greeks and Romans. Guinea fowl are hardy, sociable birds that subsist mainly on insects, but also consume grasses and seeds. They will keep a vegetable garden clear of pests and will eat the ticks that carry Lyme disease. They happily roost in trees and give a loud vocal warning of the approach of predators. Their flesh and eggs can be eaten in the same way as chickens, young birds being ready for the table at the age of about four months.
A squab is the name given to the young of domestic pigeons that are destined for the table. Like other domesticated pigeons, birds used for this purpose are descended from the rock pigeon (Columba livia). Special utility breeds with desirable characteristics are used. Two eggs are laid and incubated for about 17 days. When they hatch, the squabs are fed by both parents on "pigeon's milk", a thick secretion high in protein produced by the crop. Squabs grow rapidly, but are slow to fledge and are ready to leave the nest at 26 to 30 days. By this time, the adult pigeons will have laid and be incubating another pair of eggs and a prolific pair should produce two squabs every four weeks during a breeding season lasting several months.
Poultry farming
[Image: Free-range ducks in Hainan Province, China]
Worldwide, more chickens are kept than any other type of poultry, with over 50 billion birds being raised each year as a source of meat and eggs. Traditionally, such birds would have been kept extensively in small flocks, foraging during the day and housed at night. This is still the case in developing countries, where the women often make important contributions to family livelihoods through keeping poultry. However, rising world populations and urbanization have led to the bulk of production being in larger, more intensive specialist units. These are often situated close to where the feed is grown or near to where the meat is needed, and result in cheap, safe food being made available for urban communities. Profitability of production depends very much on the price of feed, which has been rising. High feed costs could limit further development of poultry production.
In free-range husbandry, the birds can roam freely outdoors for at least part of the day. Often, this is in large enclosures, but the birds have access to natural conditions and can exhibit their normal behaviours. A more intensive system is yarding, in which the birds have access to a fenced yard and poultry house at a higher stocking rate. Poultry can also be kept in a barn system, with no access to the open air, but with the ability to move around freely inside the building. The most intensive system for egg-laying chickens is battery cages, often set in multiple tiers. In these, several birds share a small cage which restricts their ability to move around and behave in a normal manner. The eggs are laid on the floor of the cage and roll into troughs outside for ease of collection. Battery cages for hens have been illegal in the EU since January 1, 2012.
Chickens raised intensively for their meat are known as "broilers". Breeds have been developed that can grow to an acceptable carcass size in six weeks or less. Broilers grow so fast that their legs cannot always support their weight and their hearts and respiratory systems may not be able to supply enough oxygen to their developing muscles. Mortality rates, at 1%, are much higher than for less-intensively reared laying birds, which take 18 weeks to reach similar weights. Processing the birds is done automatically with conveyor-belt efficiency. They are hung by their feet, stunned, killed, bled, scalded, plucked, have their heads and feet removed, eviscerated, washed, chilled, drained, weighed, and packed, all within the course of little over two hours.
Both intensive and free-range farming have animal welfare concerns. In intensive systems, cannibalism, feather pecking and vent pecking can be common, with some farmers using beak trimming as a preventative measure. Diseases can also be common and spread rapidly through the flock. In extensive systems, the birds are exposed to adverse weather conditions and are vulnerable to predators and disease-carrying wild birds. Barn systems have been found to have the worst bird welfare. In Southeast Asia, a lack of disease control in free-range farming has been associated with outbreaks of avian influenza.
Poultry shows
In many countries, national and regional poultry shows are held where enthusiasts exhibit their birds which are judged on certain phenotypical breed traits as specified by their respective breed standards. The idea of poultry exhibition may have originated after cockfighting was made illegal, as a way of maintaining a competitive element in poultry husbandry. Breed standards were drawn up for egg-laying, meat-type, and purely ornamental birds, aiming for uniformity. Sometimes, poultry shows are part of general livestock shows, and sometimes they are separate events such as the annual "National Championship Show" in the United Kingdom organised by the Poultry Club of Great Britain.
Poultry as food
Trade
[Image: Chicken and duck eggs on sale in Hong Kong]
Poultry is the second most widely eaten type of meat in the world, accounting for about 30% of total meat production worldwide compared to pork at 38%. Sixteen billion birds are raised annually for consumption, more than half of these in industrialised, factory-like production units.Raloff, Janet. Food for Thought: Global Food Trends. Science News Online. May 31, 2003. Global broiler meat production rose to 84.6 million tonnes in 2013. The largest producers were the United States (20%), China (16.6%), Brazil (15.1%) and the European Union (11.3%). There are two distinct models of production; the European Union supply chain model seeks to supply products which can be traced back to the farm of origin. This model faces the increasing costs of implementing additional food safety requirements, welfare issues and environmental regulations. In contrast, the United States model turns the product into a commodity.
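To give a sense of scale, the shares quoted above can be applied to the 84.6 million tonne total for 2013. The sketch below simply performs that arithmetic with the rounded figures from this paragraph; it is an illustration only, and the variable names are invented for the example.

```python
# Approximate 2013 broiler-meat output implied by the production shares quoted above.
total_million_tonnes = 84.6  # global broiler meat production, 2013 (figure from the text)

shares = {  # share of global production (figures from the text)
    "United States": 0.200,
    "China": 0.166,
    "Brazil": 0.151,
    "European Union": 0.113,
}

for producer, share in shares.items():
    print(f"{producer}: ~{total_million_tonnes * share:.1f} million tonnes")
```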
World production of duck meat was about 4.2 million tonnes in 2011 with China producing two thirds of the total, some 1.7 billion birds. Other notable duck-producing countries in the Far East include Vietnam, Thailand, Malaysia, Myanmar, Indonesia and South Korea (12% in total). France (3.5%) is the largest producer in the West, followed by other EU nations (3%) and North America (1.7%). China was also by far the largest producer of goose and guinea fowl meat, with a 94% share of the 2.6 million tonne global market.
Global egg production was expected to reach 65.5 million tonnes in 2013, surpassing all previous years. Between 2000 and 2010, egg production was growing globally at around 2% per year, but since then growth has slowed down to nearer 1%.
Cuts of poultry
[Image: In the poultry pavilion of the Rungis International Market, France]
Poultry is available fresh or frozen, as whole birds or as joints (cuts), bone-in or deboned, seasoned in various ways, raw or ready cooked. The meatiest parts of a bird are the flight muscles on its chest, called "breast" meat, and the walking muscles on the legs, called the "thigh" and "drumstick". The wings are also eaten (Buffalo wings are a popular example in the United States) and may be split into three segments, the meatier "drumette", the "wingette" (also called the "flat"), and the wing tip (also called the "flapper"). In Japan, the wing is frequently separated, and these parts are referred to as 手羽元 (teba-moto "wing base") and 手羽先 (teba-saki "wing tip").
Dark meat, which avian myologists refer to as "red muscle", is used for sustained activity—chiefly walking, in the case of a chicken. The dark colour comes from the protein myoglobin, which plays a key role in oxygen uptake and storage within cells. White muscle, in contrast, is suitable only for short bursts of activity such as, for chickens, flying. Thus, the chicken's leg and thigh meat are dark, while its breast meat (which makes up the primary flight muscles) is white. Other birds with breast muscle more suitable for sustained flight, such as ducks and geese, have red muscle (and therefore dark meat) throughout. Some cuts of meat including poultry expose the microscopic regular structure of intracellular muscle fibrils which can diffract light and produce iridescent colours, an optical phenomenon sometimes called structural colouration.
Health and disease (humans)
[Image: Cuts from plucked chickens]
Poultry meat and eggs provide nutritionally beneficial food containing protein of high quality. This is accompanied by low levels of fat which have a favourable mix of fatty acids. Chicken meat contains about two to three times as much polyunsaturated fat as most types of red meat when measured by weight. However, for boneless, skinless chicken breast, the amount is much lower. A 100-g serving of baked chicken breast contains 4 g of fat and 31 g of protein, compared to 10 g of fat and 27 g of protein for the same portion of broiled, lean skirt steak.Nutrition Data – 100 g Chicken Breast; Nutrition Data – 100 g Lean Skirt Steak
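As a rough illustration of that comparison, the per-100 g figures above can be converted to calories using the standard Atwater factors of about 4 kcal per gram of protein and 9 kcal per gram of fat (assumed general nutrition values, not taken from this article); the sketch below only performs that arithmetic.

```python
# Energy from fat and protein in the two 100 g servings quoted above,
# using the standard Atwater factors (~4 kcal/g protein, ~9 kcal/g fat; assumed values).
KCAL_PER_G_PROTEIN = 4
KCAL_PER_G_FAT = 9

servings = {
    "baked chicken breast (100 g)": {"fat_g": 4, "protein_g": 31},
    "broiled lean skirt steak (100 g)": {"fat_g": 10, "protein_g": 27},
}

for name, m in servings.items():
    fat_kcal = m["fat_g"] * KCAL_PER_G_FAT
    protein_kcal = m["protein_g"] * KCAL_PER_G_PROTEIN
    total = fat_kcal + protein_kcal
    print(f"{name}: ~{total} kcal from these macronutrients, "
          f"{100 * fat_kcal / total:.0f}% of which comes from fat")
```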
A 2011 study by the Translational Genomics Research Institute showed that 47% of the meat and poultry sold in United States grocery stores was contaminated with Staphylococcus aureus, and 52% of the bacteria concerned showed resistance to at least three groups of antibiotics. Thorough cooking of the product would kill these bacteria, but a risk of cross-contamination from improper handling of the raw product is still present. Also, some risk is present for consumers of poultry meat and eggs to bacterial infections such as Salmonella and Campylobacter. Poultry products may become contaminated by these bacteria during handling, processing, marketing, or storage, resulting in food-borne illness if the product is improperly cooked or handled.
In general, avian influenza is a disease of birds caused by bird-specific influenza A virus that is not normally transferred to people; however, people in contact with live poultry are at the greatest risk of becoming infected with the virus and this is of particular concern in areas such as Southeast Asia, where the disease is endemic in the wild bird population and domestic poultry can become infected. The virus possibly could mutate to become highly virulent and infectious in humans and cause an influenza pandemic.
Bacteria can be grown in the laboratory on nutrient culture media, but viruses need living cells in which to replicate. Many vaccines to infectious diseases can be grown in fertilised chicken eggs. Millions of eggs are used each year to generate the annual flu vaccine requirements, a complex process that takes about six months after the decision is made as to what strains of virus to include in the new vaccine. A problem with using eggs for this purpose is that people with egg allergies are unable to be immunised, but this disadvantage may be overcome as new techniques for cell-based rather than egg-based culture become available. Cell-based culture will also be useful in a pandemic when it may be difficult to acquire a sufficiently large quantity of suitable sterile, fertile eggs.
References
External links
Information on poultry diseases
PoultryCast podcast
PoultryHub.org - A wiki-based collaborative resource centre where people share information about poultry
The Poultry Guide - A to Z and FAQs
World Poultry.net - World poultry news
Poultry of Ukraine – State Poultry Breeding Station of the National Academy of Agrarian Sciences of Ukraine
Gymnastics
[Image: Daniele Hypólito on the balance beam at the 2007 Pan American Games]
Gymnastics is a sport involving the performance of exercises requiring balance, strength, flexibility, agility, endurance and control. The movements involved in gymnastics contribute to the development of the arms, legs, shoulders, chest and abdominal muscle groups. Alertness, precision, daring, self-confidence and self-discipline are mental traits that can also be developed through gymnastics. Gymnastics evolved from exercises used by the ancient Greeks that included skills for mounting and dismounting a horse, and from circus performance skills.
Most forms of competitive gymnastics events are governed by the Fédération Internationale de Gymnastique (FIG). Each country has its own national governing body affiliated to FIG. Competitive artistic gymnastics is the best known of the gymnastic events. It typically involves the women's events of vault, uneven bars, balance beam and floor exercise. Men's events are floor exercise, pommel horse, still rings, vault, parallel bars and horizontal bar.
Other FIG disciplines include rhythmic gymnastics, trampolining and tumbling, acrobatic gymnastics and aerobic gymnastics. Disciplines not currently recognized by FIG include wheel gymnastics, aesthetic group gymnastics, men's rhythmic gymnastics and TeamGym. Participants can include children as young as 20 months old doing kindergym and children's gymnastics, recreational gymnasts of ages 3 and up, competitive gymnasts at varying levels of skill, and world-class athletes.
Etymology
The word gymnastics derives from the common Greek adjective γυμνός (gymnos), meaning "naked" (γυμνός, Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus project), by way of the related verb γυμνάζω (gymnazo), whose meaning is to "train naked", "train in gymnastic exercise", generally "to train, to exercise" (γυμνάζω, Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus project). The verb had this meaning because athletes in ancient times exercised and competed without clothing. It came into use in the 1570s, from Latin gymnasticus, from Greek gymnastikos "fond of or skilled in bodily exercise," from gymnazein "to exercise or train" (see gymnasium).
History
Gymnastics originated in ancient Greece and was originally intended for military training, where it was used by soldiers to prepare for warfare.
In late eighteenth- and early nineteenth-century Germany, two pioneer physical educators – Johann Friedrich GutsMuths (1759–1839) and Friedrich Ludwig Jahn (1778–1852) – created exercises for boys and young men on apparatus they had designed that ultimately led to what is considered modern gymnastics. Don Francisco Amorós y Ondeano was born on February 19, 1770 in Valencia and died on August 8, 1848 in Paris. He was a Spanish colonel, and the first person to introduce educational gymnastics in France. Jahn promoted the use of parallel bars, rings and high bars in international competition.
[Image: Early 20th-century gymnastics in Stockholm, Sweden]
The Fédération Internationale de Gymnastique (FIG) was founded in Liège in 1881.Artistic Gymnastics History at fig-gymnastics.com By the end of the nineteenth century, men's gymnastics competition was popular enough to be included in the first "modern" Olympic Games in 1896. From then on until the early 1950s, both national and international competitions involved a changing variety of exercises gathered under the rubric, gymnastics, that included for example, synchronized team floor calisthenics, rope climbing, high jumping, running, and horizontal ladder. During the 1920s, women organized and participated in gymnastics events. The first women's Olympic competition was primitive, involving only synchronized calisthenics and track and field. These games were held in 1928, in Amsterdam.
By 1954, Olympic Games apparatus and events for both men and women had been standardized in modern format, and uniform grading structures (including a point system from 1 to 15) had been agreed upon. At this time, Soviet gymnasts astounded the world with highly disciplined and difficult performances, setting a precedent that continues. Television has helped publicize and initiate a modern age of gymnastics. Both men's and women's gymnastics now attract considerable international interest, and excellent gymnasts can be found on every continent.
In 2006, a new points system for artistic gymnastics was put into play. The A score (or D score) is the difficulty score, which as of 2009 is based on the top eight highest-scoring elements in a routine (excluding vault). The B score (or E score) is the score for execution, and is given for how well the skills are performed.
FIG-recognized disciplines
The following disciplines are governed by FIG.
Artistic gymnastics
Artistic Gymnastics is usually divided into Men's and Women's Gymnastics. Men compete on six events: Floor Exercise, Pommel Horse, Still Rings, Vault, Parallel Bars, and Horizontal Bar, while women compete on four: Vault, Uneven Bars, Balance Beam, and Floor Exercise. In some countries, women at one time competed on the rings, high bar, and parallel bars (for example, in the 1950s in the USSR).
In 2006, FIG introduced a new points system for artistic gymnastics in which scores are no longer limited to 10 points. The system is used in the US for elite level competition. Unlike the old code of points, there are two separate scores, an execution score and a difficulty score. In the previous system, the "execution score" was the only score. It was and still is out of 10.00, except for short exercises. During the gymnast's performance, the judges deduct from this score only. A fall, on or off the event, is a 1.00 deduction in elite level gymnastics. The introduction of the difficulty score is a significant change. The gymnast's difficulty score is based on what elements they perform and is subject to change if they do not perform or complete all the skills, or they do not connect a skill meant to be connected to another. Connection bonuses are where deviation most commonly occurs between the intended and actual difficulty scores, as it can be difficult to connect multiple flight elements. It is very hard to connect skills if the first skill is not performed correctly. The new code of points allows the gymnasts to gain higher scores based on the difficulty of the skills they perform as well as their execution. There is no maximum score for difficulty, as it can keep increasing as the difficulty of the skills increases.
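The arithmetic of the open-ended system described above can be sketched as follows. This is only a simplified illustration under the assumptions stated in the comments (an execution score starting from 10.00, a 1.00 deduction for a fall at elite level, and an unbounded difficulty score); the function names and sample values are invented for the example and are not taken from the FIG Code of Points.

```python
# Simplified sketch of the post-2006 open-ended scoring idea:
# final score = D (difficulty) score + E (execution) score.
# Assumptions for this illustration: E starts at 10.00 and loses execution
# deductions, a fall costs 1.00 (elite level), and D has no upper limit.

def execution_score(deductions, falls=0, start_value=10.0):
    """E score: start value minus the judges' deductions and 1.00 per fall."""
    return max(0.0, start_value - sum(deductions) - 1.0 * falls)

def final_score(d_score, deductions, falls=0):
    """Open-ended final score: difficulty plus execution."""
    return d_score + execution_score(deductions, falls)

# Example: a 5.8 difficulty routine with a few small form breaks and one fall.
print(round(final_score(5.8, deductions=[0.1, 0.3, 0.1], falls=1), 2))  # 14.3
```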
Artistic events for women
[Image: Piked Tsukahara vault]
Vault
In the vaulting events, gymnasts sprint down a runway, jump onto a springboard (or perform a roundoff or handspring entry onto a springboard), land momentarily inverted on the hands on the vaulting horse or vaulting table (pre-flight segment), then propel themselves forward or backward off that platform to a two-footed landing (post-flight segment). Every gymnast starts at a different point on the vault runway depending on their height and strength. The post-flight segment may include one or more multiple saltos, somersaults, or twisting movements. A round-off entry vault, called a Yurchenko, is the most common vault in the higher levels in gymnastics. When performing a Yurchenko, gymnasts "round off" so their hands are on the runway while their feet land on the springboard. From the roundoff position, the gymnast travels backwards and executes a back handspring so that the hands land on the vaulting table. The gymnast then blocks off the vaulting platform into various twisting and/or somersaulting combinations. The post-flight segment brings the gymnast to her feet. In the lower levels of gymnastics, the gymnasts do not perform this move. These gymnasts will jump onto the springboard with both feet at the same time and either do a front handspring onto the vault or a roundoff onto the vault.
In 2001, the traditional vaulting horse was replaced with a new apparatus, sometimes known as a tongue, horse or vaulting table. The new apparatus is more stable, wider, and longer than the older vaulting horse, approximately 1 m in length and 1 m in width, giving gymnasts a larger blocking surface. This apparatus is thus considered safer than the vaulting horse used in the past. With the addition of this new, safer vaulting table, gymnasts are attempting more difficult and dangerous vaults.
[Image: Gymnast on uneven bars]
Uneven bars
On the uneven bars, the gymnast performs a timed routine on two horizontal bars set at different heights. These bars are made of fiberglass covered in wood laminate, to prevent them from breaking. In the past, bars were made of wood, but the bars were prone to breaking, providing an incentive to switch to newer technologies. The width and height of the bars may be adjusted to the size needed by individual gymnasts. In the past, the uneven parallel bars were closer together. The bars have been moved increasingly further apart, allowing gymnasts to perform swinging, circling, transitional, and release moves that may pass over, under, and between the two bars. At the Elite level, movements must pass through the handstand. Gymnasts often mount the uneven bars using a springboard or a small mat. Chalk and grips (a leather strip with holes for fingers to protect hands and improve performance) may be used by gymnasts performing this event. The chalk helps take the moisture out of gymnasts' hands to decrease friction and prevent rips (tears to the skin of the hands); dowel grips help gymnasts grip the bar.
Balance beam
thumb|Dorina Böczögő performing a one arm press hold during her balance beam mount, 2013.
The gymnast performs a choreographed routine of up to 90 seconds in length consisting of leaps, acrobatic skills, somersaults, turns and dance elements on a padded beam. The beam is 1.25 m from the ground, 5 m long, and 10 cm wide. This stationary apparatus can also be adjusted to be raised higher or lower. The event requires balance, flexibility, grace, poise, and strength.
Floor
thumb|upright|Gymnast doing a stag leap on floor exercise.
In the past, the floor exercise event was executed on the bare floor or mats such as wrestling mats. The floor event now occurs on a carpeted 12m × 12m square, usually consisting of hard foam over a layer of plywood, which is supported by springs and generally called a "spring" floor. This provides a firm surface that gives extra bounce when compressed, allowing gymnasts to achieve greater height and a softer landing after a skill. Gymnasts perform a choreographed routine of up to 90 seconds in the floor exercise event. Depending on the level, they may choose their own music or, if competing as a "compulsory gymnast," must use a set piece of music; in levels three to six the music, like the skills within the routine, is the same for every gymnast at that level. Recently, however, the level structure has changed, and levels 6–10 are now optional levels with custom-made routines. In the optional levels (levels six to ten) there are skill requirements for the routine, but the athlete is able to pick her own music, provided it has no words. The routine should consist of tumbling passes, series of jumps, leaps, dance elements, acrobatic skills, and turns, or pivots, on one foot. A gymnast can perform up to four tumbling passes, usually including at least one flight element without hand support. Each level of gymnastics requires the athlete to perform a different number of tumbling passes. In level 7 in the United States, a gymnast is required to do 2–3 tumbling passes, and in levels 8–10, at least 3–4 are required.
Artistic events for men
Floor
Male gymnasts also perform on a 12m × 12m spring floor. A series of tumbling passes is performed to demonstrate flexibility, strength, and balance. Strength skills include circles, scales, and press handstands. Men's floor routines usually have multiple passes that must total between 60 and 70 seconds and are performed without music, unlike the women's event. Rules require that male gymnasts touch each corner of the floor at least once during their routine.
thumb|Chris Cameron on the pommel horse
Pommel horse
A typical pommel horse exercise involves both single leg and double leg work. Single leg skills are generally found in the form of scissors, an element often done on the pommels. Double leg work, however, is the main staple of this event. The gymnast swings both legs in a circular motion (clockwise or counterclockwise depending on preference) and performs such skills on all parts of the apparatus. To make the exercise more challenging, gymnasts will often include variations on a typical circling skill by turning (moores and spindles) or by straddling their legs (flares). Routines end when the gymnast performs a dismount, either by swinging his body over the horse, or landing after a handstand variation.
Still rings
The rings are suspended on wire cable from a point 5.75 meters from the floor. The gymnasts must perform a routine demonstrating balance, strength, power, and dynamic motion while preventing the rings themselves from swinging. At least one static strength move is required, but some gymnasts may include two or three. A routine ends with a dismount.
Vault
Gymnasts sprint down a runway, which is a maximum of 25 meters in length, before hurdling onto a springboard. The gymnast is allowed to choose where they start on the runway. The body position is maintained while "punching" (blocking using only a shoulder movement) the vaulting platform. The gymnast then rotates to a standing position. In advanced gymnastics, multiple twists and somersaults may be added before landing. Successful vaults depend on the speed of the run, the length of the hurdle, the power the gymnast generates from the legs and shoulder girdle, the kinesthetic awareness in the air, how well they stick the landing and the speed of rotation in the case of more difficult and complex vaults.
Parallel bars
Men perform on two bars executing a series of swings, balances, and releases that require great strength and coordination. The width between the bars is adjustable to suit the needs of individual gymnasts, and the bars are usually 2 m high.
Horizontal bar
A 2.8 cm thick steel or fiberglass bar raised 2.5 m above the landing area is all the gymnast has to hold onto as he performs giant swings or giants (forward or backward revolutions around the bar in the handstand position), release skills, twists, and changes of direction. By using all of the momentum from giants and then releasing at the proper point, enough height can be achieved for spectacular dismounts, such as a triple-back salto. Leather grips are usually used to help maintain a grip on the bar.
As with women, male gymnasts are also judged on all of their events including their execution, degree of difficulty, and overall presentation skills.
Scoring (code of points)
A gymnast's score comes from deductions taken from their start value. The start value of a routine is based on the difficulty of the elements the gymnast attempts and whether or not the gymnast meets composition requirements. The composition requirements are different for each apparatus; this score is called the D score. Deductions in execution and artistry are taken from a maximum of 10.0. This score is called the E score. The final score is calculated by taking deductions from the E score, and adding the result to the D score.
Since 2007, the final score has therefore been calculated by adding the difficulty score, including any bonuses, to the execution score.
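To make the arithmetic concrete, the following is a minimal sketch of how such a two-part score combines; the difficulty value and deduction amounts are hypothetical illustrations, not figures taken from any official Code of Points table.

```python
# Minimal sketch of the open-ended scoring described above.
# All numbers below are hypothetical examples, not official values.

def final_score(difficulty_score, execution_deductions, neutral_deductions=0.0):
    """Add an uncapped D score to an E score taken out of 10.00 after deductions."""
    execution_score = max(0.0, 10.0 - sum(execution_deductions))
    return difficulty_score + execution_score - neutral_deductions

# Example: a routine credited with 5.8 in difficulty, with a 1.00 fall
# and two smaller form breaks deducted from execution.
print(final_score(5.8, [1.0, 0.3, 0.1]))  # 5.8 + 8.6 = 14.4
```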
Landing
In a tumbling pass, dismount or vault, landing is the final phase, following take-off and flight.Marinsek, M. (2010). Basic landing, 59–67. This is a critical skill in terms of execution in competition scores, general performance, and injury occurrence. Without the necessary magnitude of energy dissipation during impact, the risk of sustaining injuries during somersaulting increases. These injuries commonly occur at the lower extremities and include cartilage lesions, ligament tears, and bone bruises/fractures.Yeow, C., Lee, P., & Goh, J. (2009). Effect of landing height on frontal plane kinematics, kinetics and energy dissipation at lower extremity joints. Journal of Biomechanics, 1967–1973. To avoid such injuries, and to receive a high performance score, proper technique must be used by the gymnast. "The subsequent ground contact or impact landing phase must be achieved using a safe, aesthetic and well-executed double foot landing."Gittoes, M. J., & Irwin, G. (2012). Biomechanical approaches to understanding the potentially injurious demands of gymnastic-style impact landings. Sports Medicine, Arthroscopy, Rehabilitation, Therapy & Technology, 1–9. A successful landing in gymnastics is classified as soft, meaning the knee and hip joints are at greater than 63 degrees of flexion.
A higher flight phase results in a higher vertical ground reaction force. Vertical ground reaction force represents the external force which the gymnasts have to overcome with their muscle force and affects the gymnasts' linear and angular momentum. Another important variable that affects linear and angular momentum is the time the landing takes. Gymnasts can decrease the impact force by increasing the time taken to perform the landing. Gymnasts can achieve this by increasing hip, knee and ankle amplitude.
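The link between landing time and impact force follows from the impulse–momentum principle. The short sketch below uses hypothetical values for mass, landing speed and contact time, and ignores body weight and joint-level mechanics, purely to illustrate that lengthening the landing lowers the average ground reaction force.

```python
# Illustrative impulse-momentum calculation: the same change in momentum
# spread over a longer landing time gives a smaller average force.
# Mass, landing speed and contact times are hypothetical example values;
# body weight and joint-level mechanics are ignored for simplicity.

mass = 50.0          # gymnast mass in kg
landing_speed = 7.0  # downward speed at ground contact in m/s

for contact_time in (0.05, 0.20):  # a "stiff" versus a "soft" landing, in seconds
    average_force = mass * landing_speed / contact_time  # F = m * dv / dt, in newtons
    print(f"contact time {contact_time:.2f} s -> average force ~{average_force:,.0f} N")
```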
Rhythmic gymnastics
thumb|Russian rhythmic gymnast Irina Tchachina stretching in her warm-up before practice.
According to FIG rules, only women compete in rhythmic gymnastics. This is a sport that combines elements of ballet, gymnastics, dance, and apparatus manipulation. The sport involves the performance of five separate routines with the use of five apparatus (ball, ribbon, hoop, clubs and rope) on a floor area, with a much greater emphasis on the aesthetic than the acrobatic. There are also group routines consisting of 5 gymnasts and 5 apparatuses of their choice. Rhythmic routines are scored out of a possible 30 points; the score for artistry (choreography and music) is averaged with the score for difficulty of the moves and then added to the score for execution.Fédération Internationale de Gymnastique, Code of Points – Rhythmic Gymnastics 2009–2012
International competitions are split between Juniors, for those under sixteen by their year of birth, and Seniors, for women sixteen and over, again by their year of birth. Gymnasts in Russia and Europe typically start training at a very young age and those at their peak are typically in their late teens (15–19) or early twenties. The largest events in the sport are the Olympic Games, World Championships, European Championships, World Cup and Grand-Prix Series.
Rhythmic gymnastics apparatus
upright|thumb|Evgenia Kanaeva doing a Split leap in her hoop routine
thumb|upright|Soviet Galina Shugurova performing an Attitude balance in her ball apparatus
Ball It is made of either rubber or synthetic material (pliable plastic) provided it possesses the same elasticity as rubber. It is from 18 to 20 cm in diameter and must have a minimum weight of 400g. The ball can be of any colour and should rest in the gymnast's hand, not the wrist. Fundamental elements of a ball routine include throwing, bouncing, and rolling. The gymnast must use both hands and work on the whole floor area while showing continuous flowing movement. The ball is to emphasize the gymnast's flowing lines and body difficulty.
Hoop A hoop is an apparatus in rhythmic gymnastics and may be made of plastic or wood, provided that it retains its shape during the routine. The interior diameter is from 51 to 90 cm, and the hoop must weigh a minimum of 300g. The hoop may be of a natural colour or be partially or fully covered by one or several colours, and it may be covered with adhesive tape either of the same or a different colour to the hoop. Fundamental requirements of a hoop routine include rotation around the hand or body and rolling, as well as swings, circles, throws, and passes through and over the hoop. Routines with the hoop involve mastery of both apparatus handling and body difficulty elements such as leaps, jumps and pivots.
Ribbon It is made of satin or another similar material cloth of any colour and may be multi-coloured as well as have designs on it. The ribbon itself must be at least 35g (1 oz), 4–6 cm (1.6–2.4") in width and for senior category a minimum length of 6m (20') (5m (16.25') for juniors). The ribbon must be in one piece. The end that is attached to the stick is doubled for a maximum length of 1m (3'). This is stitched down both sides. At the top, a very thin reinforcement or rows of machine stitching for a maximum length of 5 cm is authorized. This extremity may end in a strap, or have an eyelet (a small hole, edged with buttonhole stitch or a metal circle), to permit attaching the ribbon. The ribbon is fixed to the stick by means of a supple attachment such as thread, nylon cord, or a series of articulated rings. The attachment has a maximum length of 7 cm (2.8"), not counting the strap or metal ring at the end of the stick where it will be fastened. Compulsory elements for the ribbon include flicks, circles, snakes and spirals, and throws. It requires a high degree of co-ordination to form the spirals and circles as any knots which may accidentally form in the ribbon are penalised. During a ribbon routine, large, smooth and flowing movements are looked for.
Clubs Multi-piece clubs are the most popular clubs. The club is built along an internal rod, providing a base on which a handle made of polyolefin plastic is wrapped, providing an airspace between it and the internal rod. This airspace provides flex, cushioning impact, making the club softer on the hands. Foam ends and knobs further cushion the club. Multi-piece clubs are made in both a thin European style and a larger-bodied American style, and in various lengths, generally ranging from 19 to 21 inches (480 to 530 mm). The handles and bodies are typically wrapped with decorative plastics and tapes. The skills involved are apparatus mastery and body elements. Clubs are thrown from alternate hands; each passes underneath the other clubs and is caught in the opposite hand to the one from which it was thrown. At its simplest, each club rotates once per throw, the handle moving down and away from the throwing hand at first. However, double and triple spins are frequently performed, allowing the club to be thrown higher for more advanced patterns and to allow tricks such as 360s to be performed underneath.
Trampolining and tumbling
thumb|upright|Double mini-trampoline competitor
Trampolining
Trampolining and tumbling consists of four events: individual and synchronized trampoline, double mini trampoline, and tumbling (also known as power tumbling or rod floor). Since 2000, individual trampoline has been included in the Olympic Games.
Individual trampoline
Individual routines in trampolining involve a build-up phase during which the gymnast jumps repeatedly to achieve height, followed by a sequence of ten bounces without pause during which the gymnast performs a sequence of aerial skills. Routines are marked out of a maximum score of 10 points. Additional points (with no maximum at the highest levels of competition) can be earned depending on the difficulty of the moves and the length of time taken to complete the ten skills, which is an indication of the average height of the jumps. In high-level competitions, there are two preliminary routines: one in which only two moves are scored for difficulty and one in which the athlete is free to perform any routine. This is followed by a final routine, which is optional. Some competitions restart the score from zero for the finals; others add the final score to the preliminary results.
Synchronized trampoline
Synchronized trampoline is similar except that both competitors must perform the routine together and marks are awarded for synchronization as well as the form and difficulty of the moves.
Double-mini trampoline
Double mini trampoline involves a smaller trampoline with a run-up; two scoring moves are performed per routine. Moves cannot be repeated in the same order on the double-mini during a competition. Skills can be repeated if a skill is competed as a mounter in one routine and as a dismount in another. The scores are marked in a similar manner to individual trampoline.
Tumbling
In tumbling, athletes perform an explosive series of flips and twists down a sprung tumbling track. Scoring is similar to trampolining. Tumbling was originally contested as one of the events in Men's Artistic Gymnastics at the 1932 Summer Olympics, and in 1955 and 1959 at the Pan American Games. From 1974 to 1998 it was included as an event for both genders at the Acrobatic Gymnastics World Championships. The event has also been contested since 1976 at the Trampoline World Championships. Since the recognition of Trampoline and Acrobatic Gymnastics as FIG disciplines in 1999, official Tumbling competitions are only allowed as an event in Trampoline gymnastics meets.
Acrobatic gymnastics
thumb|upright|Acrobatic Women's Pair performing a skill.
Acrobatic gymnastics (formerly Sport Acrobatics), often referred to as "Acro" by those involved in the sport, acrobatic sports or simply sports acro, is a group gymnastic discipline for both men and women. Acrobats in groups of two, three and four perform routines with the heads, hands and feet of their partners. They may, subject to regulations (e.g. no lyrics), pick their own music.
There are four international age categories: 11-16, 12-18, 13-19, and Senior (15+), which are used in the World Championships and many other events around the world, including the European Championships and World Games. All levels require a balance and a dynamic routine; the 12-18, 13-19, and Senior categories are also required to perform a final (combined) routine.
Currently, acrobatic gymnastics is marked out of 30.00 (can be higher at Senior FIG level based on difficulty):
10.00 for routine difficulty (valued from the tables of difficulties)
10.00 for technical performance (how well the skills are executed)
10.00 for artistry (the overall performance of the routine, namely choreography)
Aerobic gymnastics
Aerobic gymnastics (formerly Sport Aerobics) involves the performance of routines by individuals, pairs, trios or groups of up to six people, emphasizing strength, flexibility, and aerobic fitness rather than acrobatic or balance skills. Routines are performed on a 7x7m floor by all individuals, and also by 12–14 and 15–17 age-group trios and mixed pairs. From 2009, all senior trios and mixed pairs were required to perform on the larger floor (10x10m); all groups also perform on this floor. Routines generally last 60–90 seconds depending on the age of the participant and the routine category.
Other disciplines
The following disciplines are not currently recognized by the Fédération Internationale de Gymnastique.
Aesthetic group gymnastics
Aesthetic Group Gymnastics (AGG) was developed from the Finnish "naisvoimistelu". It differs from Rhythmic Gymnastics in that body movement is large and continuous and teams are larger. Athletes do not use apparatus in international AGG competitions compared to Rhythmic Gymnastics where ball, ribbon, hoop and clubs are used on the floor area. The sport requires physical qualities such as flexibility, balance, speed, strength, coordination and sense of rhythm where movements of the body are emphasized through the flow, expression and aesthetic appeal. A good performance is characterized by uniformity and simultaneity. The competition program consists of versatile and varied body movements, such as body waves, swings, balances, pivots, jumps and leaps, dance steps, and lifts. The International Federation of Aesthetic Group Gymnastics (IFAGG) was established in 2003.Lajiesittely, Suomen Voimisteluliitto.
Men's rhythmic gymnastics
Men's rhythmic gymnastics is related to both men's artistic gymnastics and wushu martial arts. It emerged in Japan from stick gymnastics. Stick gymnastics has been taught and performed for many years with the aim of improving physical strength and health. Male athletes are judged on some of the same physical abilities and skills as their female counterparts, such as hand/body-eye co-ordination, but tumbling, strength, power, and martial arts skills are the main focus, as opposed to flexibility and dance in women's rhythmic gymnastics. There is a growing number of participants, competing alone and on teams; it is most popular in Asia, especially in Japan where high school and university teams compete fiercely. At one point there were 1,000 men's rhythmic gymnasts in Japan.
The technical rules for the Japanese version of men's rhythmic gymnastics date from around the 1970s. For individuals, only four types of apparatus are used: the double rings, the stick, the rope, and the clubs. Groups do not use any apparatus. The Japanese version includes tumbling performed on a spring floor. Points are awarded based on a 10-point scale that measures the level of difficulty of the tumbling and apparatus handling. On November 27–29, 2003, Japan hosted the first edition of the Men's Rhythmic Gymnastics World Championship.
TeamGym
TeamGym is a form of competition created by the European Union of Gymnastics, named originally EuroTeam. The first official competition was held in Finland in 1996. TeamGym events consist of three sections: women, men and mixed teams. Athletes compete in three different disciplines: floor, tumbling and trampette. Common to all performances are effective teamwork, good technique in the elements and spectacular acrobatic skills.TeamGym, British Gymnastics
Wheel Gymnastics
Wheel gymnasts do exercises in a large wheel known as the Rhönrad, gymnastics wheel, gym wheel, or German wheel, in the beginning also known as ayro wheel, aero wheel, and Rhon rod.
There are three core categories of exercise: straight line, spiral, and vault.
Mallakhamba
Mallakhamba (Marathi: मल्लखांब) is a traditional Indian sport in which a gymnast performs feats and poses in concert with a vertical wooden pole or rope. The word also refers to the pole used in the sport.
Mallakhamba derives from the terms malla which denotes a wrestler and khamba which means a pole. Mallakhamba can therefore be translated to English as "pole gymnastics". On April 9, 2013, the Indian state of Madhya Pradesh declared mallakhamba as the state sport.
Non-competitive gymnastics
General gymnastics enables people of all ages and abilities to participate in performance groups of 6 to more than 150 athletes. They perform synchronized, choreographed routines. Troupes may consist of both genders and are not separated into age divisions. The largest general gymnastics exhibition is the quadrennial World Gymnaestrada, which was first held in 1939. In 1984 Gymnastics for All was officially recognized, first as a Sport Program by the FIG (International Gymnastics Federation), and subsequently by national gymnastics federations worldwide, with participants that now number 30 million.
Former apparatus and events
Rope (rhythmic gymnastics)
This apparatus may be made of hemp or a synthetic material which retains the qualities of lightness and suppleness. Its length is in proportion to the size of the gymnast. The rope should, when held down by the feet, reach both of the gymnast's armpits. One or two knots at each end are for keeping hold of the rope while doing the routine. At the ends (to the exclusion of all other parts of the rope) an anti-slip material, either coloured or neutral, may cover a maximum of 10 cm (3.94 in). The rope must be coloured, either all or partially, and may either be of a uniform diameter or be progressively thicker in the center provided that this thickening is of the same material as the rope. The fundamental requirements of a rope routine include leaps and skipping. Other elements include swings, throws, circles, rotations and figures of eight. In 2011, the FIG decided to nullify the use of rope in rhythmic gymnastics competitions.
Rope climbing
Generally, competitors climbed either a 6m (6.1m = 20 ft in US) or an 8m (7.6m = 25 ft in US), 38 mm diameter (1.5-inch) natural fiber rope for speed, starting from a seated position on the floor and using only the hands and arms. Kicking the legs in a kind of "stride" was normally permitted. Many gymnasts can do this in the straddle or pike position, which eliminates the help generated from the legs though it can be done with legs as well.
Flying rings
Flying rings was an event similar to still rings, but with the performer executing a series of stunts while swinging. It was a gymnastic event sanctioned by both the NCAA and the AAU until the early 1960s.
Club Swinging
Club Swinging, also known as Indian Clubs, was at times an event in Men's Artistic Gymnastics up until the 1950s. It was similar to the clubs used in both Women's and Men's Rhythmic Gymnastics, but much simpler, with few throws allowed. It was contested at the 1904 and 1932 Summer Olympic Games.
Other (Men's Artistic)
Team Horizontal Bar and Parallel Bar in the 1896 Summer Olympics
Team Free and Swedish System in the 1912 and 1920 Summer Olympics
Combined and Triathlon in the 1904 Summer Olympics
Side Horse Vault in 1924 Summer Olympics
Tumbling in the 1932 Summer Olympics
Other (Women's Artistic)
Team Exercise at the 1928, 1936 and 1948 Summer Olympics
Parallel Bars at the 1938 World Championships
Team Portable Apparatus at the 1952 and 1956 Summer Olympics.
Popular culture
Books
Little Girls in Pretty Boxes
The Spirit of Gymnastics: The Biography of Hartley D'Oyley Price, by Tom Conkling (1982)
Films
A Second Chance
A State of Mind
American Anthem
Billy Elliot
Flying
Gymkata
Little Girls in Pretty Boxes
Nadia
Peaceful Warrior
Perfect Body
Stick It
The Gabby Douglas Story
The Gymnast (Dreya Weber film)
Television
Make It or Break It
Video games
Athens 2004
Barbie Team Gymnastics
Beijing 2008
Capcom's Gold Medal Challenge '92
Dance Aerobics
Ener-G Gym Rockets
Imagine: Gymnast
London 2012
Mario & Sonic at the Olympic Games
Mario & Sonic at the London 2012 Olympic Games
Mario & Sonic at the Rio 2016 Olympic Games
Shawn Johnson Gymnastics
Summer Games
See also
Acrobatics
Acro dance
Cheerleading
Gymnasium (ancient Greece)
International Gymnastics Hall of Fame
List of gymnastics competitions
List of gymnastics terms
List of gymnasts
Major achievements in gymnastics by nation
Majorettes
NCAA Men's Gymnastics championship (US)
NCAA Women's Gymnastics championship (US)
Turners
Wheel gymnastics
World Gymnastics Championships
References
External links
International Federation of Gymnastics (FIG) official website
International Federation of Aesthetic Group Gymnastics official website
USA Gymnastics, the governing body for gymnastics in the US
British Gymnastics, the governing body for gymnastics in the UK
Brazilian Gymnastics, the governing body for gymnastics in Brazil
Category:Summer Olympic sports
Category:Individual sports
Category:Sports rules and regulations
Category:Acrobatic sports
12,551
2017-01
John, King of England
John (24 December 1166 – 19 October 1216), also known as John Lackland (Norman French: Johan sanz Terre),Norgate (1902), pp. 1–2. was King of England from 6 April 1199 until his death in 1216. John lost the Duchy of Normandy to King Philip II of France, resulting in the collapse of most of the Angevin Empire and contributing to the subsequent growth in power of the Capetian dynasty during the 13th century. The baronial revolt at the end of John's reign led to the sealing of Magna Carta, a document sometimes considered to be an early step in the evolution of the constitution of the United Kingdom.
John, the youngest of five sons of King Henry II of England and Eleanor of Aquitaine, was at first not expected to inherit significant lands. Following the failed rebellion of his elder brothers between 1173 and 1174, however, John became Henry's favourite child. He was appointed the Lord of Ireland in 1177 and given lands in England and on the continent. John's elder brothers William, Henry and Geoffrey died young; by the time Richard I became king in 1189, John was a potential heir to the throne. John unsuccessfully attempted a rebellion against Richard's royal administrators whilst his brother was participating in the Third Crusade. Despite this, after Richard died in 1199, John was proclaimed King of England, and came to an agreement with Philip II of France to recognise John's possession of the continental Angevin lands at the peace treaty of Le Goulet in 1200.
When war with France broke out again in 1202, John achieved early victories, but shortages of military resources and his treatment of Norman, Breton and Anjou nobles resulted in the collapse of his empire in northern France in 1204. John spent much of the next decade attempting to regain these lands, raising huge revenues, reforming his armed forces and rebuilding continental alliances. John's judicial reforms had a lasting impact on the English common law system, as well as providing an additional source of revenue. An argument with Pope Innocent III led to John's excommunication in 1209, a dispute finally settled by the king in 1213. John's attempt to defeat Philip in 1214 failed due to the French victory over John's allies at the battle of Bouvines. When he returned to England, John faced a rebellion by many of his barons, who were unhappy with his fiscal policies and his treatment of many of England's most powerful nobles. Although both John and the barons agreed to the Magna Carta peace treaty in 1215, neither side complied with its conditions. Civil war broke out shortly afterwards, with the barons aided by Louis of France. It soon descended into a stalemate. John died of dysentery contracted whilst on campaign in eastern England during late 1216; supporters of his son Henry III went on to achieve victory over Louis and the rebel barons the following year.
Contemporary chroniclers were mostly critical of John's performance as king, and his reign has since been the subject of significant debate and periodic revision by historians from the 16th century onwards. Historian Jim Bradbury has summarised the current historical opinion of John's positive qualities, observing that John is today usually considered a "hard-working administrator, an able man, an able general".Bradbury (2007), p.353. Nonetheless, modern historians agree that he also had many faults as king, including what historian Ralph Turner describes as "distasteful, even dangerous personality traits", such as pettiness, spitefulness and cruelty.Turner, p.23. These negative qualities provided extensive material for fiction writers in the Victorian era, and John remains a recurring character within Western popular culture, primarily as a villain in films and stories depicting the Robin Hood legends.
Early life (1166–89)
Childhood and the Angevin inheritance
thumb|300px|alt=A coloured map of medieval France, showing the Angevin territories in the west, the royal French territories in the east, and the Duchy of Toulouse in the south.|The Angevin continental empire (orange shades) in the late 12th century
John was born to Henry II of England and Eleanor of Aquitaine on 24 December 1166.Fryde, Greenway, Porter and Roy, p.37. Henry had inherited significant territories along the Atlantic seaboard—Anjou, Normandy and England—and expanded his empire by conquering Brittany.Warren, p.21. Henry married the powerful Eleanor of Aquitaine, who reigned over the Duchy of Aquitaine and had a tenuous claim to Toulouse and Auvergne in southern France, in addition to being the former wife of Louis VII of France. The result was the Angevin Empire, named after Henry's paternal title as Count of Anjou and, more specifically, its seat in Angers. The Empire, however, was inherently fragile: although all the lands owed allegiance to Henry, the disparate parts each had their own histories, traditions and governance structures.Barlow, p.275; Warren, p.23. As one moved south through Anjou and Aquitaine, the extent of Henry's power in the provinces diminished considerably, scarcely resembling the modern concept of an empire at all. Some of the traditional ties between parts of the empire such as Normandy and England were slowly dissolving over time.Barlow, p.284. It was unclear what would happen to the empire on Henry's death. Although the custom of primogeniture, under which an eldest son would inherit all his father's lands, was slowly becoming more widespread across Europe, it was less popular amongst the Norman kings of England.Barlow, p.305. Most believed that Henry would divide the empire, giving each son a substantial portion, and hoping that his children would continue to work together as allies after his death.Warren, p.27. To complicate matters, much of the Angevin empire was held by Henry only as a vassal of the King of France of the rival line of the House of Capet. Henry had often allied himself with the Holy Roman Emperor against France, making the feudal relationship even more challenging.Barlow, p.281.
Shortly after his birth, John was passed from Eleanor into the care of a wet nurse, a traditional practice for medieval noble families.Turner, p.31. Eleanor then left for Poitiers, the capital of Aquitaine, and sent John and his sister Joan north to Fontevrault Abbey.Warren, p.26. This may have been done with the aim of steering her youngest son, with no obvious inheritance, towards a future ecclesiastical career. Eleanor spent the next few years conspiring against her husband Henry and neither parent played a part in John's very early life. John was probably, like his brothers, assigned a magister whilst he was at Fontevrault, a teacher charged with his early education and with managing the servants of his immediate household; John was later taught by Ranulph Glanville, a leading English administrator.Turner, p.31; Warren, p.26. John spent some time as a member of the household of his eldest living brother Henry the Young King, where he probably received instruction in hunting and military skills.
John grew up to be around 5 ft 5 in (1.68 m) tall, relatively short, with a "powerful, barrel-chested body" and dark red hair; he looked to contemporaries like an inhabitant of Poitou.McLynn, pp.27, 77. John enjoyed reading and, unusually for the period, built up a travelling library of books.Warren, p.140. He enjoyed gambling, in particular at backgammon, and was an enthusiastic hunter, even by medieval standards.Warren, pp.139–40; McLynn, p.78 He liked music, although not songs.McLynn, p.78. John would become a "connoisseur of jewels", building up a large collection, and became famous for his opulent clothes and also, according to French chroniclers, for his fondness for bad wine.Warren, p.139; McLynn, p.78; Danziger and Gillingham, p.26. As John grew up, he became known for sometimes being "genial, witty, generous and hospitable"; at other moments, he could be jealous, over-sensitive and prone to fits of rage, "biting and gnawing his fingers" in anger.McLynn, p.78, 94; Turner, p.30.
Early life
thumb|alt=An illuminated manuscript, showing Henry and Aquitaine sat on thrones, accompanied by two staff. Two elaborate birds form a canopy over the pair of rulers.|John's parents, Henry II and Eleanor, holding court
During John's early years, Henry attempted to resolve the question of his succession. Henry the Young King had been crowned King of England in 1170, but was not given any formal powers by his father; he was also promised Normandy and Anjou as part of his future inheritance. Richard was to be appointed the Count of Poitou with control of Aquitaine, whilst Geoffrey was to become the Duke of Brittany.Carpenter (2004), p.223; Turner, p.35. At this time it seemed unlikely that John would ever inherit substantial lands, and he was jokingly nicknamed "Lackland" by his father.McLynn, p.36.
Henry II wanted to secure the southern borders of Aquitaine and decided to betroth his youngest son to Alais, the daughter and heiress of Humbert III of Savoy.Turner, p.36. As part of this agreement John was promised the future inheritance of Savoy, Piedmont, Maurienne, and the other possessions of Count Humbert. For his part in the potential marriage alliance, Henry II transferred the castles of Chinon, Loudun and Mirebeau into John's name; as John was only five years old his father would continue to control them for practical purposes. Henry the Young King was unimpressed by this; although he had yet to be granted control of any castles in his new kingdom, these were effectively his future property and had been given away without consultation. Alais made the trip over the Alps and joined Henry II's court, but she died before marrying John, which left the prince once again without an inheritance.
In 1173 John's elder brothers, backed by Eleanor, rose in revolt against Henry in the short-lived rebellion of 1173 to 1174. Growing irritated with his subordinate position to Henry II and increasingly worried that John might be given additional lands and castles at his expense, Henry the Young King travelled to Paris and allied himself with Louis VII.Carpenter (2004), p.223. Eleanor, irritated by her husband's persistent interference in Aquitaine, encouraged Richard and Geoffrey to join their brother Henry in Paris. Henry II triumphed over the coalition of his sons, but was generous to them in the peace settlement agreed at Montlouis. Henry the Young King was allowed to travel widely in Europe with his own household of knights, Richard was given Aquitaine back, and Geoffrey was allowed to return to Brittany; only Eleanor was imprisoned for her role in the revolt.Carpenter (2004), p.243.
John had spent the conflict travelling alongside his father, and was given widespread possessions across the Angevin empire as part of the Montlouis settlement; from then onwards, most observers regarded John as Henry II's favourite child, although he was the furthest removed in terms of the royal succession. Henry II began to find more lands for John, mostly at various nobles' expense. In 1175 he appropriated the estates of the late Earl of Cornwall and gave them to John. The following year, Henry disinherited the sisters of Isabelle of Gloucester, contrary to legal custom, and betrothed John to the now extremely wealthy Isabelle.Turner, p.37. In 1177, at the Council of Oxford, Henry dismissed William FitzAldelm as the Lord of Ireland and replaced him with the ten-year-old John.
thumb|left|350px|alt=An illuminated diagram showing Henry II and the heads of his children; coloured lines connect the two to show the lineal descent|13th-century depiction of Henry II and John's siblings, left to right: William, Henry, Richard, Matilda, Geoffrey, Eleanor, Joan and John
Henry the Young King fought a short war with his brother Richard in 1183 over the status of England, Normandy and Aquitaine. Henry II moved in support of Richard, and Henry the Young King died from dysentery at the end of the campaign. With his primary heir dead, Henry rearranged the plans for the succession: Richard was to be made King of England, albeit without any actual power until the death of his father; Geoffrey would retain Brittany; and John would now become the Duke of Aquitaine in place of Richard. Richard refused to give up Aquitaine; Henry II was furious and ordered John, with help from Geoffrey, to march south and retake the duchy by force. The two attacked the capital of Poitiers, and Richard responded by attacking Brittany. The war ended in stalemate and a tense family reconciliation in England at the end of 1184.
In 1185 John made his first visit to Ireland, accompanied by 300 knights and a team of administrators.Warren, p.35. Henry had tried to have John officially proclaimed King of Ireland, but Pope Lucius III would not agree. John's first period of rule in Ireland was not a success. Ireland had only recently been conquered by Anglo-Norman forces, and tensions were still rife between Henry II, the new settlers and the existing inhabitants.Warren, p.36. John infamously offended the local Irish rulers by making fun of their unfashionable long beards, failed to make allies amongst the Anglo-Norman settlers, began to lose ground militarily against the Irish and finally returned to England later in the year, blaming the viceroy, Hugh de Lacy, for the fiasco.
The problems amongst John's wider family continued to grow. His elder brother Geoffrey died during a tournament in 1186, leaving a posthumous son, Arthur, and an elder daughter, Eleanor.Warren, p.37. Geoffrey's death brought John slightly closer to the throne of England. The uncertainty about what would happen after Henry's death continued to grow; Richard was keen to join a new crusade and remained concerned that whilst he was away Henry would appoint John his formal successor.Turner, p.39; Warren, p.38.
Richard began discussions about a potential alliance with Philip II in Paris during 1187, and the next year Richard gave homage to Philip in exchange for support for a war against Henry.Turner, p.38. Richard and Philip fought a joint campaign against Henry, and by the summer of 1189 the king made peace, promising Richard the succession.Warren, p.38. John initially remained loyal to his father, but changed sides once it appeared that Richard would win. Henry died shortly afterwards.
Richard's reign (1189–99)
thumb|alt=A picture showing King Richard sat beside King Philip II, the latter is receiving a key from two Arabs; a castle, presumably Acre, can be seen in the top right of the picture.|Richard (l) and Philip II at Acre during the Third Crusade
When John's elder brother Richard became king in September 1189, he had already declared his intention of joining the Third Crusade. Richard set about raising the huge sums of money required for this expedition through the sale of lands, titles and appointments, and attempted to ensure that he would not face a revolt while away from his empire.Warren, pp.38–9. John was made Count of Mortain, was married to the wealthy Isabel of Gloucester, and was given valuable lands in Lancaster and the counties of Cornwall, Derby, Devon, Dorset, Nottingham and Somerset, all with the aim of buying his loyalty to Richard whilst the king was on crusade.Warren, pp.39–40. Richard retained royal control of key castles in these counties, thereby preventing John from accumulating too much military and political power, and, for the time being, the king named the four-year-old Arthur of Brittany as the heir to the throne.Barlow, p.293; Warren p.39. In return, John promised not to visit England for the next three years, thereby in theory giving Richard adequate time to conduct a successful crusade and return from the Levant without fear of John seizing power.Warren, p.40. Richard left political authority in England – the post of justiciar – jointly in the hands of Bishop Hugh de Puiset and William Mandeville, and made William Longchamp, the Bishop of Ely, his chancellor.Warren, p.39. Mandeville immediately died, and Longchamp took over as joint justiciar with Puiset, which would prove to be a less than satisfactory partnership. Eleanor, the queen mother, convinced Richard to allow John into England in his absence.
The political situation in England rapidly began to deteriorate. Longchamp refused to work with Puiset and became unpopular with the English nobility and clergy.Warren, p.41. John exploited this unpopularity to set himself up as an alternative ruler with his own royal court, complete with his own justiciar, chancellor and other royal posts, and was happy to be portrayed as an alternative regent, and possibly the next king.Warren, pp.40–1. Armed conflict broke out between John and Longchamp, and by October 1191 Longchamp was isolated in the Tower of London with John in control of the city of London, thanks to promises John had made to the citizens in return for recognition as Richard's heir presumptive.Inwood, p.58. At this point Walter of Coutances, the Archbishop of Rouen, returned to England, having been sent by Richard to restore order.Warren, p.42. John's position was undermined by Walter's relative popularity and by the news that Richard had married whilst in Cyprus, which presented the possibility that Richard would have legitimate children and heirs.Warren, p.43.
thumb|left|alt=An illuminated picture of King John riding a white horse and accompanied by four hounds. The king is chasing a stag, and several rabbits can be seen at the bottom of the picture.|John on a stag hunt
The political turmoil continued. John began to explore an alliance with the French king Philip II, freshly returned from the crusade. John hoped to acquire Normandy, Anjou and the other lands in France held by Richard in exchange for allying himself with Philip. John was persuaded not to pursue an alliance by his mother. Longchamp, who had left England after Walter's intervention, now returned, and argued that he had been wrongly removed as justiciar.Warren, p.44. John intervened, suppressing Longchamp's claims in return for promises of support from the royal administration, including a reaffirmation of his position as heir to the throne. When Richard still did not return from the crusade, John began to assert that his brother was dead or otherwise permanently lost. Richard had in fact been captured en route to England by the Duke of Austria and was handed over to Emperor Henry VI, who held him for ransom. John seized the opportunity and went to Paris, where he formed an alliance with Philip. He agreed to set aside his wife, Isabella of Gloucester, and marry Philip's sister, Alys, in exchange for Philip's support.Warren, p.45. Fighting broke out in England between forces loyal to Richard and those being gathered by John. John's military position was weak and he agreed to a truce; in early 1194 the king finally returned to England, and John's remaining forces surrendered.Warren, p.46. John retreated to Normandy, where Richard finally found him later that year. Richard declared that his younger brother – despite being 27 years old – was merely "a child who has had evil counsellors" and forgave him, but removed his lands with the exception of Ireland.Warren, pp.46–7.
For the remaining years of Richard's reign, John supported his brother on the continent, apparently loyally.Warren, p.47. Richard's policy on the continent was to attempt to regain through steady, limited campaigns the castles he had lost to Philip II whilst on crusade. He allied himself with the leaders of Flanders, Boulogne and the Holy Roman Empire to apply pressure on Philip from Germany.Fryde (2007), p.336. In 1195 John successfully conducted a sudden attack and siege of Évreux castle, and subsequently managed the defences of Normandy against Philip. The following year, John seized the town of Gamaches and led a raiding party within of Paris, capturing the Bishop of Beauvais. In return for this service, Richard withdrew his malevolentia (ill-will) towards John, restored him to the county of Gloucestershire and made him again the Count of Mortain.
Early reign (1199–1204)
Accession to the throne, 1199
thumb|alt=A photograph of a tall grey castle, with a taller keep visible beyond the main walls.|The donjon of Château Gaillard; the loss of the castle would prove devastating for John's military position in Normandy
After Richard's death on 6 April 1199 there were two potential claimants to the Angevin throne: John, whose claim rested on being the sole surviving son of Henry II, and young Arthur I of Brittany, who held a claim as the son of John's elder brother Geoffrey.Carpenter (2004), p.264. Richard appears to have started to recognise John as his heir presumptive in the final years before his death, but the matter was not clear-cut and medieval law gave little guidance as to how the competing claims should be decided.Barlow, p.305; Turner, p.48. With Norman law favouring John as the only surviving son of Henry II and Angevin law favouring Arthur as the only son of Henry's elder son, the matter rapidly became an open conflict. John was supported by the bulk of the English and Norman nobility and was crowned at Westminster, backed by his mother, Eleanor. Arthur was supported by the majority of the Breton, Maine and Anjou nobles and received the support of Philip II, who remained committed to breaking up the Angevin territories on the continent.Warren, p.53. With Arthur's army pressing up the Loire valley towards Angers and Philip's forces moving down the valley towards Tours, John's continental empire was in danger of being cut in two.Warren, p.51.
Warfare in Normandy at the time was shaped by the defensive potential of castles and the increasing costs of conducting campaigns.Barrett, p.91. The Norman frontiers had limited natural defences but were heavily reinforced with castles, such as Château Gaillard, at strategic points, built and maintained at considerable expense.Warren, pp.57–8; Barlow, p.280. It was difficult for a commander to advance far into fresh territory without having secured his lines of communication by capturing these fortifications, which slowed the progress of any attack.Warren, p.57. Armies of the period could be formed from either feudal or mercenary forces.Warren, p.59. Feudal levies could only be raised for a fixed length of time before they returned home, forcing an end to a campaign; mercenary forces, often called Brabançons after the Duchy of Brabant but actually recruited from across northern Europe, could operate all year long and provide a commander with more strategic options to pursue a campaign, but cost much more than equivalent feudal forces.Huscroft, pp.169–70. As a result, commanders of the period were increasingly drawing on larger numbers of mercenaries.Huscroft, p.170.
After his coronation, John moved south into France with military forces and adopted a defensive posture along the eastern and southern Normandy borders.Carpenter (2004), p.264; Turner, p.100. Both sides paused for desultory negotiations before the war recommenced; John's position was now stronger, thanks to confirmation that the counts Baldwin IX of Flanders and Renaud of Boulogne had renewed the anti-French alliances they had previously agreed to with Richard. The powerful Anjou nobleman William des Roches was persuaded to switch sides from Arthur to John; suddenly the balance seemed to be tipping away from Philip and Arthur in favour of John.Warren, p.54. Neither side was keen to continue the conflict, and following a papal truce the two leaders met in January 1200 to negotiate possible terms for peace. From John's perspective, what then followed represented an opportunity to stabilise control over his continental possessions and produce a lasting peace with Philip in Paris. John and Philip negotiated the May 1200 Treaty of Le Goulet; by this treaty, Philip recognised John as the rightful heir to Richard in respect to his French possessions, temporarily abandoning the wider claims of his client, Arthur.Turner, p.98. John, in turn, abandoned Richard's former policy of containing Philip through alliances with Flanders and Boulogne, and accepted Philip's right as the legitimate feudal overlord of John's lands in France.Warren, p.55. John's policy earned him the disrespectful title of "John Softsword" from some English chroniclers, who contrasted his behaviour with his more aggressive brother, Richard.Warren, p.63.
Le Goulet peace, 1200–02
thumb|alt=A photograph of a medieval tomb with a carving of Isabella on top. She is lying with her hands clasped, wearing a blue dress.|The tomb of Isabella of Angoulême, John's second wife, in Fontevraud Abbey
The new peace would only last for two years; war recommenced in the aftermath of John's decision in August 1200 to marry Isabella of Angoulême. In order to remarry, John first needed to abandon Isabel, Countess of Gloucester, his first wife; John accomplished this by arguing that he had failed to get the necessary papal permission to marry Isabel in the first place – as a cousin, John could not have legally wed her without this. It remains unclear why John chose to marry Isabella of Angoulême. Contemporary chroniclers argued that John had fallen deeply in love with Isabella, and John may have been motivated by desire for an apparently beautiful, if rather young, girl. On the other hand, the Angoumois lands that came with Isabella were strategically vital to John: by marrying Isabella, John was acquiring a key land route between Poitou and Gascony, which significantly strengthened his grip on Aquitaine.Turner, p.99.
Unfortunately, Isabella was already engaged to Hugh of Lusignan, an important member of a key Poitou noble family and brother of Count Raoul of Eu, who possessed lands along the sensitive eastern Normandy border. Just as John stood to benefit strategically from marrying Isabella, so the marriage threatened the interests of the Lusignans, whose own lands currently provided the key route for royal goods and troops across Aquitaine.Turner, pp.98–9. Rather than negotiating some form of compensation, John treated Hugh "with contempt"; this resulted in a Lusignan uprising that was promptly crushed by John, who also intervened to suppress Raoul in Normandy.
Although John was the Count of Poitou and therefore the rightful feudal lord over the Lusignans, they could legitimately appeal John's actions in France to his own feudal lord, Philip. Hugh did exactly this in 1201 and Philip summoned John to attend court in Paris in 1202, citing the Le Goulet treaty to strengthen his case. John was unwilling to weaken his authority in western France in this way. He argued that he need not attend Philip's court because of his special status as the Duke of Normandy, who was exempt by feudal tradition from being called to the French court. Philip argued that he was summoning John not as the Duke of Normandy, but as the Count of Poitou, which carried no such special status. When John still refused to come, Philip declared John in breach of his feudal responsibilities, reassigned all of John's lands that fell under the French crown to Arthur – with the exception of Normandy, which he took back for himself – and began a fresh war against John.
Loss of Normandy, 1202–04
thumb|275px|alt=A map of France showing John's bold sweep towards Mirebeau with a red arrow.|John's successful 1202 campaign, which culminated in victory at the battle of Mirebeau; red arrows indicate the movement of John's forces, blue those of Philip II's forces and light blue those of Philip's Breton and Lusignan allies
John initially adopted a defensive posture similar to that of 1199: avoiding open battle and carefully defending his key castles.Turner, p.100. John's operations became more chaotic as the campaign progressed, and Philip began to make steady progress in the east. John became aware in July that Arthur's forces were threatening his mother, Eleanor, at Mirebeau Castle. Accompanied by William de Roches, his seneschal in Anjou, he swung his mercenary army rapidly south to protect her. His forces caught Arthur by surprise and captured the entire rebel leadership at the battle of Mirebeau. With his southern flank weakening, Philip was forced to withdraw in the east and turn south himself to contain John's army.
John's position in France was considerably strengthened by the victory at Mirebeau, but John's treatment of his new prisoners and of his ally, William de Roches, quickly undermined these gains. De Roches was a powerful Anjou noble, but John largely ignored him, causing considerable offence, whilst the king kept the rebel leaders in such bad conditions that twenty-two of them died.Turner, pp.100–1. At this time most of the regional nobility were closely linked through kinship, and this behaviour towards their relatives was regarded as unacceptable.Turner, p.101. William de Roches and others of John's regional allies in Anjou and Brittany deserted him in favour of Philip, and Brittany rose in fresh revolt. John's financial situation was tenuous: once factors such as the comparative military costs of materiel and soldiers were taken into account, Philip enjoyed a considerable, although not overwhelming, advantage of resources over John.Holt (1984), p.94; Turner, p.94; Bradbury (1998), p.159; Moss, p.119.
Further desertions of John's local allies at the beginning of 1203 steadily reduced John's freedom to manoeuvre in the region. He attempted to convince Pope Innocent III to intervene in the conflict, but Innocent's efforts were unsuccessful. As the situation became worse for John, he appears to have decided to have Arthur killed, with the aim of removing his potential rival and of undermining the rebel movement in Brittany. Arthur had initially been imprisoned at Falaise and was then moved to Rouen. After this, Arthur's fate remains uncertain, but modern historians believe he was murdered by John. The annals of Margam Abbey suggest that "John had captured Arthur and kept him alive in prison for some time in the castle of Rouen ... when John was drunk he slew Arthur with his own hand and tying a heavy stone to the body cast it into the Seine."McLynn, p.306. Rumours of the manner of Arthur's death further reduced support for John across the region. Arthur's sister, Eleanor, who had also been captured at Mirebeau, was kept imprisoned by John for many years, albeit in relatively good conditions.Warren, p.83.
thumb|350px|left|alt=A map of Normandy, showing Philip's invasion with a sequence of blue arrows, and the Breton advance from the west shown in light blue.|Philip II's successful invasion of Normandy in 1204; blue arrows indicate the movement of Philip II's forces and light blue Philip's Breton allies
In late 1203, John attempted to relieve Château Gaillard, which although besieged by Philip was guarding the eastern flank of Normandy. John attempted a synchronised operation involving land-based and water-borne forces, considered by most historians today to have been imaginative in conception, but overly complex for forces of the period to have carried out successfully.Turner, p.102. John's relief operation was blocked by Philip's forces, and John turned back to Brittany in an attempt to draw Philip away from eastern Normandy. John successfully devastated much of Brittany, but did not deflect Philip's main thrust into the east of Normandy. Opinions vary amongst historians as to the military skill shown by John during this campaign, with most recent historians arguing that his performance was passable, although not impressive.
John's situation began to deteriorate rapidly. The eastern border region of Normandy had been extensively cultivated by Philip and his predecessors for several years, whilst Angevin authority in the south had been undermined by Richard's giving away of various key castles some years before.Power, pp.135–6. His use of routier mercenaries in the central regions had rapidly eaten away his remaining support in this area too, which set the stage for a sudden collapse of Angevin power.Power, p.135. John retreated back across the Channel in December, sending orders for the establishment of a fresh defensive line to the west of Château Gaillard. In March 1204, Gaillard fell. John's mother Eleanor died the following month. This was not just a personal blow for John, but threatened to unravel the widespread Angevin alliances across the far south of France. Philip moved south around the new defensive line and struck upwards at the heart of the Duchy, now facing little resistance. By August, Philip had taken Normandy and advanced south to occupy Anjou and Poitou as well.Turner, pp.102–3. John's only remaining possession on the Continent was now the Duchy of Aquitaine.Turner, p.103.
John as king
Kingship and royal administration
thumb|alt=A photograph of a hand written medieval pipe roll, with a handwritten list of entries and a formal stamp in the centre of the document|A pipe roll, part of the increasingly sophisticated system of royal governance at the turn of the 13th century
The nature of government under the Angevin monarchs was ill-defined and uncertain. John's predecessors had ruled using the principle of vis et voluntas, or "force and will", taking executive and sometimes arbitrary decisions, often justified on the basis that a king was above the law.Turner, p.149. Both Henry II and Richard had argued that kings possessed a quality of "divine majesty"; John continued this trend and claimed an "almost imperial status" for himself as ruler. During the 12th century, there were contrary opinions expressed about the nature of kingship, and many contemporary writers believed that monarchs should rule in accordance with the custom and the law, and take counsel of the leading members of the realm. There was as yet no model for what should happen if a king refused to do so. Despite his claim to unique authority within England, John would sometimes justify his actions on the basis that he had taken counsel with the barons. Modern historians remain divided as to whether John suffered from a case of "royal schizophrenia" in his approach to government, or if his actions merely reflected the complex model of Angevin kingship in the early 13th century.Warren, p.178; Turner, p.156.
John inherited a sophisticated system of administration in England, with a range of royal agents answering to the Royal Household: the Chancery kept written records and communications; the Treasury and the Exchequer dealt with income and expenditure respectively; and various judges were deployed to deliver justice around the kingdom.Warren, p.127. Thanks to the efforts of men like Hubert Walter, this trend towards improved record keeping continued into his reign.Bartlett, p.200. Like previous kings, John managed a peripatetic court that travelled around the kingdom, dealing with both local and national matters as he went.Warren, p.130. John was very active in the administration of England and was involved in every aspect of government.Warren, p.132. In part he was following in the tradition of Henry I and Henry II, but by the 13th century the volume of administrative work had greatly increased, which put much more pressure on a king who wished to rule in this style. John was in England for much longer periods than his predecessors, which made his rule more personal than that of previous kings, particularly in previously ignored areas such as the north.Warren, p.132; Huscroft, p.171.
The administration of justice was of particular importance to John. Several new processes had been introduced to English law under Henry II, including novel disseisin and mort d'ancestor.Huscroft, p.182. These processes meant the royal courts had a more significant role in local law cases, which had previously been dealt with only by regional or local lords.Huscroft, p.184. John increased the professionalism of local sergeants and bailiffs, and extended the system of coroners first introduced by Hubert Walter in 1194, creating a new class of borough coroners.McLynn, p.366; Hunnisett, pp.1–3. John worked extremely hard to ensure that this system operated well, through judges he had appointed, by fostering legal specialists and expertise, and by intervening in cases himself.Warren, pp.143–4. John continued to try relatively minor cases, even during military crises.Warren, p.144. Viewed positively, Lewis Warren considers that John discharged "his royal duty of providing justice ... with a zeal and a tirelessness to which the English common law is greatly indebted". Seen more critically, John may have been motivated by the potential of the royal legal process to raise fees, rather than a desire to deliver simple justice; John's legal system also only applied to free men, rather than to all of the population.McLynn, p.366. Nonetheless, these changes were popular with many free tenants, who acquired a more reliable legal system that could bypass the barons, against whom such cases were often brought.Carpenter (2004), p.273. John's reforms were less popular with the barons themselves, especially as they remained subject to arbitrary and frequently vindictive royal justice.
Economy
thumb|alt=A photograph of the front and back of a silver penny, the design dominated by a triangle in the centre of each coin. One side shows King John's head.|A silver King John penny, amongst the first to be struck in Dublin
One of John's principal challenges was acquiring the large sums of money needed for his proposed campaigns to reclaim Normandy.Turner, p.79. The Angevin kings had three main sources of income available to them, namely revenue from their personal lands, or demesne; money raised through their rights as a feudal lord; and revenue from taxation. Revenue from the royal demesne was inflexible and had been diminishing slowly since the Norman conquest. Matters were not helped by Richard's sale of many royal properties in 1189, and taxation played a much smaller role in royal income than in later centuries. English kings had widespread feudal rights which could be used to generate income, including the scutage system, in which feudal military service was avoided by a cash payment to the king. The king also derived income from fines, court fees and the sale of charters and other privileges.Lawler and Lawler, p.6. John intensified his efforts to maximise all possible sources of income, to the extent that he has been described as "avaricious, miserly, extortionate and moneyminded".McLynn, p.288. John also used revenue generation as a way of exerting political control over the barons: debts owed to the crown by the king's favoured supporters might be forgiven; collection of those owed by enemies was more stringently enforced.
left|thumb|alt=A photograph of the "heads" side of a silver coin|A silver King John penny
The result was a sequence of innovative but unpopular financial measures. John levied scutage payments eleven times in his seventeen years as king, as compared to eleven times in total during the reign of the preceding three monarchs.Turner, p.87. In many cases these were levied in the absence of any actual military campaign, which ran counter to the original idea that scutage was an alternative to actual military service. John maximised his right to demand relief payments when estates and castles were inherited, sometimes charging enormous sums, beyond barons' abilities to pay. Building on the successful sale of sheriff appointments in 1194, John initiated a new round of appointments, with the new incumbents making back their investment through increased fines and penalties, particularly in the forests.Carpenter (2004), p.272. Another innovation of Richard's, increased charges levied on widows who wished to remain single, was expanded under John. John continued to sell charters for new towns, including the planned town of Liverpool, and charters were sold for markets across the kingdom and in Gascony.Hodgett, p. 57; Johnson, p.142. The king introduced new taxes and extended existing ones. The Jews, who held a vulnerable position in medieval England, protected only by the king, were subject to huge taxes; £44,000 was extracted from the community by the tallage of 1210; much of it was passed on to the Christian debtors of Jewish moneylenders.Early medieval financial figures have no easy contemporary equivalent, due to the different role of money in the economy. John created a new tax on income and movable goods in 1207 – effectively a version of a modern income tax – that produced £60,000; he created a new set of import and export duties payable directly to the crown.Turner, p.95. John found that these measures enabled him to raise further resources through the confiscation of the lands of barons who could not pay or refused to pay.Turner, p.148.
At the start of John's reign there was a sudden change in prices, as bad harvests and high demand for food resulted in much higher prices for grain and animals. This inflationary pressure was to continue for the rest of the 13th century and had long-term economic consequences for England.Danziger and Gillingham, p. 44. The resulting social pressures were complicated by bursts of deflation that resulted from John's military campaigns.Bolton pp.32–3. It was usual at the time for the king to collect taxes in silver, which was then re-minted into new coins; these coins would then be put in barrels and sent to royal castles around the country, to be used to hire mercenaries or to meet other costs.Stenton, p.163. At those times when John was preparing for campaigns in Normandy, for example, huge quantities of silver had to be withdrawn from the economy and stored for months, which unintentionally resulted in periods during which silver coins were simply hard to come by, commercial credit difficult to acquire and deflationary pressure placed on the economy. The result was political unrest across the country.Bolton, p.40. John attempted to address some of the problems with the English currency in 1204 and 1205 by carrying out a radical overhaul of the coinage, improving its quality and consistency.Barlow, p.329.
Royal household and ira et malevolentia
thumb|upright|King John presenting a church, painted c.1250-59 by Matthew Paris in his Historia Anglorum
John's royal household was based around several groups of followers. One group was the familiares regis, John's immediate friends and knights who travelled around the country with him. They also played an important role in organising and leading military campaigns.Turner, pp.144–5; Church (1999), p.133. Another section of royal followers was the curia regis; these curiales were the senior officials and agents of the king and were essential to his day-to-day rule.Turner, p.144. Being a member of these inner circles brought huge advantages, as it was easier to gain favours from the king, file lawsuits, marry a wealthy heiress or have one's debts remitted.Turner, p.147. By the time of Henry II, these posts were increasingly being filled by "new men" from outside the normal ranks of the barons. This intensified under John's rule, with many lesser nobles arriving from the continent to take up positions at court; many were mercenary leaders from Poitou.Turner, p.145. These men included soldiers who would become infamous in England for their uncivilised behaviour, including Falkes de Breauté, Gerard d'Athies, Engelard de Cigongé and Philip Marc.Barlow, p.326. Many barons perceived the king's household as what Ralph Turner has characterised as a "narrow clique enjoying royal favour at barons' expense" staffed by men of lesser status.
This trend for the king to rely on his own men at the expense of the barons was exacerbated by the tradition of Angevin royal ira et malevolentia – "anger and ill-will" – and John's own personality.Huscroft, p.70. From Henry II onwards, ira et malevolentia had come to describe the right of the king to express his anger and displeasure at particular barons or clergy, building on the Norman concept of malevoncia – royal ill-will.Huscroft, p.170; Mason, p.128. In the Norman period, suffering the king's ill-will meant difficulties in obtaining grants, honours or petitions; Henry II had infamously expressed his fury and ill-will towards Thomas Becket; this ultimately resulted in Becket's death. John now had the additional ability to "cripple his vassals" on a significant scale using his new economic and judicial measures, which made the threat of royal anger all the more serious.Warren, p.184.
John was deeply suspicious of the barons, particularly those with sufficient power and wealth to potentially challenge the king. Numerous barons were subjected to John's malevolentia, even including William Marshal, a famous knight and baron normally held up as a model of utter loyalty.Warren, p.185. The most infamous case, which went beyond anything considered acceptable at the time, proved to be that of William de Braose, a powerful marcher lord with lands in Ireland.Warren, p.184; Turner, p.23. De Braose was subjected to punitive demands for money, and when he refused to pay a huge sum of 40,000 marks (equivalent to £26,666 at the time),Both the mark and the pound sterling were accountancy terms in this period; a mark was worth around two-thirds of a pound. his wife and one of his sons were imprisoned by John, which resulted in their deaths.Warren, p.185; Turner, p.169. De Braose died in exile in 1211, and his grandsons remained in prison until 1218. John's suspicions and jealousies meant that he rarely enjoyed good relationships with even the leading loyalist barons.Turner, p.139.
Personal life
thumb|350px|alt=A family tree, with John in a circle and his children's heads represented in circles, linked by coloured lines.|A 13th-century depiction of John and his legitimate children, (l to r) Henry, Richard, Isabella, Eleanor, and Joan
John's personal life greatly affected his reign. Contemporary chroniclers state that John was sinfully lustful and lacking in piety.Turner, p.166. It was common for kings and nobles of the period to keep mistresses, but chroniclers complained that John's mistresses were married noblewomen, which was considered unacceptable. John had at least five children with mistresses during his first marriage to Isabel of Gloucester, and two of those mistresses are known to have been noblewomen.Turner, p.166, Vincent, p.193. John's behaviour after his second marriage to Isabella of Angoulême is less clear, however. None of John's known illegitimate children were born after he remarried, and there is no actual documentary proof of adultery after that point, although John certainly had female friends amongst the court throughout the period.Vincent, p.193. The specific accusations made against John during the baronial revolts are now generally considered to have been invented for the purposes of justifying the revolt; nonetheless, most of John's contemporaries seem to have held a poor opinion of his sexual behaviour.
The character of John's relationship with his second wife, Isabella of Angoulême, is unclear. John married Isabella whilst she was relatively young – her exact date of birth is uncertain, and estimates of her age at marriage range from at most 15 down to, more probably, about nine years old.Vincent, pp.174–5. Even by the standards of the time, Isabella was married whilst very young.Vincent, p.175. John did not provide a great deal of money for his wife's household and did not pass on much of the revenue from her lands, to the extent that historian Nicholas Vincent has described him as being "downright mean" towards Isabella.Vincent, p.184. Vincent concluded that the marriage was not a particularly "amicable" one.Vincent, p.196. Other aspects of their marriage suggest a closer, more positive relationship. Chroniclers recorded that John had a "mad infatuation" with Isabella, and certainly John had conjugal relationships with Isabella between at least 1207 and 1215; they had five children.Turner, p.98; Vincent, p.196. In contrast to Vincent, historian William Chester Jordan concludes that the pair were a "companionable couple" who had a successful marriage by the standards of the day.Jordan, cited Turner, p.12.
John's lack of religious conviction was noted by contemporary chroniclers and later historians, with some suspecting that John was at best impious, or even atheistic, a very serious issue at the time.McLynn, p.290. Contemporary chroniclers catalogued his various anti-religious habits at length, including his failure to take communion, his blasphemous remarks, and his witty but scandalous jokes about church doctrine, notably about the implausibility of the Resurrection. They commented on the paucity of John's charitable donations to the church.McLynn, pp.78, 290. Historian Frank McLynn argues that John's early years at Fontevrault, combined with his relatively advanced education, may have turned him against the church. Other historians have been more cautious in interpreting this material, noting that chroniclers also reported John's personal interest in the life of St Wulfstan of Worcester and his friendships with several senior clerics, most especially with Hugh of Lincoln, who was later declared a saint.Turner, p.120. Financial records show a normal royal household engaged in the usual feasts and pious observances – albeit with many records showing John's offerings to the poor to atone for routinely breaking church rules and guidance.Turner, p.120; Carpenter (2004), p.276. The historian Lewis Warren has argued that the chronicler accounts were subject to considerable bias and the king was "at least conventionally devout," citing his pilgrimages and interest in religious scripture and commentaries.Warren, pp.171–2.
Later reign (1204–14)
Continental policy
thumb|300px|alt=A drawing of a medieval castle, with a tall tower with a flag on top; a crossbowman is firing an arrow from the battlements at two horsemen.|An early 13th-century drawing by Matthew Paris showing contemporary warfare, including the use of castles, crossbowmen and mounted knights
During the remainder of his reign, John focused on trying to retake Normandy.Turner, p.106. The available evidence suggests that John did not regard the loss of the Duchy as a permanent shift in Capetian power. Strategically, John faced several challenges:Turner, pp.106–7. England itself had to be secured against possible French invasion, the sea-routes to Bordeaux needed to be secured following the loss of the land route to Aquitaine, and his remaining possessions in Aquitaine needed to be secured following the death of his mother, Eleanor, in April 1204. John's preferred plan was to use Poitou as a base of operations, advance up the Loire valley to threaten Paris, pin down the French forces and break Philip's internal lines of communication before landing a maritime force in the Duchy itself. Ideally, this plan would benefit from the opening of a second front on Philip's eastern frontiers with Flanders and Boulogne – effectively a re-creation of Richard's old strategy of applying pressure from Germany. All of this would require a great deal of money and soldiers.Turner, p.107.
John spent much of 1205 securing England against a potential French invasion. As an emergency measure, John recreated a version of Henry II's Assize of Arms of 1181, with each shire creating a structure to mobilise local levies. When the threat of invasion faded, John formed a large military force in England intended for Poitou, and a large fleet with soldiers under his own command intended for Normandy. To achieve this, John reformed the English feudal contribution to his campaigns, creating a more flexible system under which only one knight in ten would actually be mobilised, but would be financially supported by the other nine; knights would serve for an indefinite period. John built up a strong team of engineers for siege warfare and a substantial force of professional crossbowmen.Barlow, p.336. The king was supported by a team of leading barons with military expertise, including William Longespée, William the Marshal, Roger de Lacy and, until he fell from favour, the marcher lord William de Braose.
John had already begun to improve his Channel forces before the loss of Normandy and he rapidly built up further maritime capabilities after its collapse. Most of these ships were placed along the Cinque Ports, but Portsmouth was also enlarged.Warren, p.123. By the end of 1204 he had around 50 large galleys available; another 54 vessels were built between 1209 and 1212.Turner, p.106; Warren, p.123 William of Wrotham was appointed "keeper of the galleys", effectively John's chief admiral. Wrotham was responsible for fusing John's galleys, the ships of the Cinque Ports and pressed merchant vessels into a single operational fleet. John adopted recent improvements in ship design, including new large transport ships called buisses and removable forecastles for use in combat.
thumb|left|250px|alt=A medieval drawing of William the Marshal riding a horse, impaling another knight with a lance.|William the Marshal (l), one of John's most senior military leaders, by Matthew Paris
Baronial unrest in England prevented the departure of the planned 1205 expedition, and only a smaller force under William Longespée deployed to Poitou. In 1206 John departed for Poitou himself, but was forced to divert south to counter a threat to Gascony from Alfonso VIII of Castile.Turner, p.107. After a successful campaign against Alfonso, John headed north again, taking the city of Angers. Philip moved south to meet John; the year's campaigning ended in stalemate and a two-year truce was made between the two rulers.Turner, pp.107–8.
During the truce of 1206–1208, John focused on building up his financial and military resources in preparation for another attempt to recapture Normandy.Turner, p.108. John used some of this money to pay for new alliances on Philip's eastern frontiers, where the growth in Capetian power was beginning to concern France's neighbours. By 1212 John had successfully concluded alliances with his nephew Otto IV, a contender for the crown of Holy Roman Emperor in Germany, as well as with the counts Renaud of Boulogne and Ferdinand of Flanders. The invasion plans for 1212 were postponed because of fresh English baronial unrest about service in Poitou. Philip seized the initiative in 1213, sending his elder son, Louis, to invade Flanders with the intention of next launching an invasion of England. John was forced to postpone his own invasion plans to counter this threat. He launched his new fleet to attack the French at the harbour of Damme.Turner, p.109. The attack was a success, destroying Philip's vessels and any chances of an invasion of England that year. John hoped to exploit this advantage by invading himself late in 1213, but baronial discontent again delayed his invasion plans until early 1214, in what would prove to be his final Continental campaign.
Scotland, Ireland and Wales
thumb|alt=A drawing of King John wearing a crown and a red robe. The king is sat down and stroking two hunting dogs.|A 13th-century depiction of John with two hunting dogs
In the late 12th and early 13th centuries the border and political relationship between England and Scotland was disputed, with the kings of Scotland claiming parts of what is now northern England. John's father, Henry II, had forced William the Lion to swear fealty to him at the Treaty of Falaise in 1174.Carpenter (2004), p.224. This had been rescinded by Richard I in exchange for financial compensation in 1189, but the relationship remained uneasy.Carpenter (2004), p.255. John began his reign by reasserting his sovereignty over the disputed northern counties. He refused William's request for the earldom of Northumbria, but did not intervene in Scotland itself and focused on his continental problems.Carpenter (2004), p.277; Duncan, p.251. The two kings maintained a friendly relationship, meeting in 1206 and 1207,Duncan, p.252. until it was rumoured in 1209 that William was intending to ally himself with Philip II of France.Carpenter (2004), p.277; Duncan, p.260 John invaded Scotland and forced William to sign the Treaty of Norham, which gave John control of William's daughters and required a payment of £10,000.Carpenter (2004), p.277. This effectively crippled William's power north of the border, and by 1212 John had to intervene militarily to support the Scottish king against his internal rivals. John made no efforts to reinvigorate the Treaty of Falaise, though, and both William and Alexander in turn remained independent kings, supported by, but not owing fealty to, John.Duncan, p.268.
John remained Lord of Ireland throughout his reign. He drew on the country for resources to fight his war with Philip on the continent.Carpenter (2004), p.278. Conflict continued in Ireland between the Anglo-Norman settlers and the indigenous Irish chieftains, with John manipulating both groups to expand his wealth and power in the country. During Richard's rule, John had successfully increased the size of his lands in Ireland, and he continued this policy as king.Carpenter (2004), pp.278–9. In 1210 the king crossed into Ireland with a large army to crush a rebellion by the Anglo-Norman lords; he reasserted his control of the country and used a new charter to order compliance with English laws and customs in Ireland.Carpenter (2004), pp.280–1. John stopped short of trying to actively enforce this charter on the native Irish kingdoms, but historian David Carpenter suspects that he might have done so, had the baronial conflict in England not intervened. Simmering tensions remained with the native Irish leaders even after John left for England.Carpenter (2004), p.282; Duffy, pp.242–3.
Royal power in Wales was unevenly applied, with the country divided between the marcher lords along the borders, royal territories in Pembrokeshire and the more independent native Welsh lords of North Wales. John took a close interest in Wales and knew the country well, visiting every year between 1204 and 1211 and marrying his illegitimate daughter, Joan, to the Welsh prince Llywelyn the Great.Carpenter (2004), pp.282–3. The king used the marcher lords and the native Welsh to increase his own territory and power, striking a sequence of increasingly precise deals with the Welsh rulers, backed by royal military power.Carpenter (2004), p.283. A major royal expedition to enforce these agreements occurred in 1211, after Llywelyn attempted to exploit the instability caused by the removal of William de Braose through the Welsh uprising of that year.Carpenter (2004), p.284. John's invasion, striking into the Welsh heartlands, was a military success. Llywelyn came to terms that included an expansion of John's power across much of Wales, albeit only temporarily.
Dispute with the Pope
thumb|alt=A painting of Pope Innocent III, wearing his formal robes and a tall, pointed hat.|Pope Innocent III, who excommunicated John in 1209
When the Archbishop of Canterbury, Hubert Walter, died on 13 July 1205, John became involved in a dispute with Pope Innocent III that would lead to the king's excommunication. The Norman and Angevin kings had traditionally exercised a great deal of power over the church within their territories. From the 1040s onwards, however, successive popes had put forward a reforming message that emphasised the importance of the church being "governed more coherently and more hierarchically from the centre" and established "its own sphere of authority and jurisdiction, separate from and independent of that of the lay ruler", in the words of historian Richard Huscroft.Huscroft, p.190. After the 1140s, these principles had been largely accepted within the English church, albeit with an element of concern about centralising authority in Rome.Huscroft, p.189; Turner, p.121. These changes brought the customary rights of lay rulers such as John over ecclesiastical appointments into question. Pope Innocent was, according to historian Ralph Turner, an "ambitious and aggressive" religious leader, insistent on his rights and responsibilities within the church.Turner, p.119.
John wanted John de Gray, the Bishop of Norwich and one of his own supporters, to be appointed Archbishop of Canterbury after the death of Walter, but the cathedral chapter for Canterbury Cathedral claimed the exclusive right to elect Walter's successor. They favoured Reginald, the chapter's sub-prior.Turner, p.125. To complicate matters, the bishops of the province of Canterbury also claimed the right to appoint the next archbishop. The chapter secretly elected Reginald and he travelled to Rome to be confirmed; the bishops challenged the appointment and the matter was taken before Innocent.Turner, pp.125–6. John forced the Canterbury chapter to change their support to John de Gray, and a messenger was sent to Rome to inform the papacy of the new decision.Turner, p.126. Innocent disavowed both Reginald and John de Gray, and instead appointed his own candidate, Stephen Langton. John refused Innocent's request that he consent to Langton's appointment, but the pope consecrated Langton anyway in June 1207.
John was incensed about what he perceived as an abrogation of his customary right as monarch to influence the election. He complained both about the choice of Langton as an individual, as John felt he was overly influenced by the Capetian court in Paris, and about the process as a whole.Turner, p.127. He barred Langton from entering England and seized the lands of the archbishopric and other papal possessions. Innocent set a commission in place to try to convince John to change his mind, but to no avail. Innocent then placed an interdict on England in March 1208, prohibiting clergy from conducting religious services, with the exception of baptisms for the young, and confessions and absolutions for the dying.Turner, p.128; Harper-Bill, p.304.
thumb|left|alt=A photograph of a tall stone castle keep; most of the towers are square, but one is circular.|Rochester Castle, one of the many properties owned by the disputed archbishopric of Canterbury, and an important fortification in the final years of John's reign
John treated the interdict as "the equivalent of a papal declaration of war".Turner, p.128. He responded by attempting to punish Innocent personally and to drive a wedge between those English clergy that might support him and those allying themselves firmly with the authorities in Rome. John seized the lands of those clergy unwilling to conduct services, as well as those estates linked to Innocent himself; he arrested the illicit concubines that many clerics kept during the period, only releasing them after the payment of fines; he seized the lands of members of the church who had fled England, and he promised protection for those clergy willing to remain loyal to him. In many cases, individual institutions were able to negotiate terms for managing their own properties and keeping the produce of their estates.Poole, pp.446–7. By 1209 the situation showed no signs of resolution, and Innocent threatened to excommunicate John if he did not acquiesce to Langton's appointment.Turner, p.131. When this threat failed, Innocent excommunicated the king in November 1209. Although theoretically a significant blow to John's legitimacy, this did not appear to greatly worry the king. Two of John's close allies, Emperor Otto IV and Count Raymond VI of Toulouse, had already suffered the same punishment themselves, and the significance of excommunication had been somewhat devalued. John simply tightened his existing measures and accrued significant sums from the income of vacant sees and abbeys: one 1213 estimate, for example, suggested that the church had lost some 100,000 marks (equivalent to £66,666 at the time) to John.Harper-Bill, p.306. Official figures suggest that around 14% of the annual income of the English church was being appropriated by John.Harper-Bill, p.307.
Innocent gave some dispensations as the crisis progressed.Harper-Bill, p.304. Monastic communities were allowed to celebrate Mass in private from 1209 onwards, and late in 1212 the Holy Viaticum for the dying was authorised.Harper-Bill, pp.304–5. The rules on burials and lay access to churches appear to have been steadily circumvented, at least unofficially. Although the interdict was a burden to much of the population, it did not result in rebellion against John. By 1213, though, John was increasingly worried about the threat of French invasion.Turner, p.133. Some contemporary chroniclers suggested that in January Philip II of France had been charged with deposing John on behalf of the papacy, although it appears that Innocent merely prepared secret letters in case he needed to claim the credit if Philip did successfully invade England.Bartlett, pp.404–5; Turner, p.133.
Under mounting political pressure, John finally negotiated terms for a reconciliation, and the papal terms for submission were accepted in the presence of the papal legate Pandulf Verraccio in May 1213 at the Templar Church at Dover.Turner, p.133; Lloyd, p.213. As part of the deal, John offered to surrender the Kingdom of England to the papacy for a feudal service of 1,000 marks (equivalent to £666 at the time) annually: 700 marks (£466) for England and 300 marks (£200) for Ireland, as well as recompensing the church for revenue lost during the crisis.Turner, p.133; Harper-Bill, p.308. The agreement was formalised in the Bulla Aurea, or Golden Bull. This resolution produced mixed responses. Although some chroniclers felt that John had been humiliated by the sequence of events, there was little public reaction.Turner, pp.133–4. Innocent benefited from the resolution of his long-standing English problem, but John probably gained more, as Innocent became a firm supporter of John for the rest of his reign, backing him in both domestic and continental policy issues.Turner, p.134. Innocent immediately turned against Philip, calling upon him to reject plans to invade England and to sue for peace. John paid some of the compensation money he had promised the church, but he ceased making payments in late 1214, leaving two-thirds of the sum unpaid; Innocent appears to have conveniently forgotten this debt for the good of the wider relationship.Harper-Bill, p.308.
Failure in France and the First Barons' War (1215–16)
thumb|alt=An illuminated picture of two armies of mounted knights fighting; the French side are on the left, the Imperial on the right.|The French victory at the battle of Bouvines doomed John's plan to retake Normandy in 1214 and led to the First Barons' War.
Tensions and discontent
Tensions between John and the barons had been growing for several years, as demonstrated by the 1212 plot against the king.Turner, pp.173–4. Many of the disaffected barons came from the north of England; that faction was often labelled by contemporaries and historians as "the Northerners". The northern barons rarely had any personal stake in the conflict in France, and many of them owed large sums of money to John; the revolt has been characterised as "a rebellion of the king's debtors".Carpenter (2004), p.273, after Holt (1961). Many of John's military household joined the rebels, particularly amongst those that John had appointed to administrative roles across England; their local links and loyalties outweighed their personal loyalty to John.Church (1999), p.154. Tension also grew across North Wales, where opposition to the 1211 treaty between John and Llywelyn was turning into open conflict.Rowlands, pp.284–5. For some the appointment of Peter des Roches as justiciar was an important factor, as he was considered an "abrasive foreigner" by many of the barons.Carpenter (2004), p.287. The failure of John's French military campaign in 1214 was probably the final straw that precipitated the baronial uprising during John's final years as king; James Holt describes the path to civil war as "direct, short and unavoidable" following the defeat at Bouvines.Turner, pp.173–4; Holt (1961), p.100.
Failure of the 1214 French campaign
In 1214 John began his final campaign to reclaim Normandy from Philip. John was optimistic, as he had successfully built up alliances with the Emperor Otto, Renaud of Boulogne and Count Ferdinand of Flanders; he was enjoying papal favour; and he had successfully built up substantial funds to pay for the deployment of his experienced army.Barlow, p.335. Nonetheless, when John left for Poitou in February 1214, many barons refused to provide military service; mercenary knights had to fill the gaps.Carpenter (2004), p.286. John's plan was to split Philip's forces by pushing north-east from Poitou towards Paris, whilst Otto, Renaud and Ferdinand, supported by William Longespée, marched south-west from Flanders.
The first part of the campaign went well, with John outmanoeuvring the forces under the command of Prince Louis and retaking the county of Anjou by the end of June.Carpenter (2004), p.286; Warren, p.221. John besieged the castle of Roche-au-Moine, a key stronghold, forcing Louis to give battle against John's larger army.Warren, p.222. The local Angevin nobles refused to advance with the king; left at something of a disadvantage, John retreated to La Rochelle. Shortly afterwards, Philip won the hard-fought battle of Bouvines in the north against Otto and John's other allies, bringing an end to John's hopes of retaking Normandy.Warren, p.224. A peace agreement was signed in which John returned Anjou to Philip and paid the French king compensation; the truce was intended to last for six years. John arrived back in England in October.
Pre-war tensions and Magna Carta
thumb|alt=A photograph of a page of Magna Carta, a wide page of dense, small medieval writing.|An original version of Magna Carta, agreed by John and the barons in 1215
Within a few months of John's return, rebel barons in the north and east of England were organising resistance to his rule.Turner, p.174. John held a council in London in January 1215 to discuss potential reforms and sponsored discussions in Oxford between his agents and the rebels during the spring.Turner, p.178. John appears to have been playing for time until Pope Innocent III could send letters giving him explicit papal support. This was particularly important for John, as a way of pressuring the barons but also as a way of controlling Stephen Langton, the Archbishop of Canterbury.Turner, p.179. In the meantime, John began to recruit fresh mercenary forces from Poitou, although some were later sent back to avoid giving the impression that the king was escalating the conflict. John announced his intent to become a crusader, a move which gave him additional political protection under church law.Warren, p.233.
Letters of support from the pope arrived in April but by then the rebel barons had organised. They congregated at Northampton in May and renounced their feudal ties to John, appointing Robert fitz Walter as their military leader.Turner, p.174, p.179. This self-proclaimed "Army of God" marched on London, taking the capital as well as Lincoln and Exeter.Turner, p.180. John's efforts to appear moderate and conciliatory had been largely successful, but once the rebels held London they attracted a fresh wave of defectors from John's royalist faction. John instructed Langton to organise peace talks with the rebel barons.
John met the rebel leaders at Runnymede, near Windsor Castle, on 15 June 1215. Langton's efforts at mediation created a charter capturing the proposed peace agreement; it was later renamed Magna Carta, or "Great Charter".Turner, pp.180, 182. The charter went beyond simply addressing specific baronial complaints, and formed a wider proposal for political reform, albeit one focusing on the rights of free men, not serfs and unfree labour.Turner, p.182. It promised the protection of church rights, protection from illegal imprisonment, access to swift justice, new taxation only with baronial consent and limitations on scutage and other feudal payments.Turner, pp.184–5. A council of twenty-five barons would be created to monitor and ensure John's future adherence to the charter, whilst the rebel army would stand down and London would be surrendered to the king.Turner, p.189.
Neither John nor the rebel barons seriously attempted to implement the peace accord. The rebel barons suspected that the proposed baronial council would be unacceptable to John and that he would challenge the legality of the charter; they packed the baronial council with their own hardliners and refused to demobilise their forces or surrender London as agreed.Turner, pp.189–190. Despite his promises to the contrary, John appealed to Innocent for help, observing that the charter compromised the pope's rights under the 1213 agreement that had appointed him John's feudal lord.Turner, p.190. Innocent obliged; he declared the charter "not only shameful and demeaning, but illegal and unjust" and excommunicated the rebel barons. The failure of the agreement led rapidly to the First Barons' War.
War with the barons
thumb|250px|alt=A map of England showing King John's march north and back south with solid black and dashed arrows.|John's campaign from September 1215 to March 1216
The rebels made the first move in the war, seizing the strategic Rochester Castle, owned by Langton but left almost unguarded by the archbishop.Turner, p.192. John was well prepared for a conflict. He had stockpiled money to pay for mercenaries and ensured the support of the powerful marcher lords with their own feudal forces, such as William Marshal and Ranulf of Chester.Turner, p.191. The rebels lacked the engineering expertise or heavy equipment necessary to assault the network of royal castles that cut off the northern rebel barons from those in the south.Turner, p.191; Barlow, p.354. John's strategy was to isolate the rebel barons in London, protect his own supply lines to his key source of mercenaries in Flanders, prevent the French from landing in the south-east, and then win the war through slow attrition. John put off dealing with the badly deteriorating situation in North Wales, where Llywelyn the Great was leading a rebellion against the 1211 settlement.Rowlands, pp.286–7.
John's campaign started well. In November John retook Rochester Castle from rebel baron William d'Aubigny in a sophisticated assault. One chronicler had not seen "a siege so hard pressed or so strongly resisted", whilst historian Reginald Brown describes it as "one of the greatest [siege] operations in England up to that time".Turner, p.192 citing Brown, pp.10–11; Turner, p.193. Having regained the south-east, John split his forces, sending William Longespée to retake the north side of London and East Anglia, whilst John himself headed north via Nottingham to attack the estates of the northern barons.Turner, p.193. Both operations were successful and the majority of the remaining rebels were pinned down in London. In January 1216 John marched against Alexander II of Scotland, who had allied himself with the rebel cause.Duncan, p.267. John took back Alexander's possessions in northern England in a rapid campaign and pushed up towards Edinburgh over a ten-day period.
The rebel barons responded by inviting the French prince Louis to lead them: Louis had a claim to the English throne by virtue of his marriage to Blanche of Castile, a granddaughter of Henry II.Turner, pp.191–2. Philip may have provided him with private support but refused to openly support Louis, who was excommunicated by Innocent for taking part in the war against John. Louis' planned arrival in England presented a significant problem for John, as the prince would bring with him naval vessels and siege engines essential to the rebel cause.Barlow, p.356. Once John contained Alexander in Scotland, he marched south to deal with the challenge of the coming invasion.
Prince Louis intended to land in the south of England in May 1216, and John assembled a naval force to intercept him. Unfortunately for John, his fleet was dispersed by bad storms and Louis landed unopposed in Kent. John hesitated and decided not to attack Louis immediately, either due to the risks of open battle or over concerns about the loyalty of his own men. Louis and the rebel barons advanced west and John retreated, spending the summer reorganising his defences across the rest of the kingdom.Turner, p.194. John saw several of his military household desert to the rebels, including his half-brother, William Longespée. By the end of the summer the rebels had regained the south-east of England and parts of the north.
Death
thumb|alt=A photograph of the tomb of King John; a large carved, square, stone block supports a carved effigy of the king lying down.|King John's tomb in Worcester Cathedral
In September 1216 John began a fresh, vigorous attack. He marched from the Cotswolds, feigned an offensive to relieve the besieged Windsor Castle, and attacked eastwards around London to Cambridge to separate the rebel-held areas of Lincolnshire and East Anglia.Turner, p.194; Warren, p.253. From there he travelled north to relieve the rebel siege at Lincoln and back east to King's Lynn, probably to order further supplies from the continent.Warren, p.253.The town of King's Lynn was simply called Lynn in the 13th century. In King's Lynn, John contracted dysentery, which would ultimately prove fatal. Meanwhile, Alexander II invaded northern England again, taking Carlisle in August and then marching south to give homage to Prince Louis for his English possessions; John narrowly missed intercepting Alexander along the way.Turner, p.194; Duncan, p.267; Warren, p.253. Tensions between Louis and the English barons began to increase, prompting a wave of desertions, including William Marshal's son William and William Longespée, who both returned to John's faction.McLynn, p.455; Warren, p.253.
The king returned west but is said to have lost a significant part of his baggage train along the way.Warren, p.254. Roger of Wendover provides the most graphic account of this, suggesting that the king's belongings, including the Crown Jewels, were lost as he crossed one of the tidal estuaries which empties into the Wash, being sucked in by quicksand and whirlpools. Accounts of the incident vary considerably between the various chroniclers and the exact location of the incident has never been confirmed; the losses may have involved only a few of his pack-horses.Warren, pp.284–5; Barlow, p.356. Modern historians assert that by October 1216 John faced a "stalemate", "a military situation uncompromised by defeat".Turner, p.195; Barlow, p.357.
John's illness grew worse and by the time he reached Newark Castle he was unable to travel any farther; John died on the night of 18/19 October.Warren, pp.254–5. Numerous – probably fictitious – accounts circulated soon after his death that he had been killed by poisoned ale, poisoned plums or a "surfeit of peaches".Given-Wilson, p.87. His body was escorted south by a company of mercenaries and he was buried in Worcester Cathedral in front of the altar of St Wulfstan.Warren, p.255; McLynn, p.460. A new sarcophagus with an effigy was made for him in 1232, in which his remains now rest.Danziger and Gillingham, p.270.
Legacy
In the aftermath of John's death, William Marshal was declared the protector of the nine-year-old Henry III.McLynn, p.460. The civil war continued until royalist victories at the battles of Lincoln and Dover in 1217. Louis gave up his claim to the English throne and signed the Treaty of Lambeth. The failed Magna Carta agreement was resuscitated by Marshal's administration and reissued in an edited form in 1217 as a basis for future government.Danziger and Gillingham, p.271; Huscroft, p.151. Henry III continued his attempts to reclaim Normandy and Anjou until 1259, but John's continental losses and the consequent growth of Capetian power in the 13th century proved to mark a "turning point in European history".Carpenter (2004), p.270.
John's first wife, Isabel, Countess of Gloucester, was released from imprisonment in 1214; she remarried twice, and died in 1217. John's second wife, Isabella of Angoulême, left England for Angoulême soon after the king's death; she became a powerful regional leader, but largely abandoned the children she had had by John.Vincent, p.206. John had five legitimate children, all by Isabella. His eldest son, Henry III, ruled as king for the majority of the 13th century. Richard became a noted European leader and ultimately the King of the Romans in the Holy Roman Empire.Carpenter (1996), p.223. Joan married Alexander II of Scotland to become his queen consort. Isabella married the Holy Roman Emperor Frederick II.Carpenter (2004), p.344. His youngest daughter, Eleanor, married William Marshal's son, also called William, and later the famous English rebel Simon de Montfort.Carpenter (2004), p.306. John had a number of illegitimate children by various mistresses, including nine sons – Richard, Oliver, John, Geoffrey, Henry, Osbert Gifford, Eudes, Bartholomew and probably Philip – and three daughters – Joan, Maud and probably Isabel.Richardson, p.9. Of these, Joan became the most famous, marrying Prince Llywelyn the Great of Wales.Carpenter (2004), p.328.
Historiography
thumb|alt=A medieval sketch of Matthew Paris, dressed as a monk and on his hands and knees.|Matthew Paris, one of the first historians of John's reign
Historical interpretations of John have been subject to considerable change over the years. Medieval chroniclers provided the first contemporary, or near contemporary, histories of John's reign. One group of chroniclers wrote early in John's life, or around the time of his accession, including Richard of Devizes, William of Newburgh, Roger of Hoveden and Ralph de Diceto.Gillingham (2007), p.2. These historians were generally unsympathetic to John's behaviour under Richard's rule, but slightly more positive towards the very earliest years of John's reign.Holt (1963), p.19, cited Gillingham (2007) p.4. Reliable accounts of the middle and later parts of John's reign are more limited, with Gervase of Canterbury and Ralph of Coggeshall writing the main accounts; neither of them was positive about John's performance as king.Warren, p.7; Gillingham (2007), p.15. Much of John's later, negative reputation was established by two chroniclers writing after the king's death, Roger of Wendover and Matthew Paris, the latter claiming that John attempted conversion to Islam in exchange for military aid from the Almohad ruler Muhammad al-Nasir – a story which is considered to be untrue by modern historians.Warren, pp.11, 14.
In the 16th century political and religious changes altered the attitude of historians towards John. Tudor historians were generally favourably inclined towards the king, focusing on John's opposition to the Papacy and his promotion of the special rights and prerogatives of a king. Revisionist histories written by John Foxe, William Tyndale and Robert Barnes portrayed John as an early Protestant hero, and John Foxe included the king in his Book of Martyrs.Bevington, p.432. John Speed's Historie of Great Britaine in 1632 praised John's "great renown" as a king; he blamed the bias of medieval chroniclers for the king's poor reputation.Gillingham (2007), p.4.
thumb|left|upright|alt=A photograph of the wood block print of the Book of Martyrs. The book's title is in the centre and various scenes from the book are depicted around it.|John Foxe's Book of Martyrs, officially titled Acts and Monuments, which took a positive view of John's reign
By the Victorian period in the 19th century, historians were more inclined to draw on the judgements of the chroniclers and to focus on John's moral personality. Kate Norgate, for example, argued that John's downfall had been due not to his failure in war or strategy, but to his "almost superhuman wickedness", whilst James Ramsay blamed John's family background and his cruel personality for his downfall.Norgate (1902), p.286; Ramsay, p.502. Historians in the "Whiggish" tradition, focusing on documents such as the Domesday Book and Magna Carta, trace a progressive and universalist course of political and economic development in England over the medieval period.Dyer, p.4; Coss, p.81. These historians were often inclined to see John's reign, and his signing of Magna Carta in particular, as a positive step in the constitutional development of England, despite the flaws of the king himself. Winston Churchill, for example, argued that "[w]hen the long tally is added, it will be seen that the British nation and the English-speaking world owe far more to the vices of John than to the labours of virtuous sovereigns".Churchill, p.190.
In the 1940s, new interpretations of John's reign began to emerge, based on research into the record evidence of his reign, such as pipe rolls, charters, court documents and similar primary records. Notably, an essay by Vivian Galbraith in 1945 proposed a "new approach" to understanding the ruler.Galbraith, pp.128–30, cited Gillingham (2007), p.1. The use of recorded evidence was combined with an increased scepticism about two of the most colourful chroniclers of John's reign, Roger of Wendover and Matthew Paris.Turner, pp.22–3. In many cases the detail provided by these chroniclers, both writing after John's death, was challenged by modern historians.Warren, pp.11–6. Interpretations of Magna Carta and the role of the rebel barons in 1215 have been significantly revised: although the charter's symbolic, constitutional value for later generations is unquestionable, in the context of John's reign most historians now consider it a failed peace agreement between "partisan" factions.Huscroft, p.174; Barlow, p.353. There has been increasing debate about the nature of John's Irish policies. Specialists in Irish medieval history, such as Sean Duffy, have challenged the conventional narrative established by Lewis Warren, suggesting that Ireland was less stable by 1216 than was previously supposed.Duffy, pp.221, 245.
Most historians today, including John's recent biographers Ralph Turner and Lewis Warren, argue that John was an unsuccessful monarch, but note that his failings were exaggerated by 12th- and 13th-century chroniclers. Jim Bradbury notes the current consensus that John was a "hard-working administrator, an able man, an able general", albeit, as Turner suggests, with "distasteful, even dangerous personality traits", including pettiness, spitefulness and cruelty.Bradbury (2007), p.353; Turner, p.23. John Gillingham, author of a major biography of Richard I, follows this line too, although he considers John a less effective general than do Turner or Warren, and describes him as "one of the worst kings ever to rule England".Gillingham (2001), p.125. Bradbury takes a moderate line, but suggests that in recent years modern historians have been overly lenient towards John's numerous faults.Bradbury (2007), p.361. Popular historian Frank McLynn maintains a counter-revisionist perspective on John, arguing that the king's modern reputation amongst historians is "bizarre", and that as a monarch John "fails almost all those [tests] that can be legitimately set".McLynn, pp.472–3.
Popular representations
thumb|alt=A photograph of the first page of Shakespeare's play "King John", with two columns of text below.|Shakespeare's play The Life and Death of King John
Popular representations of John first began to emerge during the Tudor period, mirroring the revisionist histories of the time. The anonymous play The Troublesome Reign of King John portrayed the king as a "proto-Protestant martyr", similar to that shown in John Bale's morality play Kynge Johan, in which John attempts to save England from the "evil agents of the Roman Church".Curren-Aquino (1989a), p.19.; Harris, p.91. By contrast, Shakespeare's King John, a relatively anti-Catholic play that draws on The Troublesome Reign for its source material, offers a more "balanced, dual view of a complex monarch as both a proto-Protestant victim of Rome's machinations and as a weak, selfishly motivated ruler".Curren-Aquino (1989a), p.19; McEachern, p.329; Bevington, p.454. Anthony Munday's play The Downfall and The Death of Robert Earl of Huntington portrays many of John's negative traits, but adopts a positive interpretation of the king's stand against the Roman Catholic Church, in line with the contemporary views of the Tudor monarchs.Potter, p.70. By the middle of the 17th century, plays such as Robert Davenport's King John and Matilda, although based largely on the earlier Elizabethan works, were transferring the role of Protestant champion to the barons and focusing more on the tyrannical aspects of John's behaviour.Maley, p.50.
Nineteenth-century fictional depictions of John were heavily influenced by Sir Walter Scott's historical romance, Ivanhoe, which presented "an almost totally unfavourable picture" of the king; the work drew on Victorian histories of the period and on Shakespeare's play.Tulloch, p.497. Scott's work influenced the late 19th-century children's writer Howard Pyle's book The Merry Adventures of Robin Hood, which in turn established John as the principal villain within the traditional Robin Hood narrative.D'Ammassa, p.94. During the 20th century, John was normally depicted in fictional books and films alongside Robin Hood. Sam De Grasse's role as John in the black-and-white 1922 film version shows John committing numerous atrocities and acts of torture.Aberth, p.166. Claude Rains played John in the 1938 colour version alongside Errol Flynn, starting a trend for films to depict John as an "effeminate ... arrogant and cowardly stay-at-home".Potter, p.210. The character of John serves either to highlight the virtues of King Richard or to contrast with the Sheriff of Nottingham, who is usually the "swashbuckling villain" opposing Robin. An extreme version of this trend can be seen in the Disney cartoon version, for example, which depicts John, voiced by Peter Ustinov, as a "cowardly, thumbsucking lion".Potter, p.218. Popular works that depict John beyond the Robin Hood legends, such as James Goldman's play and later film, The Lion in Winter, set in 1183, commonly present him as an "effete weakling", in this instance contrasted with the more masculine Henry II, or as a tyrant, as in A. A. Milne's poem for children, "King John's Christmas".Elliott, pp.109–10; Seel, p.7.
Ancestry
Notes
References
Bibliography
Aberth, John. (2003) A Knight at the Movies: Medieval History on Film. London: Routledge. ISBN 978-0-415-93886-0.
Barlow, Frank. (1999) The Feudal Kingdom of England, 1042–1216. Harlow, UK: Pearson Education. ISBN 0-582-38117-7.
Barrett, Nick. (2007) "The Revenues of King John and Philip Augustus Revisited," in Church (ed) 2007.
Bartlett, Robert. (2000) England Under the Norman and Angevin Kings: 1075–1225. Oxford: Clarendon Press. ISBN 0-19-822741-8.
Bevington, David. (2002) "Literature and the theatre," in Loewenstein and Mueller (eds) 2002.
Bolton, J. K. (2007) "English Economy in the Early Thirteenth Century," in Church (ed) 2007.
Bradbury, Jim. (1998) Philip Augustus, King of France 1180–1223. London: Longman. ISBN 978-0-582-06058-6.
Bradbury, Jim. (2007) "Philip Augustus and King John: Personality and History," in Church (ed) 2007.
Brown, Reginald Allen. (1989) Rochester Castle: Kent. London: English Heritage. ISBN 978-1-85074-129-9.
Carpenter, David. (1996) The Reign of Henry III. London: Hambledon Press. ISBN 978-1-85285-137-8.
Carpenter, David. (2004) Struggle for Mastery: The Penguin History of Britain 1066–1284. London: Penguin. ISBN 978-0-14-014824-4.
Church, Stephen D. (1999) The Household Knights of King John. Cambridge: Cambridge University Press. ISBN 978-0-521-55319-3.
Church, Stephen D. (ed) (2007) King John: New Interpretations. Woodbridge, UK: Boydell Press. ISBN 978-0-85115-947-8.
Churchill, Winston. (1958) A History of the English-Speaking Peoples, Volume 1. London: Cassell. OCLC 634802587.
Coss, Peter. (2002) "From Feudalism to Bastard Feudalism," in Fryde, Monnet and Oexle (eds) (2002).
Curren-Aquino, Deborah T. (1989a) "Introduction: King John Resurgent," in Curren-Aquino (ed) 1989b.
Curren-Aquino, Deborah T. (ed) (1989b) King John: New Perspectives. Cranbury, US: University of Delaware Press. ISBN 978-0-87413-337-0.
D'Ammassa, Don. (2009) Encyclopedia of Adventure Fiction: the Essential Reference to the Great Works and Writers of Adventure Fiction. New York: Facts on File. ISBN 978-0-8160-7573-7.
Danziger, Danny and John Gillingham. (2003) 1215: The Year of the Magna Carta. London: Coronet Books. ISBN 978-0-7432-5778-7.
Duffy, Sean. (2007) "John and Ireland: the Origins of England's Irish Problem," in Church (ed) 2007.
Duncan, A. A. M. (2007) "John King of England and the King of the Scots," in Church (ed) 2007.
Dyer, Christopher. (2009) Making a Living in the Middle Ages: The People of Britain, 850 – 1520. London: Yale University Press. ISBN 978-0-300-10191-1.
Elliott, Andrew B. R. (2011) Remaking the Middle Ages: The Methods of Cinema and History in Portraying the Medieval World. Jefferson, US: McFarland. ISBN 978-0-7864-4624-7.
Fryde, E. B., D. E. Greenway, S. Porter and I. Roy (eds) (1996) Handbook of British Chronology, third edition. Cambridge: Cambridge University Press. ISBN 0-521-56350-X.
Fryde, Natalie, Pierre Monnet and Oto Oexle. (eds) (2002) Die Gegenwart des Feudalismus. Göttingen, Germany: Vandenhoeck and Ruprecht. ISBN 978-3-525-35391-2.
Fryde, Natalie. (2007) "King John and the Empire," in Church (ed) 2007.
Galbraith, V. H. (1945) "Good and Bad Kings in English History," History 30, 119–32.
Gillingham, John. (1994) Richard Coeur de Lion: Kingship, Chivalry, and War in the Twelfth Century. London: Hambledon Press. ISBN 978-1-85285-084-5.
Gillingham, John. (2001) The Angevin Empire, 2nd edition. London, UK: Hodder Arnold. ISBN 0-340-74115-5.
Gillingham, John. (2007) "Historians without Hindsight: Coggshall, Diceto and Howden on the Early Years of John's Reign," in Church (ed) 2007.
Given-Wilson, Chris. (1996) An Illustrated History of Late Medieval England. Manchester: Manchester University Press. ISBN 0-7190-4152-X.
Harper-Bill, Christopher. (2007) "John and the Church of Rome," in Church (ed) 2007.
Harris, Jesse W. (1940) John Bale, a study in the minor literature of the Reformation. Urbana, US: Illinois Studies in Language and Literature.
Hodgett, Gerald. (2006) A Social and Economic History of Medieval Europe. Abingdon, UK: Routledge. ISBN 978-0-415-37707-2.
Holt, James Clarke. (1961) The Northerners: A Study in the Reign of King John. Oxford: Oxford University Press. OCLC 862444.
Holt, James Clarke. (1963) King John. London: Historical Association. OCLC 639752123.
Holt, James Clarke. (1984) "The Loss of Normandy and Royal Finance," in Holt and Gillingham (eds) 1984.
Holt, James Clarke and John Gillingham (eds) (1984) War and Government in the Middle Ages: Essays in Honour of J. O. Prestwich. Woodbridge, UK: Boydell Press. ISBN 978-0-389-20475-6.
Hunnisett, R. F. (1961) The Medieval Coroner. Cambridge: Cambridge University Press. OCLC 408381.
Huscroft, Richard. (2005) Ruling England, 1042–1217. Harlow, UK: Pearson. ISBN 0-582-84882-2.
Inwood, Stephen. (1998) A History of London. London: Macmillan. ISBN 978-0-7867-0613-6.
Johnson, Hugh. (1989) Vintage: The Story of Wine. New York: Simon and Schuster. ISBN 0-671-68702-6.
Jordan, William Chester. (1991) "Isabelle d'Angoulême, by the Grace of God, Queen," in Revue belge de philologie et histoire, 69, 821–852.
Lawler, John and Gail Gates Lawler. (2000) A Short Historical Introduction to the Law of Real Property. Washington DC: Beard Books. ISBN 978-1-58798-032-9.
Lloyd, Alan. (1972) The Maligned Monarch: a Life of King John of England. Garden City, US: Doubleday. OCLC 482542.
Loewenstein, David and Janel M. Mueller. (eds) (2002) The Cambridge History of Early Modern English Literature. Cambridge: Cambridge University Press. ISBN 978-0-521-63156-3.
Maley, Willy. (2010) "'And bloody England into England gone': Empire, Monarchy, and Nation in King John," in Maley and Tudeau-Clayton (eds) 2010.
Maley, Willy and Margaret Tudeau-Clayton. (eds) (2010) This England, That Shakespeare: New Angles on Englishness and the Bard. Farnham, UK: Ashgate Publishing. ISBN 978-0-7546-6602-8.
Mason, Emma. (2008) King Rufus: The Life and Murder of William II of England. Stroud, UK: The History Press. ISBN 978-0-7524-4635-6.
McEachern, Claire. (2002) "Literature and national identity," in Loewenstein and Mueller (eds) 2002.
McLynn, Frank. (2007) Lionheart and Lackland: King Richard, King John and the Wars of Conquest. London: Vintage Books. ISBN 978-0-7126-9417-9.
Moss, V. D. (2007) "The Norman Exchequer Rolls of King John," in Church (ed) 2007.
Norgate, Kate. (1887) England Under the Angevin Kings, vol. 2. London: Macmillan. OCLC 373944.
Norgate, Kate. (1902) John Lackland. London: Macmillan. OCLC 1374257.
Poole, Stephen. (1993) From Domesday Book to Magna Carta 1087–1216. Oxford: Oxford University Press. ISBN 0-19-285287-6.
Potter, Lois. (1998) Playing Robin Hood: the Legend as Performance in Five Centuries. Cranbury, US: University of Delaware Press. ISBN 978-0-87413-663-0.
Power, Daniel. (2007) "King John and the Norman Aristocracy," in Church (ed) 2007.
Ramsay, James Henry. (1903) The Angevin Empire. London: Sonnenschein. OCLC 2919309.
Richardson, Douglas. (2004) Plantagenet Ancestry: a Study in Colonial and Medieval Families. Salt Lake City: Genealogical Publishing. ISBN 978-0-8063-1750-2.
Rowlands, Ifor W. (2007) "King John and Wales," in Church (ed) 2007.
Scott, Walter. (1998) Ivanhoe. Edinburgh: Edinburgh University Press. ISBN 978-0-7486-0573-6.
Seel, Graham E. (2012) King John: An Underrated King. London: Anthem Press. ISBN 978-0-8572-8518-8.
Stenton, Doris Mary. (1976) English Society in the Early Middle Ages (1066–1307). Harmondsworth, UK: Penguin. ISBN 0-14-020252-8.
Tulloch, Graham. (1988) "Historical Notes," in Scott (1998).
Turner, Ralph V. (2009) King John: England's Evil King? Stroud, UK: History Press. ISBN 978-0-7524-4850-3.
Vincent, Nicholas. (2007) "Isabella of Angoulême: John's Jezebel," in Church (ed) 2007.
Warren, W. Lewis. (1991) King John. London: Methuen. ISBN 0-413-45520-3.
Category:1166 births
Category:1216 deaths
Category:Deaths from dysentery
Category:Dukes of Normandy
Category:Earls of Gloucester
Category:Earls of Cornwall
Category:English monarchs
Category:English people of French descent
Category:English people of Scottish descent
Category:High Sheriffs of Somerset
Category:House of Plantagenet
Category:People excommunicated by the Roman Catholic Church
Category:People from Oxford
Category:Cornish people
Category:Robin Hood characters
Category:12th-century monarchs in Europe
Category:13th-century monarchs in Europe
Category:People of the Barons' Wars
Category:Burials at Worcester Cathedral
Category:House of Anjou
Category:13th-century peers of France | 16,550 | 2017-01 |
Time | thumb|150px|right|The flow of sand in an hourglass can be used to measure the passage of time. It also concretely represents the present as being between the past and the future.
Time is the indefinite continued progress of existence and events that occur in apparently irreversible succession from the past through the present to the future. Time is a component quantity of various measurements used to sequence events, to compare the duration of events or the intervals between them, and to quantify rates of change of quantities in material reality or in the conscious experience.Merriam-Webster Dictionary the measured or measurable period during which an action, process, or condition exists or continues : duration; a nonspatial continuum which is measured in terms of events that succeed one another from past through present to futureCompact Oxford English Dictionary A limited stretch or space of continued existence, as the interval between two successive events or acts, or the period through which an action, condition, or state continues. (1971) Time is often referred to as the fourth dimension, along with the three spatial dimensions."Newton did for time what the Greek geometers did for space, idealized it into an exactly measurable dimension." About Time: Einstein's Unfinished Revolution, Paul Davies, p. 31, Simon & Schuster, 1996, ISBN 978-0684818221
Time has long been an important subject of study in religion, philosophy, and science, but defining it in a manner applicable to all fields without circularity has consistently eluded scholars.Adam Frank, Cosmology and Culture at the Twilight of the Big Bang, "the time we imagined from the cosmos and the time we imagined into the human experience turn out to be woven so tightly together that we have lost the ability to see each of them for what it is." p. xv, Free Press, 2011, ISBN 978-1439169599St. Augustine, Confessions, Simon & Brown, 2012, ISBN 978-1613823262
Nevertheless, diverse fields such as business, industry, sports, the sciences, and the performing arts all incorporate some notion of time into their respective measuring systems.
Two contrasting viewpoints on time divide prominent philosophers.
One view is that time is part of the fundamental structure of the universe—a dimension independent of events, in which events occur in sequence.
Isaac Newton subscribed to this realist view, and hence it is sometimes referred to as Newtonian time.
The opposing view is that time does not refer to any kind of "container" that events and objects "move through", nor to any entity that "flows", but that it is instead part of a fundamental intellectual structure (together with space and number) within which humans sequence and compare events. This second view, in the tradition of Gottfried Leibniz
and Immanuel Kant,
holds that time is neither an event nor a thing, and thus is not itself measurable nor can it be travelled.
Time in physics is unambiguously operationally defined as "what a clock reads". Time is one of the seven fundamental physical quantities in both the International System of Units and International System of Quantities. Time is used to define other quantities—such as velocity—so defining time in terms of such quantities would result in circularity of definition.Duff, Okun, Veneziano, ibid. p. 3. "There is no well established terminology for the fundamental constants of Nature. ... The absence of accurately defined terms or the uses (i.e., actually misuses) of ill-defined terms lead to confusion and proliferation of wrong statements."
An operational definition of time, wherein one says that observing a certain number of repetitions of one or another standard cyclical event (such as the passage of a free-swinging pendulum) constitutes one standard unit such as the second, is highly useful in the conduct of both advanced experiments and everyday affairs of life. The operational definition leaves aside the question whether there is something called time, apart from the counting activity just mentioned, that flows and that can be measured. Investigations of a single continuum called spacetime bring questions about space into questions about time, questions that have their roots in the works of early students of natural philosophy.
Furthermore, it may be that there is a subjective component to time, but whether or not time itself is "felt", as a sensation, or is a judgment, is a matter of debate.
Lehar, Steve. (2000). The Function of Conscious Experience: An Analogical Paradigm of Perception and Behavior, Consciousness and Cognition.
Temporal measurement has occupied scientists and technologists, and was a prime motivation in navigation and astronomy. Periodic events and periodic motion have long served as standards for units of time. Examples include the apparent motion of the sun across the sky, the phases of the moon, the swing of a pendulum, and the beat of a heart. Currently, the international unit of time, the second, is defined by measuring the electronic transition frequency of caesium atoms (see below). Time is also of significant social importance, having economic value ("time is money") as well as personal value, due to an awareness of the limited time in each day and in human life spans.
Temporal measurement and history
Generally speaking, methods of temporal measurement, or chronometry, take two distinct forms: the calendar, a mathematical tool for organizing intervals of time,
and the clock, a physical mechanism that counts the passage of time. In day-to-day life, the clock is consulted for periods less than a day whereas the calendar is consulted for periods longer than a day. Increasingly, personal electronic devices display both calendars and clocks simultaneously. The number (as on a clock dial or calendar) that marks the occurrence of a specified event as to hour or date is obtained by counting from a fiducial epoch—a central reference point.
History of the calendar
Artifacts from the Paleolithic suggest that the moon was used to reckon time as early as 6,000 years ago.
Lunar calendars were among the first to appear, with years of either 12 or 13 lunar months (354 or 384 days, respectively). Without intercalation to add days or months to some years, seasons quickly drift in a calendar based solely on twelve lunar months. Lunisolar calendars have a thirteenth month added to some years to make up for the difference between a full year (now known to be about 365.24 days) and a year of just twelve lunar months. The numbers twelve and thirteen came to feature prominently in many cultures, at least partly due to this relationship of months to years. Other early forms of calendars originated in Mesoamerica, particularly in ancient Mayan civilization. These calendars were religiously and astronomically based, with 18 months in a year and 20 days in a month.Van Stone, Mark. "The Maya Long Count Calendar: An Introduction." Archaeoastronomy 24.(2011): 8-11. Academic Search Complete. Web. 20 Feb. 2016.
The reforms of Julius Caesar in 45 BC put the Roman world on a solar calendar. This Julian calendar was faulty in that its intercalation still allowed the astronomical solstices and equinoxes to advance against it by about 11 minutes per year. Pope Gregory XIII introduced a correction in 1582; the Gregorian calendar was only slowly adopted by different nations over a period of centuries, but it is now the most commonly used calendar around the world, by far.
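The size of the error corrected by the Gregorian reform can be checked from the year lengths involved; the figures below are a worked illustration rather than part of the cited sources:
<math>365.25 - 365.2422 \approx 0.0078 \text{ days per year} \approx 11.2 \text{ minutes per year}, \qquad 0.0078 \times 400 \approx 3.1 \text{ days per 400 years}.</math>
Dropping three leap days every 400 years (century years not divisible by 400) gives the Gregorian average of <math>365 + \tfrac{97}{400} = 365.2425</math> days.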
During the French Revolution, a new clock and calendar were invented in an attempt to de-Christianize time and create a more rational system to replace the Gregorian calendar. The French Republican Calendar's days consisted of ten hours, each of a hundred minutes of a hundred seconds, which marked a deviation from the duodecimal (base-12) system used in many other devices by many cultures. The system was abolished in 1806."French Republican Calendar | Chronology." Encyclopedia Britannica Online. Encyclopedia Britannica, n.d. Web. 21 Feb. 2016.
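As a worked comparison (an illustration, not drawn from the cited source), the decimal day can be related to the conventional day as follows:
<math>10 \times 100 \times 100 = 100{,}000 \text{ decimal seconds per day}, \qquad \frac{86{,}400}{100{,}000} = 0.864 \text{ conventional seconds per decimal second},</math>
so each decimal hour lasted 2.4 conventional hours.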
History of time measurement devices
thumb|right|Horizontal sundial in Taganrog
A large variety of devices has been invented to measure time. The study of these devices is called horology.
An Egyptian device that dates to c.1500 BC, similar in shape to a bent T-square, measured the passage of time from the shadow cast by its crossbar on a nonlinear rule. The T was oriented eastward in the mornings. At noon, the device was turned around so that it could cast its shadow in the evening direction.Barnett, Jo Ellen Time's Pendulum: The Quest to Capture Time—from Sundials to Atomic Clocks Plenum, 1998 ISBN 0-306-45787-3 p.28
A sundial uses a gnomon to cast a shadow on a set of markings calibrated to the hour. The position of the shadow marks the hour in local time. The idea to separate the day into smaller parts is credited to the Egyptians because of their sundials, which operated on a duodecimal system. The importance of the number 12 is due to the number of lunar cycles in a year and the number of stars used to count the passage of night.Lombardi, Michael A. "Why Is a Minute Divided into 60 Seconds, an Hour into 60 Minutes, Yet There Are Only 24 Hours in a Day?" Scientific American. Springer Nature, 5 Mar. 2007. Web. 21 Feb. 2016.
The most precise timekeeping device of the ancient world was the water clock, or clepsydra, one of which was found in the tomb of Egyptian pharaoh Amenhotep I (1525–1504 BC). They could be used to measure the hours even at night, but required manual upkeep to replenish the flow of water. The Ancient Greeks and the people from Chaldea (southeastern Mesopotamia) regularly maintained timekeeping records as an essential part of their astronomical observations. Arab inventors and engineers in particular made improvements on the use of water clocks up to the Middle Ages.Barnett, ibid, p.37
In the 11th century, Chinese inventors and engineers invented the first mechanical clocks driven by an escapement mechanism.
thumb|left|A contemporary quartz watch, 2007
The hourglass uses the flow of sand to measure the flow of time. Hourglasses were used in navigation; Ferdinand Magellan used 18 glasses on each ship during his circumnavigation of the globe (1522).Laurence Bergreen, Over the Edge of the World: Magellan's Terrifying Circumnavigation of the Globe, HarperCollins Publishers, 2003, hardcover 480 pages, ISBN 0-06-621173-5
Incense sticks and candles were, and are, commonly used to measure time in temples and churches across the globe. Waterclocks, and later, mechanical clocks, were used to mark the events of the abbeys and monasteries of the Middle Ages. Richard of Wallingford (1292–1336), abbot of St. Alban's abbey, famously built a mechanical clock as an astronomical orrery about 1330.North, J. (2004) God's Clockmaker: Richard of Wallingford and the Invention of Time. Oxbow Books. ISBN 1-85285-451-0Watson, E (1979) "The St Albans Clock of Richard of Wallingford". Antiquarian Horology 372–384.
Great advances in accurate time-keeping were made by Galileo Galilei and especially Christiaan Huygens with the invention of pendulum driven clocks along with the invention of the minute hand by Jost Burgi."History of Clocks." About.com Inventors. About.com, n.d. Web. 21 Feb. 2016.
The English word clock probably comes from the Middle Dutch word klocke which, in turn, derives from the medieval Latin word clocca, which ultimately derives from Celtic and is cognate with French, Latin, and German words that mean bell. The passage of the hours at sea were marked by bells, and denoted the time (see ship's bell). The hours were marked by bells in abbeys as well as at sea.
thumb|Chip-scale atomic clocks, such as this one unveiled in 2004, are expected to greatly improve GPS location.
Clocks can range from watches, to more exotic varieties such as the Clock of the Long Now. They can be driven by a variety of means, including gravity, springs, and various forms of electrical power, and regulated by a variety of means such as a pendulum.
Alarm clocks first appeared in Ancient Greece around 250 B.C. with a water clock that would set off a whistle. This idea was later mechanized by Levi Hutchins and Seth E. Thomas.
A chronometer is a portable timekeeper that meets certain precision standards. Initially, the term was used to refer to the marine chronometer, a timepiece used to determine longitude by means of celestial navigation, a precision firstly achieved by John Harrison. More recently, the term has also been applied to the chronometer watch, a watch that meets precision standards set by the Swiss agency COSC.
The most accurate timekeeping devices are atomic clocks, which are accurate to seconds in many millions of years,
and are used to calibrate other clocks and timekeeping instruments.
Atomic clocks use the frequency of electronic transitions in certain atoms to measure the second. One of the most commonly used atoms is caesium; most modern atomic clocks probe caesium with microwaves to determine the frequency of these electron vibrations. Since 1967, the International System of Units has based its unit of time, the second, on the properties of caesium atoms. SI defines the second as 9,192,631,770 cycles of the radiation that corresponds to the transition between two electron spin energy levels of the ground state of the 133Cs atom.
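In other words, a caesium clock measures elapsed time by counting cycles of this radiation. As an illustrative formula (the symbols N and ν<sub>Cs</sub> are introduced here, not taken from the SI text), if N cycles of the caesium transition frequency have been counted, the elapsed time is
<math>\Delta t = \frac{N}{\nu_{\text{Cs}}}, \qquad \nu_{\text{Cs}} = 9{,}192{,}631{,}770 \text{ Hz}, \qquad N = 9{,}192{,}631{,}770 \;\Rightarrow\; \Delta t = 1 \text{ s}.</math>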
Today, the Global Positioning System in coordination with the Network Time Protocol can be used to synchronize timekeeping systems across the globe.
In medieval philosophical writings, the atom was a unit of time referred to as the smallest possible division of time. The earliest known occurrence in English is in Byrhtferth's Enchiridion (a science text) of 1010–1012, where it was defined as 1/564 of a momentum (1½ minutes),"atom", Oxford English Dictionary, Draft Revision September 2008 (contains relevant citations from Byrhtferth's Enchiridion) and thus equal to 15/94 of a second. It was used in the computus, the process of calculating the date of Easter.
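The stated equivalence can be verified by a short calculation (a worked illustration, not part of Byrhtferth's text):
<math>1 \text{ momentum} = 1\tfrac{1}{2} \text{ minutes} = 90 \text{ s}, \qquad 1 \text{ atom} = \frac{90}{564} \text{ s} = \frac{15}{94} \text{ s} \approx 0.16 \text{ s}.</math>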
The smallest time interval uncertainty in direct measurements is on the order of 12 attoseconds (1.2 × 10−17 seconds), about 3.7 × 1026 Planck times.
List of units
Units of time (unit; length, duration or size; notes):
instant: varies; loosely speaking, zero time (colloquially the term may be used in other ways).
Planck time unit: 5.39 x 10^−44 s; the duration light takes to travel one Planck length, theorized to be the smallest duration measurement that will ever be possible, roughly 10^−43 seconds.
yoctosecond: 10^−24 s.
jiffy (quantum physics): about 3 × 10^−24 s; the duration light takes to travel one fermi (10^−15 m, about the size of a nucleon) in a vacuum.
zeptosecond: 10^−21 s.
attosecond: 10^−18 s; shortest duration now measurable.
femtosecond: 10^−15 s; pulse duration on the fastest lasers.
picosecond: 10^−12 s.
nanosecond: 10^−9 s; duration for molecules to fluoresce.
shake: 10^−8 s (10 nanoseconds); also a casual term for a short duration.
microsecond: 10^−6 s.
millisecond: 0.001 s; shortest duration unit used on stopwatches.
centisecond: 0.01 s; used on some stopwatches.
jiffy (electronics): ~1/50 s to 1/60 s; used to measure the duration between alternating power cycles. Also a casual term for a short duration.
decisecond: 0.1 s; used on some stopwatches.
second: 1 s; SI base unit.
decasecond: 10 seconds.
half a minute: 30 seconds.
minute: 60 seconds.
moment (historical): 1/40th of an hour (90 seconds); used by Medieval Western European computists.
hectosecond: 100 seconds (1 minute and 40 seconds).
5 minutes: 300 seconds; the numbers on an analog clock are 5 minutes apart.
centiday (ke): 864 seconds; traditional Chinese unit of decimal time, usually 1/100 of a day, marked by the 100 ke (scales) on the sundial and the ruler of the water clock, i.e. 14 minutes and 24 seconds (nearly 1/4 of an hour, similar to the English word "quarter" as in "a quarter past six", i.e. 6:15).
kilosecond: 1,000 seconds (16 minutes and 40 seconds).
point: 24 minutes; traditional Chinese time unit, usually 1/60 of a day.
hour: 60 minutes.
dual hour (shi): 2 hours; traditional Chinese time unit, 1/12 of a day, marked by the 12 shi on the sundial.
deciday (geng): 2.4 hours; traditional Chinese unit of decimal time, 1/10 of a day.
day: 24 hours; longest unit used on stopwatches and countdowns.
week: 7 days; also called a sennight.
megasecond: 1,000,000 seconds (about 11.6 days).
fortnight: 14 days (2 weeks; more common in Great Britain).
lunar month: 27.2–29.5 days; various definitions of the lunar month exist.
February: 28–29 days; this month gets an extra day in a leap year.
common month: 30–31 days; often 30 days for financial and other calculations.
quarter and season: 3 months; the duration of any of the four calendar seasons: winter, spring, summer and autumn.
year: 12 months.
common year: 365 days (52 weeks + 1 day).
tropical year: 365.24219 days (average).
Gregorian year: 365.2425 days (average).
Julian year: 365.25 days.
sidereal year: 365.256363004 days.
leap year: 366 days (52 weeks + 2 days).
biennium: 2 years; a unit commonly used by legislatures.
triennium: 3 years.
Olympiad: 4-year cycle.
lustrum: 5 years.
decade: 10 years.
Indiction: 15-year cycle.
generation: varies; about 17–36 years for humans, but some are more extreme.
gigasecond: 1,000,000,000 seconds (about 31.7 years).
jubilee: 50 years.
lifespan: 85 or 82 years; how long (on average) people live.
century: 100 years.
millennium: 1,000 years; also called a "kiloannum".
terasecond: 10^12 seconds (about 31,700 years).
megaannum: 1,000,000 years (1 million years).
age: varies; on the geological timescale, some millions of years.
epoch: varies; on the geological timescale, tens of millions of years.
petasecond: 10^15 seconds (about 31.7 million years).
era: varies; on the geological timescale, several hundred million years.
galactic year: approximately 230 million years; the duration it takes the Sun to orbit the center of the Milky Way galaxy once.
eon: varies; on the geological timescale, 500 million years or more. Also "an indefinite and very long period of time".
gigaannum: 1,000,000,000 years; a billion years (10^9).
the Sun's lifespan: 12,000,000,000 years; how long the Sun will live.
exasecond: 10^18 seconds; roughly 31.7 x 10^9 years, more than twice the age of the universe (on current estimates).
teraannum: 1,000,000,000,000 years; a trillion years (10^12).
zettasecond: 10^21 seconds (about 31.7 x 10^12 years).
petaannum: 1,000,000,000,000,000 years; a quadrillion years (10^15).
yottasecond: 10^24 seconds (about 31.7 x 10^15 years).
cosmological decade: varies; 10 times the length of the previous cosmological decade, with CD 1 beginning either 10 seconds or 10 years after the Big Bang, depending on the definition.
Definitions and standards
Originally the second was defined as 1/86,400 of the mean solar day, the year-average of the solar day, which is the time interval between two successive noons, i.e., between two successive passages of the sun across the meridian. In 1874 the British Association for the Advancement of Science introduced the CGS system (centimetre, gramme, second), combining fundamental units of length, mass and time. This second is "elastic", because tidal friction is slowing the earth's rotation rate. For use in calculating ephemerides of celestial motion, therefore, in 1952 astronomers introduced the "ephemeris second", defined as the fraction 1/31,556,925.9747 of the tropical year for 1900 January 0 at 12 hours ephemeris time.Whitaker's Almanac 2013 (ed. Ruth Northey), London 2012, p 1131, ISBN 978-1-4081-7207-0.
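The numbers behind these definitions follow from simple arithmetic (shown here as an illustration):
<math>24 \times 60 \times 60 = 86{,}400 \text{ s per mean solar day}, \qquad 365.24219 \text{ days} \times 86{,}400 \text{ s/day} \approx 3.156 \times 10^{7} \text{ s},</math>
which is why the ephemeris second was defined as roughly one part in 31.6 million of the tropical year rather than as a fraction of the variable solar day.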
The CGS system has been superseded by the Système international. The SI base unit for time is the SI second. The International System of Quantities, which incorporates the SI, also defines larger units of time equal to fixed integer multiples of one second (1 s), such as the minute, hour and day. These are not part of the SI, but may be used alongside the SI. Other units of time such as the month and the year are not equal to fixed multiples of 1 s, and instead exhibit significant variations in duration.
The official SI definition of the second is as follows: "The second is the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom."
At its 1997 meeting, the CIPM affirmed that this definition refers to a caesium atom in its ground state at a temperature of 0 K.
The current definition of the second, coupled with the current definition of the metre, is based on the special theory of relativity, which affirms our spacetime to be a Minkowski space. The definition of the second in mean solar time, however, is unchanged.
World time
While the concept of a single worldwide universal time-scale may have been conceived of many centuries ago, in practice the technical ability to create and maintain such a time-scale did not become available until the mid-19th century. The timescale adopted was Greenwich Mean Time, created in 1847. It has since been replaced for most purposes by Coordinated Universal Time (UTC).
History of the development of UTC
With the advent of the industrial revolution, a greater understanding and agreement on the nature of time itself became increasingly necessary and helpful. In 1847 in Britain, Greenwich Mean Time (GMT) was first created for use by the British railways, the British navy, and the British shipping industry. Using telescopes, GMT was calibrated to the mean solar time at the Royal Observatory, Greenwich in the UK.
As international commerce continued to increase throughout Europe, in order to achieve a more efficiently functioning modern society, an agreed upon, and highly accurate international standard of time measurement became necessary. In order to find or determine such a time-standard, three steps had to be followed:
An internationally agreed upon time-standard had to be defined.
This new time-standard then had to be consistently and accurately measured.
The new time-standard then had to be freely shared and distributed around the world.
The development of what is now known as UTC began as a collaboration of 41 delegates from 25 nations, officially agreed to and signed at the International Meridian Conference in Washington D.C. in 1884.
Among the nations represented at the conference, the advanced time-keeping technologies that had already come into use in Britain became fundamental components of the agreed method of arriving at a universal international time.
In 1928 the modern-day descendant of GMT (though slightly less accurate than UTC) was defined by the International Astronomical Union as Universal Time (UT). Even to the present day, UT is still based on an international telescopic system. Observations at the Greenwich Observatory itself ceased in 1954, though the location is still used as the basis for the coordinate system. Because the rotational period of Earth is not perfectly constant, the duration of a second would vary if calibrated to a telescope-based standard like GMT or UT, in which a second was defined as a fraction of a day or year. The terms "GMT" and "Greenwich Mean Time" are sometimes used informally to refer to UT; however, GMT and UTC are not the same thing, and the most accurate description of the most commonly used international time standard is now UTC rather than "GMT".
For the better part of the first century following the International Meridian Conference, until 1960, the methods and definitions of time-keeping that had been laid out at the conference proved adequate to meet the time-tracking needs of society. With the advent of the "electronic revolution" in the latter half of the 20th century, however, the technologies that had been available at the time of the Convention of the Metre proved to be in need of further refinement in order to meet the ever-increasing precision that the "electronic revolution" had begun to require. Therefore, in 1960, owing to irregularities that had been found in the length of the solar year over time, it was agreed that the solar year 1900 would thenceforth serve as the "reference year" for all future computations and definitions of the exact length of a year and, by inference, of a second of time. This new definition of the second, based on the reference year 1900, came to be known as the ephemeris second.
Once a more exact and measurable definition of the second, the ephemeris second, had been agreed upon, the new and more easily measured technology of the atomic clock led, in 1967, to the adoption of the SI second, based directly on the atomic-clock equivalent of the ephemeris second.
The SI second (Système International second) has stood since 1967 as the internationally recognized fundamental building block for the computation and measurement of time, and is based directly on the atomic-clock measurement of the frequency of oscillation of caesium atoms. Atomic clocks do not measure nuclear decay rates, which is a common misconception, but rather measure a certain natural vibrational frequency of caesium-133.Cesium Atoms at Work USNO, downloaded June 28, 2016.
Current application of UTC
The most commonly used standard of time is Coordinated Universal Time (UTC). This time-standard is based on the SI second, which was first defined in 1967 and relies on the use of atomic clocks. Other, less widely used but closely related time-standards include International Atomic Time (TAI), Terrestrial Time, and Barycentric Dynamical Time.
Between 1967 and 1971, UTC was periodically adjusted by fractional "leap seconds" in order to correct for various irregularities that were subsequently discovered. Since 1 January 1972, UTC has been offset from International Atomic Time (TAI) by a whole number of seconds, changing only when a leap second is added to keep clock time synchronized with the rotation of the Earth.
The Global Positioning System also broadcasts a very precise time signal worldwide, along with instructions for converting GPS time to UTC. GPS time is based on, and regularly synchronized with, UTC.
Earth is split up into a number of time zones. Most time zones are exactly one hour apart, and by convention compute their local time as an offset from UTC. In many locations these offsets vary twice yearly due to daylight saving time transitions. While a few governments still legally define their national times as being based upon GMT, most major governments have now redefined their national times as being based directly upon UTC.
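The offset arithmetic described above is exactly what standard time-zone libraries carry out. The following minimal Python sketch (the zone names and the chosen instant are illustrative, not prescribed by any standard mentioned here; it assumes Python 3.9 or later for the zoneinfo module) renders a single UTC instant in two IANA time zones:
 # Minimal sketch: express one UTC instant as local times in two IANA zones.
 from datetime import datetime, timezone
 from zoneinfo import ZoneInfo  # standard library since Python 3.9
 instant_utc = datetime(2017, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
 for zone in ("Europe/London", "America/New_York"):
     local = instant_utc.astimezone(ZoneInfo(zone))
     # utcoffset() reports the zone's offset from UTC, including any DST shift.
     print(zone, local.isoformat(), local.utcoffset())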
Time conversions
These conversions are accurate at the millisecond level for time systems involving earth rotation (UT1 & TT). Conversions between atomic time systems (TAI, GPS, and UTC) are accurate at the microsecond level.
Conversions between time standards (see the definitions of LS and DUT1 below):
UT1 (Mean Solar Time): UTC = UT1 - DUT1; TT = UT1 + 32.184 s + LS - DUT1; TAI = UT1 - DUT1 + LS; GPS = UT1 - DUT1 + LS - 19 s.
UTC (Civil Time): UT1 = UTC + DUT1; TT = UTC + 32.184 s + LS; TAI = UTC + LS; GPS = UTC + LS - 19 s.
TT (Terrestrial, or Ephemeris, Time): UT1 = TT - 32.184 s - LS + DUT1; UTC = TT - 32.184 s - LS; TAI = TT - 32.184 s; GPS = TT - 51.184 s.
TAI (Atomic Time): UT1 = TAI + DUT1 - LS; UTC = TAI - LS; TT = TAI + 32.184 s; GPS = TAI - 19 s.
GPS (GPS Time): UT1 = GPS + DUT1 - LS + 19 s; UTC = GPS - LS + 19 s; TT = GPS + 51.184 s; TAI = GPS + 19 s.
Definitions:
LS = TAI - UTC = Leap Seconds from http://maia.usno.navy.mil/ser7/tai-utc.dat
DUT1 = UT1 - UTC from http://maia.usno.navy.mil/ser7/ser7.dat or http://maia.usno.navy.mil/search/search.html
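Because the relations in the table reduce to adding or subtracting a few constants, they can be written directly in code. The short Python sketch below is an illustration, not an authoritative implementation: the function names are invented here, and the caller must supply the current leap-second count LS (and, for UT1, the tabulated DUT1 value from the files listed above).
 # Sketch of the conversions tabulated above; all values are in seconds.
 # ls   = TAI - UTC, the integer leap-second count (37 s as of January 2017).
 # dut1 = UT1 - UTC, a small tabulated correction (|dut1| < 0.9 s).
 def utc_to_tai(utc, ls):
     return utc + ls            # TAI = UTC + LS
 def tai_to_gps(tai):
     return tai - 19.0          # GPS = TAI - 19 s
 def tai_to_tt(tai):
     return tai + 32.184        # TT = TAI + 32.184 s
 def utc_to_ut1(utc, dut1):
     return utc + dut1          # UT1 = UTC + DUT1
 # Example: a UTC instant expressed as seconds on some common scale.
 utc = 0.0
 tai = utc_to_tai(utc, ls=37)
 print(tai_to_gps(tai), tai_to_tt(tai))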
Sidereal time
Sidereal time is the measurement of time relative to a distant star (instead of solar time, which is relative to the sun). It is used in astronomy to predict when a star will be overhead. Because of the Earth's orbit around the sun, a sidereal day is about 4 minutes (1/366th) shorter than a solar day.
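The quoted difference follows from the fact that the Earth makes one extra rotation relative to the stars each year (a worked illustration):
<math>86{,}400 \text{ s} \times \frac{365.2422}{366.2422} \approx 86{,}164 \text{ s} \approx 23 \text{ h } 56 \text{ min } 4 \text{ s},</math>
about 3 minutes 56 seconds shorter than the 24-hour solar day.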
Chronology
Another form of time measurement consists of studying the past. Events in the past can be ordered in a sequence (creating a chronology), and can be put into chronological groups (periodization). One of the most important systems of periodization is the geologic time scale, which is a system of periodizing the events that shaped the Earth and its life. Chronology, periodization, and interpretation of the past are together known as the study of history.
Time-like concepts: terminology
The term "time" is generally used for many close but different concepts, including:
instantIEC 60050-113:2011, item 113-01-08 as an object—one point on the time axes. Being an object, it has no value;
time intervalIEC 60050-113:2011, item 113-01-010; ISO 80000-3:2006, item 3-7 as an object—part of the time axes limited by two instants. Being an object, it has no value;
dateIEC 60050-113:2011, item 113-01-012: "mark attributed to an instant by means of a specified time scale" as a quantity characterizing an instant. As a quantity, it has a value which may be expressed in a variety of ways, for example "2014-04-26T09:42:36,75" in ISO standard format, or more colloquially such as "today, 9:42 a.m.";
durationIEC 60050-113:2011, item 113-01-013: "range of a time interval (113-01-10)" as a quantity characterizing a time interval.ISO 80000-3:2006, item 3-7 As a quantity, it has a value, such as a number of minutes, or may be described in terms of the quantities (such as times and dates) of its beginning and end.
Religion
thumb|left|110px|Hindu units of time shown logarithmically
Linear and cyclical time
Ancient cultures such as the Incan, Mayan, Hopi, and other Native American tribes, as well as the Babylonians, ancient Greeks, Hindus, Buddhists, Jains, and others, have a concept of a wheel of time: they regard time as cyclical and quantic, consisting of repeating ages that happen to every being of the Universe between birth and extinction.
In general, the Islamic and Judeo-Christian world-view regards time as linear
and directional,
beginning with the act of creation by God. The traditional Christian view sees time ending, teleologically,
with the eschatological end of the present order of things, the "end time".
In the Old Testament book Ecclesiastes, traditionally ascribed to Solomon (970–928 BC), time (as the Hebrew word עדן, זמן `iddan(time) zĕman(season) is often translated) was traditionally regarded as a medium for the passage of predestined events. (Another word, زمان" זמן" zamān, meant time fit for an event, and is used as the modern Arabic, Persian, and Hebrew equivalent to the English word "time".)
Time in Greek mythology
The Greek language denotes two distinct principles, Chronos and Kairos. The former refers to numeric, or chronological, time. The latter, literally "the right or opportune moment", relates specifically to metaphysical or Divine time. In theology, Kairos is qualitative, as opposed to quantitative.
In Greek mythology, Chronos (Ancient Greek: Χρόνος) is identified as the Personification of Time. His name in Greek means "time" and is alternatively spelled Chronus (Latin spelling) or Khronos. Chronos is usually portrayed as an old, wise man with a long, gray beard, such as "Father Time". Some English words whose etymological root is khronos/chronos include chronology, chronometer, chronic, anachronism, synchronize, and chronicle.
Time in Kabbalah
According to Kabbalists, "time" is a paradox and an illusion. Both the future and the past are recognized to be combined and simultaneously present.
Philosophy
Two distinct viewpoints on time divide many prominent philosophers.
One view is that time is part of the fundamental structure of the universe, a dimension in which events occur in sequence. Sir Isaac Newton subscribed to this realist view, and hence it is sometimes referred to as Newtonian time.
An opposing view is that time does not refer to any kind of actually existing dimension that events and objects "move through", nor to any entity that "flows", but that it is instead an intellectual concept (together with space and number) that enables humans to sequence and compare events.
This second view, in the tradition of Gottfried Leibniz
and Immanuel Kant,
holds that space and time "do not exist in and of themselves, but ... are the product of the way we represent things", because we can know objects only as they appear to us.
The Vedas, the earliest texts on Indian philosophy and Hindu philosophy dating back to the late 2nd millennium BC, describe ancient Hindu cosmology, in which the universe goes through repeated cycles of creation, destruction and rebirth, with each cycle lasting 4,320 million years., Introduction, p. 7
Ancient Greek philosophers, including Parmenides and Heraclitus, wrote essays on the nature of time.Dagobert Runes, Dictionary of Philosophy, p. 318
Plato, in the Timaeus, identified time with the period of motion of the heavenly bodies. Aristotle, in Book IV of his Physica defined time as 'number of movement in respect of the before and after'."Time then is a kind of number. (Number, we must note, is used in two senses-both of what is counted or the countable and also of that with which we count. Time obviously is what is counted, not that with which we count: there are different kinds of thing.) [...] It is clear, then, that time is 'number of movement in respect of the before and after', and is continuous since it is an attribute of what is continuous. "
In Book 11 of his Confessions, St. Augustine of Hippo ruminates on the nature of time, asking, "What then is time? If no one asks me, I know: if I wish to explain it to one that asketh, I know not." He begins to define time by what it is not rather than what it is, Book 11, Chapter 14.
an approach similar to that taken in other negative definitions. However, Augustine ends up calling time a “distention” of the mind (Confessions 11.26) by which we simultaneously grasp the past in memory, the present by attention, and the future by expectation.
In contrast to ancient Greek philosophers who believed that the universe had an infinite past with no beginning, medieval philosophers and theologians developed the concept of the universe having a finite past with a beginning.
This view is shared by Abrahamic faiths as they believe time started by creation, therefore the only thing being infinite is God and everything else, including time, is finite.
Isaac Newton believed in absolute space and absolute time; Leibniz believed that time and space are relational.Gottfried Martin, Kant's Metaphysics and Theory of Science
The differences between Leibniz's and Newton's interpretations came to a head in the famous Leibniz–Clarke correspondence.
Immanuel Kant, in the Critique of Pure Reason, described time as an a priori intuition that allows us (together with the other a priori intuition, space) to comprehend sense experience. translated by J. M. D. Meiklejohn, eBooks@Adelaide, 2004
With Kant, neither space nor time are conceived as substances, but rather both are elements of a systematic mental framework that necessarily structures the experiences of any rational agent, or observing subject. Kant thought of time as a fundamental part of an abstract conceptual framework, together with space and number, within which we sequence events, quantify their duration, and compare the motions of objects. In this view, time does not refer to any kind of entity that "flows," that objects "move through," or that is a "container" for events. Spatial measurements are used to quantify the extent of and distances between objects, and temporal measurements are used to quantify the durations of and between events. Time was designated by Kant as the purest possible schema of a pure concept or category.
Henri Bergson believed that time was neither a real homogeneous medium nor a mental construct, but possesses what he referred to as Duration. Duration, in Bergson's view, was creativity and memory as an essential component of reality.Bergson, Henri (1907) Creative Evolution. trans. by Arthur Mitchell. Mineola: Dover, 1998.
According to Martin Heidegger we do not exist inside time, we are time. Hence, the relationship to the past is a present awareness of having been, which allows the past to exist in the present. The relationship to the future is the state of anticipating a potential possibility, task, or engagement. It is related to the human propensity for caring and being concerned, which causes "being ahead of oneself" when thinking of a pending occurrence. Therefore, this concern for a potential occurrence also allows the future to exist in the present. The present becomes an experience, which is qualitative instead of quantitative. Heidegger seems to think this is the way that a linear relationship with time, or temporal existence, is broken or transcended.
We are not stuck in sequential time. We are able to remember the past and project into the future—we have a kind of random access to our representation of temporal existence; we can, in our thoughts, step out of (ecstasis) sequential time.
Time as "unreal"
In 5th century BC Greece, Antiphon the Sophist, in a fragment preserved from his chief work On Truth, held that: "Time is not a reality (hypostasis), but a concept (noêma) or a measure (metron)."
Parmenides went further, maintaining that time, motion, and change were illusions, leading to the paradoxes of his follower Zeno.
Time as an illusion is also a common theme in Buddhist thought.
J. M. E. McTaggart's 1908 The Unreality of Time argues that, since every event has the characteristic of being both present and not present (i.e., future or past), time is a self-contradictory idea (see also The flow of time).
These arguments often center around what it means for something to be unreal. Modern physicists generally believe that time is as real as space—though others, such as Julian Barbour in his book The End of Time, argue that quantum equations of the universe take their true form when expressed in the timeless realm containing every possible now or momentary configuration of the universe, called 'platonia' by Barbour.
A modern philosophical theory called presentism views the past and the future as human-mind interpretations of movement instead of real parts of time (or "dimensions") which coexist with the present. This theory rejects the existence of all direct interaction with the past or the future, holding only the present as tangible. This is one of the philosophical arguments against time travel. This contrasts with eternalism (all time: present, past and future, is real) and the growing block theory (the present and the past are real, but the future is not).
Physical definition
Until Einstein's reinterpretation of the physical concepts associated with time and space, time was considered to be the same everywhere in the universe, with all observers measuring the same time interval for any event.Herman M. Schwartz, Introduction to Special Relativity, McGraw-Hill Book Company, 1968, hardcover 442 pages, see ISBN 0-88275-478-5 (1977 edition), pp. 10–13
Non-relativistic classical mechanics is based on this Newtonian idea of time.
Einstein, in his special theory of relativity,A. Einstein, H. A. Lorentz, H. Weyl, H. Minkowski, The Principle of Relativity, Dover Publications, Inc, 2000, softcover 216 pages, ISBN 0-486-60081-5, See pp. 37–65 for an English translation of Einstein's original 1905 paper.
postulated the constancy and finiteness of the speed of light for all observers. He showed that this postulate, together with a reasonable definition for what it means for two events to be simultaneous, requires that distances appear compressed and time intervals appear lengthened for events associated with objects in motion relative to an inertial observer.
The theory of special relativity finds a convenient formulation in Minkowski spacetime, a mathematical structure that combines three dimensions of space with a single dimension of time. In this formalism, distances in space can be measured by how long light takes to travel that distance, e.g., a light-year is a measure of distance, and a meter is now defined in terms of how far light travels in a certain amount of time. Two events in Minkowski spacetime are separated by an invariant interval, which can be either space-like, light-like, or time-like. Events that have a time-like separation cannot be simultaneous in any frame of reference: there must be a temporal component (and possibly a spatial one) to their separation. Events that have a space-like separation will be simultaneous in some frame of reference, and there is no frame of reference in which they do not have a spatial separation. Different observers may calculate different distances and different time intervals between two events, but the invariant interval between the events is independent of the observer and of the observer's velocity.
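In one common sign convention (the choice of signature varies between textbooks), the invariant interval between two events separated by Δt in time and by Δx, Δy, Δz in space is
<math>s^2 = c^2\,\Delta t^2 - \Delta x^2 - \Delta y^2 - \Delta z^2,</math>
and the separation is called time-like when <math>s^2 > 0</math>, light-like when <math>s^2 = 0</math>, and space-like when <math>s^2 < 0</math>; all inertial observers obtain the same value of <math>s^2</math> even though they disagree about <math>\Delta t</math> and the spatial separations individually.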
Classical mechanics
In non-relativistic classical mechanics, Newton's concept of "relative, apparent, and common time" can be used in the formulation of a prescription for the synchronization of clocks. Events seen by two different observers in motion relative to each other produce a mathematical concept of time that works sufficiently well for describing the everyday phenomena of most people's experience. In the late nineteenth century, physicists encountered problems with the classical understanding of time, in connection with the behavior of electricity and magnetism. Einstein resolved these problems by invoking a method of synchronizing clocks using the constant, finite speed of light as the maximum signal velocity. This led directly to the result that observers in motion relative to one another measure different elapsed times for the same event.
250px|right|thumb|Two-dimensional space depicted in three-dimensional spacetime. The past and future light cones are absolute, the "present" is a relative concept different for observers in relative motion.
Spacetime
Time has historically been closely related with space, the two together merging into spacetime in Einstein's special relativity and general relativity. According to these theories, the concept of time depends on the spatial reference frame of the observer, and the human perception as well as the measurement by instruments such as clocks are different for observers in relative motion. For example, if a spaceship carrying a clock flies through space at (very nearly) the speed of light, its crew does not notice a change in the speed of time on board their vessel because everything traveling at the same speed slows down at the same rate (including the clock, the crew's thought processes, and the functions of their bodies). However, to a stationary observer watching the spaceship fly by, the spaceship appears flattened in the direction it is traveling and the clock on board the spaceship appears to move very slowly.
On the other hand, the crew on board the spaceship also perceives the observer as slowed down and flattened along the spaceship's direction of travel, because both are moving at very nearly the speed of light relative to each other. Because the outside universe appears flattened to the spaceship, the crew perceives themselves as quickly traveling between regions of space that (to the stationary observer) are many light years apart. This is reconciled by the fact that the crew's perception of time is different from the stationary observer's; what seems like seconds to the crew might be hundreds of years to the stationary observer. In either case, however, causality remains unchanged: the past is the set of events that can send light signals to an entity and the future is the set of events to which an entity can send light signals.
Time dilation
thumb|Relativity of simultaneity: Event B is simultaneous with A in the green reference frame, but it occurred before in the blue frame, and occurs later in the red frame.
Einstein showed in his thought experiments that people travelling at different speeds, while agreeing on cause and effect, measure different time separations between events, and can even observe different chronological orderings between non-causally related events. Though these effects are typically minute in the human experience, the effect becomes much more pronounced for objects moving at speeds approaching the speed of light. Many subatomic particles exist for only a fixed fraction of a second in a lab relatively at rest, but some that travel close to the speed of light can be measured to travel farther and survive much longer than expected (a muon is one example). According to the special theory of relativity, in the high-speed particle's frame of reference, it exists, on the average, for a standard amount of time known as its mean lifetime, and the distance it travels in that time is zero, because its velocity is zero. Relative to a frame of reference at rest, time seems to "slow down" for the particle. Relative to the high-speed particle, distances seem to shorten. Einstein showed how both temporal and spatial dimensions can be altered (or "warped") by high-speed motion.
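The size of the effect is governed by the Lorentz factor; the muon figures below are rounded, illustrative values rather than numbers taken from the text above:
<math>\Delta t = \gamma\,\Delta\tau, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},</math>
so a muon with a proper mean lifetime of about 2.2 microseconds travelling at roughly 0.995c (γ ≈ 10) is observed in the laboratory frame to survive for roughly 22 microseconds, allowing it to travel several kilometres through the atmosphere before decaying.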
Einstein (The Meaning of Relativity): "Two events taking place at the points A and B of a system K are simultaneous if they appear at the same instant when observed from the middle point, M, of the interval AB. Time is then defined as the ensemble of the indications of similar clocks, at rest relatively to K, which register the same simultaneously."
Einstein wrote in his book, Relativity, that simultaneity is also relative, i.e., two events that appear simultaneous to an observer in a particular inertial reference frame need not be judged as simultaneous by a second observer in a different inertial frame of reference.
Relativistic time versus Newtonian time
right|framed|Views of spacetime along the world line of a rapidly accelerating observer in a relativistic universe. The events ("dots") that pass the two diagonal lines in the bottom half of the image (the past light cone of the observer in the origin) are the events visible to the observer.
The animations visualise the different treatments of time in the Newtonian and the relativistic descriptions. At the heart of these differences are the Galilean and Lorentz transformations applicable in the Newtonian and relativistic theories, respectively.
In the figures, the vertical direction indicates time. The horizontal direction indicates distance (only one spatial dimension is taken into account), and the thick dashed curve is the spacetime trajectory ("world line") of the observer. The small dots indicate specific (past and future) events in spacetime.
The slope of the world line (deviation from being vertical) gives the relative velocity to the observer. Note how in both pictures the view of spacetime changes when the observer accelerates.
In the Newtonian description these changes are such that time is absolute: the movements of the observer do not influence whether an event occurs in the 'now' (i.e., whether an event passes the horizontal line through the observer).
However, in the relativistic description the observability of events is absolute: the movements of the observer do not influence whether an event passes the "light cone" of the observer. Notice that with the change from a Newtonian to a relativistic description, the concept of absolute time is no longer applicable: events move up-and-down in the figure depending on the acceleration of the observer.
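The Galilean and Lorentz transformations behind these two pictures can be written explicitly. In their standard one-dimensional form, with the relative velocity v along the x-axis (a textbook statement, not specific to any source cited here):

x' = x - vt, \qquad t' = t \qquad \text{(Galilean)}

x' = \gamma\,(x - vt), \qquad t' = \gamma\left(t - \frac{vx}{c^{2}}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} \qquad \text{(Lorentz)}

In the Galilean case t' = t for every observer, which is the formal statement of absolute time; in the Lorentz case the time coordinate mixes with the space coordinate, which is why the 'now' of an event depends on the observer's motion.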
Arrow of time
Time appears to have a direction: the past lies behind, fixed and immutable, while the future lies ahead and is not necessarily fixed. Yet for the most part the laws of physics do not specify an arrow of time, and allow any process to proceed both forward and in reverse. This is generally a consequence of time being modeled by a parameter in the system being analyzed, where there is no "proper time": the direction of the arrow of time is sometimes arbitrary. Examples of arrows of time include the second law of thermodynamics, which states that entropy must increase over time (see Entropy); the cosmological arrow of time, which points away from the Big Bang; CPT symmetry; and the radiative arrow of time, caused by light only traveling forwards in time (see light cone). In particle physics, the violation of CP symmetry implies that there should be a small counterbalancing time asymmetry to preserve CPT symmetry as stated above. The standard description of measurement in quantum mechanics is also time asymmetric (see Measurement in quantum mechanics).
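The thermodynamic arrow just mentioned can be stated compactly. For an isolated system, the second law of thermodynamics requires

\Delta S \geq 0,

so the total entropy S does not decrease as time advances; this inequality singles out a direction of time even though the underlying microscopic laws are very nearly time-symmetric.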
Quantized time
Time quantization is a hypothetical concept. In the modern established physical theories (the Standard Model of Particles and Interactions and General Relativity) time is not quantized.
Planck time (~5.4 × 10⁻⁴⁴ seconds) is the unit of time in the system of natural units known as Planck units. Current established physical theories are believed to fail at this time scale, and many physicists expect that the Planck time might be the smallest unit of time that could ever be measured, even in principle. Tentative physical theories that describe this time scale exist; see for instance loop quantum gravity.
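The value quoted above follows from combining fundamental constants; this is the standard definition of the Planck time rather than a result of any particular theory of quantum gravity:

t_{P} = \sqrt{\frac{\hbar G}{c^{5}}} \approx 5.39 \times 10^{-44}\ \text{s},

where ħ is the reduced Planck constant, G is the gravitational constant, and c is the speed of light. It is the timescale at which quantum effects and gravitational effects are expected to become comparably important.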
Time and the Big Bang theory
Stephen Hawking in particular has addressed a connection between time and the Big Bang. In A Brief History of Time and elsewhere, Hawking says that even if time did not begin with the Big Bang and there were another time frame before the Big Bang, no information from events then would be accessible to us, and nothing that happened then would have any effect upon the present time-frame.
Upon occasion, Hawking has stated that time actually began with the Big Bang, and that questions about what happened before the Big Bang are meaningless.
This less nuanced but commonly repeated formulation has drawn criticism from philosophers such as the Aristotelian Mortimer J. Adler.
Scientists have come to some agreement on descriptions of events that happened 10⁻³⁵ seconds after the Big Bang, but generally agree that descriptions of what happened before one Planck time (about 5 × 10⁻⁴⁴ seconds) after the Big Bang are likely to remain pure speculation.
Speculative physics beyond the Big Bang
right|300px|thumb|A graphical representation of the expansion of the universe with the inflationary epoch represented as the dramatic expansion of the metric seen on the left
While the Big Bang model is well established in cosmology, it is likely to be refined in the future. Little is known about the earliest moments of the universe's history. The Penrose–Hawking singularity theorems require the existence of a singularity at the beginning of cosmic time. However, these theorems assume that general relativity is correct, whereas general relativity must break down before the universe reaches the Planck temperature, and a correct treatment of quantum gravity may avoid the singularity.
If inflation has indeed occurred, it is likely that there are parts of the universe so distant that they cannot be observed in principle, as exponential expansion would push large regions of space beyond our observable horizon.
Some proposals, each of which entails untested hypotheses, are:
Models including the Hartle–Hawking boundary condition in which the whole of space-time is finite; the Big Bang does represent the limit of time, but without the need for a singularity.
Brane cosmology models in which inflation is due to the movement of branes in string theory; the pre-big bang model; the ekpyrotic model, in which the Big Bang is the result of a collision between branes; and the cyclic model, a variant of the ekpyrotic model in which collisions occur periodically.
Chaotic inflation, in which inflation events start here and there in a random quantum-gravity foam, each leading to a bubble universe expanding from its own big bang.
Proposals in the last two categories see the Big Bang as an event in a much larger and older universe, or multiverse, and not the literal beginning.
Time travel
Time travel is the concept of moving backwards or forwards to different points in time, in a manner analogous to moving through space, and different from the normal "flow" of time to an earthbound observer. In this view, all points in time (including future times) "persist" in some way. Time travel has been a plot device in fiction since the 19th century. Traveling backwards in time has never been verified, presents many theoretical problems, and may be an impossibility.
Any technological device, whether fictional or hypothetical, that is used to achieve time travel is known as a time machine.
A central problem with time travel to the past is the violation of causality; should an effect precede its cause, it would give rise to the possibility of a temporal paradox. Some interpretations of time travel resolve this by accepting the possibility of travel between branch points, parallel realities, or universes.
Another solution to the problem of causality-based temporal paradoxes is that such paradoxes cannot arise simply because they have not arisen. As illustrated in numerous works of fiction, free will either ceases to exist in the past or the outcomes of such decisions are predetermined. As such, it would not be possible to enact the grandfather paradox, because it is a historical fact that your grandfather was not killed before his child (your parent) was conceived. This view does not simply hold that history is an unchangeable constant, but that any change made by a hypothetical future time traveler would already have happened in his or her past, resulting in the reality that the traveler moves from. More elaboration on this view can be found in the Novikov self-consistency principle.
Time perception
thumb|upright|Philosopher and psychologist William James
The specious present refers to the time duration wherein one's perceptions are considered to be in the present. The experienced present is said to be ‘specious’ in that, unlike the objective present, it is an interval and not a durationless instant. The term specious present was first introduced by the psychologist E.R. Clay, and later developed by William James.
Biopsychology
The brain's judgment of time is known to be a highly distributed system, including at least the cerebral cortex, cerebellum and basal ganglia as its components. One particular component, the suprachiasmatic nuclei, is responsible for the circadian (or daily) rhythm, while other cell clusters appear capable of shorter-range (ultradian) timekeeping.
Psychoactive drugs can impair the judgment of time. Stimulants can lead both humans and rats to overestimate time intervals, while depressants can have the opposite effect. The level of activity of neurotransmitters such as dopamine and norepinephrine in the brain may be the reason for this. Such chemicals will either excite or inhibit the firing of neurons in the brain, with a greater firing rate allowing the brain to register the occurrence of more events within a given interval (speeding up time) and a decreased firing rate reducing the brain's capacity to distinguish events occurring within a given interval (slowing down time).
Mental chronometry is the use of response time in perceptual-motor tasks to infer the content, duration, and temporal sequencing of cognitive operations.
Development of awareness and understanding of time in children
Children's expanding cognitive abilities allow them to understand time more clearly. Two- and three-year-olds' understanding of time is mainly limited to "now and not now." Five- and six-year-olds can grasp the ideas of past, present, and future. Seven- to ten-year-olds can use clocks and calendars.
Alterations
In addition to psychoactive drugs, judgments of time can be altered by temporal illusions (like the kappa effect; see Wada Y, Masuda T, Noguchi K, 2005, "Temporal illusion called 'kappa effect' in event perception", Perception 34, ECVP Abstract Supplement), age, and hypnosis. The sense of time is impaired in some people with neurological diseases such as Parkinson's disease and attention deficit disorder.
Psychologists assert that time seems to go faster with age, but the literature on this age-related perception of time remains controversial.
Those who support this notion argue that young people, having more excitatory neurotransmitters, are able to cope with faster external events.
Use of time
In sociology and anthropology, time discipline is the general name given to social and economic rules, conventions, customs, and expectations governing the measurement of time, the social currency and awareness of time measurements, and people's expectations concerning the observance of these customs by others. Arlie Russell Hochschild (ISBN 9780805044713) and Norbert Elias have written on the use of time from a sociological perspective.
The use of time is an important issue in understanding human behavior, education, and travel behavior. Time-use research is a developing field of study. The question concerns how time is allocated across a number of activities (such as time spent at home, at work, shopping, etc.). Time use changes with technology, as technologies such as television and the Internet have created new opportunities to use time in different ways. However, some aspects of time use are relatively stable over long periods of time, such as the amount of time spent traveling to work, which, despite major changes in transport, has been observed to be about 20–30 minutes one-way for a large number of cities over a long period.
Time management is the organization of tasks or events by first estimating how much time a task requires and when it must be completed, and adjusting events that would interfere with its completion so it is done in the appropriate amount of time. Calendars and day planners are common examples of time management tools.
A sequence of events, or series of events, is a sequence of items, facts, events, actions, changes, or procedural steps, arranged in time order (chronological order), often with causality relationships among the items.
Because of causality, cause precedes effect, or cause and effect may appear together in a single item, but effect never precedes cause. A sequence of events can be presented in text, tables, charts, or timelines. The description of the items or events may include a timestamp. A sequence of events that includes the time along with place or location information to describe a sequential path may be referred to as a world line.
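As a concrete sketch of the ordering constraint just described, the short Python example below (with hypothetical, invented event records used purely for illustration) sorts timestamped events into chronological order and checks that no effect precedes its stated cause:

from datetime import datetime

# Hypothetical event records for illustration; each carries a timestamp and,
# optionally, the name of the event that caused it.
events = [
    {"name": "C", "time": datetime(2020, 1, 1, 12, 2), "cause": "B"},
    {"name": "A", "time": datetime(2020, 1, 1, 12, 0), "cause": None},
    {"name": "B", "time": datetime(2020, 1, 1, 12, 1), "cause": "A"},
]

# Chronological (time) order: sort the records by timestamp.
for event in sorted(events, key=lambda e: e["time"]):
    print(event["time"].isoformat(), event["name"])

# Causality constraint: an effect must never precede its cause.
by_name = {e["name"]: e for e in events}
for event in events:
    if event["cause"] is not None:
        assert by_name[event["cause"]]["time"] <= event["time"]

The same idea underlies timelines, chronologies, and sequence-of-events recorders: records are ordered by timestamp, and the ordering must respect causality.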
Uses of a sequence of events include stories, historical events (chronology), directions and steps in procedures, and timetables for scheduling activities. A sequence of events may also be used to help describe processes in science, technology, and medicine. A sequence of events may be focused on past events (e.g., stories, history, chronology), on future events that must be in a predetermined order (e.g., plans, schedules, procedures, timetables), or focused on the observation of past events with the expectation that the events will occur in the future (e.g., processes). The use of a sequence of events occurs in fields as diverse as machines (cam timer), documentaries (Seconds From Disaster), law (choice of law), computer simulation (discrete event simulation), and electric power transmission (sequence of events recorder). A specific example of a sequence of events is the timeline of the Fukushima Daiichi nuclear disaster.
Spatial conceptualization of time
Although time is regarded as an abstract concept, there is increasing evidence that time is conceptualized in the mind in terms of space. That is, instead of thinking about time in a general, abstract way, humans think about time in a spatial way and mentally organize it as such. Using space to think about time allows humans to mentally organize temporal events in a specific way.
This spatial representation of time is often described as a Mental Time Line (MTL). The form of the MTL is shaped by many environmental factors; for example, literacy appears to play a large role in the different types of MTLs, as reading/writing direction provides an everyday temporal orientation that differs from culture to culture. In Western cultures, the MTL may unfold rightward (with the past on the left and the future on the right), since people read and write from left to right. Western calendars also continue this trend by placing the past on the left with the future progressing toward the right. Conversely, Israeli-Hebrew speakers read from right to left, their MTLs unfold leftward (past on the right, future on the left), and evidence suggests these speakers organize time events in their minds like this as well.
This linguistic evidence that abstract concepts are based in spatial concepts also reveals that the way humans mentally organize time events varies across cultures; that is, a specific mental organization system is not universal. So, although Western cultures typically associate past events with the left and future events with the right according to a certain MTL, this kind of horizontal, egocentric MTL is not the spatial organization of all cultures. Although most developed nations use an egocentric spatial system, there is recent evidence that some cultures use an allocentric spatialization, often based on environmental features.
A recent study of the indigenous Yupno people of Papua New Guinea focused on the directional gestures used when individuals used time-related words. When speaking of the past (such as “last year” or “past times”), individuals gestured downhill, where the river of the valley flowed into the ocean. When speaking of the future, they gestured uphill, toward the source of the river. This was common regardless of which direction the person faced, revealing that the Yupno people may use an allocentric MTL, in which time flows uphill.
A similar study of the Pormpuraawans, an aboriginal group in Australia, revealed a similar pattern: when asked to organize photos of a man aging "in order", individuals consistently placed the youngest photos to the east and the oldest photos to the west, regardless of which direction they faced. This directly clashed with an American group, which consistently organized the photos from left to right. The Pormpuraawans therefore also appear to have an allocentric MTL, but one based on the cardinal directions instead of geographical features.
The wide array of distinctions in the way different groups think about time raises the broader possibility that different groups may also think about other abstract concepts, such as causality and number, in different ways.
See also
thumb|Time's mortal aspect is personified in this bronze statue by Charles van der Stappen
Era
Horology
International System of Quantities
Kairos
List of UTC timing centers
UTC
Term (time)
Books
A Brief History of Time by Stephen Hawking
About Time: Einstein's Unfinished Revolution by Paul Davies
From Eternity to Here: The Quest for the Ultimate Theory of Time by Sean M. Carroll
The Discovery of Time by Stephen Toulmin and June Goodfield
A Natural History of Time by Pascal Richet
The Physical Basis of The Direction of Time by Heinz-Dieter Zeh
An Experiment with Time by John William Dunne
Einstein's Dreams by Alan Lightman
Being and Time by Martin Heidegger
Time Reborn by Lee Smolin
Organizations
Leading scholarly organizations for researchers on the history and technology of time and timekeeping
Antiquarian Horological Society—AHS (United Kingdom)
Chronometrophilia (Switzerland)
Deutsche Gesellschaft für Chronometrie—DGC (Germany)
National Association of Watch and Clock Collectors—NAWCC (United States)
Miscellaneous arts and sciences
Anachronism
Date and time representation by country
List of cycles
Network Time Protocol (NTP)
Nonlinear narrative
Philosophy of physics
Rate (mathematics)
Miscellaneous units of time
Fiscal year
Half-life
Hexadecimal time
Season
Tithi
Unit of time
Unix epoch
References
Further reading
Stiegler, Bernard, Technics and Time, 1: The Fault of Epimetheus
Charlie Gere, (2005) Art, Time and Technology: Histories of the Disappearing Body, Berg
Craig Callendar, Introducing Time, Icon Books, 2010, ISBN 978-1848311206
Benjamin Gal-Or, Cosmology, Physics and Philosophy, Springer Verlag, 1981, 1983, 1987, ISBN 0-387-90581-2, ISBN 0-387-96526-2.
Roberto Mangabeira Unger and Lee Smolin, The Singular Universe and the Reality of Time, Cambridge University Press, 2014, ISBN 978-1-107-07406-4.
External links
Accurate time vs. PC Clock Difference
Exploring Time from Planck Time to the lifespan of the universe
Different systems of measuring time
Time in the Internet Encyclopedia of Philosophy, by Bradley Dowden.
Time at Open Directory
Category:Concepts in metaphysics
Category:Concepts in physics
Category:Physical quantities
Category:SI base quantities
Category:Spacetime
Arnold Schwarzenegger
Arnold Alois Schwarzenegger (born July 30, 1947) is an Austrian-American actor, producer, businessman, investor, author, philanthropist, activist, and former professional bodybuilder and politician. He served two terms as the 38th Governor of California from 2003 until 2011.
Schwarzenegger began weight training at the age of 15. He won the Mr. Universe title at age 20 and went on to win the Mr. Olympia contest seven times. Schwarzenegger has remained a prominent presence in bodybuilding and has written many books and articles on the sport. He is widely considered to be among the greatest bodybuilders of all time as well as bodybuilding's biggest icon. Schwarzenegger gained worldwide fame as a Hollywood action film icon. His breakthrough film was the sword-and-sorcery epic Conan the Barbarian in 1982, which was a box-office hit and resulted in a sequel.
In 1984, Schwarzenegger appeared in James Cameron's science-fiction thriller film The Terminator, which was a massive critical and box-office success. Schwarzenegger subsequently reprised the Terminator character in the franchise's later installments in 1991, 2003, and 2015. He appeared in a number of successful films, such as Commando (1985), The Running Man (1987), Predator (1987), Twins (1988), Total Recall (1990), Kindergarten Cop (1990) and True Lies (1994). In 2015, it was announced Schwarzenegger would replace Donald Trump as the host of The Celebrity Apprentice. He was nicknamed the "Austrian Oak" in his bodybuilding days, "Arnie" during his acting career, and "The Governator" (a portmanteau of "Governor" and "The Terminator") during his political career.
As a Republican, he was first elected on October 7, 2003, in a special recall election to replace then-Governor Gray Davis. Schwarzenegger was sworn in on November 17, to serve the remainder of Davis's term. Schwarzenegger was then re-elected on November 7, 2006, in the 2006 California gubernatorial election, to serve a full term as governor, defeating Democrat Phil Angelides, who was California State Treasurer at the time. Schwarzenegger was sworn in for his second term on January 5, 2007. In 2011, Schwarzenegger completed his second term as governor.
Early life
thumb|right|260px|Thalersee, a lake in Schwarzenegger's birthplace of Thal, pictured in October 2002
Schwarzenegger was born in Thal, Styria, and christened Arnold Alois. His parents were Gustav Schwarzenegger (August 17, 1907 – December 13, 1972) and Aurelia Schwarzenegger (née Jadrny; July 29, 1922 – August 2, 1998). Gustav was the local chief of police, and had served in World War II as a Hauptfeldwebel after voluntarily joining the Nazi Party in 1938, though he was discharged in 1943 following a bout of malaria. He married Aurelia on October 20, 1945; he was 38, and she was 23. According to Schwarzenegger, both of his parents were very strict: "Back then in Austria it was a very different world ... if we did something bad or we disobeyed our parents, the rod was not spared." Schwarzenegger grew up in a Roman Catholic family who attended Mass every Sunday.
Gustav had a preference for his elder son, Meinhard (July 17, 1946 – May 20, 1971), over Arnold. His favoritism was "strong and blatant", which stemmed from unfounded suspicion that Arnold was not his biological child. Schwarzenegger has said his father had "no patience for listening or understanding your problems". He had a good relationship with his mother and kept in touch with her until her death. In later life, Schwarzenegger commissioned the Simon Wiesenthal Center to research his father's wartime record, which came up with no evidence of Gustav's being involved in atrocities, despite his membership in the Nazi Party and SA. Gustav's background received wide press attention during the 2003 California recall campaign. At school, Schwarzenegger was reportedly academically average, but stood out for his "cheerful, good-humored, and exuberant" character. Money was a problem in their household; Schwarzenegger recalled that one of the highlights of his youth was when the family bought a refrigerator.
As a boy, Schwarzenegger played several sports, heavily influenced by his father. He picked up his first barbell in 1960, when his soccer coach took his team to a local gym. At the age of 14, he chose bodybuilding over soccer as a career. Schwarzenegger has responded to a question asking if he was 13 when he started weightlifting: "I actually started weight training when I was 15, but I'd been participating in sports, like soccer, for years, so I felt that although I was slim, I was well-developed, at least enough so that I could start going to the gym and start Olympic lifting." However, his official website biography claims: "At 14, he started an intensive training program with Dan Farmer, studied psychology at 15 (to learn more about the power of mind over body) and at 17, officially started his competitive career." During a speech in 2001, he said, "My own plan formed when I was 14 years old. My father had wanted me to be a police officer like he was. My mother wanted me to go to trade school."
Schwarzenegger took to visiting a gym in Graz, where he also frequented the local movie theaters to see bodybuilding idols such as Reg Park, Steve Reeves, and Johnny Weissmuller on the big screen. When Reeves died in 2000, Schwarzenegger fondly remembered him: "As a teenager, I grew up with Steve Reeves. His remarkable accomplishments allowed me a sense of what was possible, when others around me didn't always understand my dreams. Steve Reeves has been part of everything I've ever been fortunate enough to achieve." In 1961, Schwarzenegger met former Mr. Austria Kurt Marnul, who invited him to train at the gym in Graz. He was so dedicated as a youngster that he broke into the local gym on weekends, when it was usually closed, so that he could train. "It would make me sick to miss a workout... I knew I couldn't look at myself in the mirror the next morning if I didn't do it." When Schwarzenegger was asked about his first movie experience as a boy, he replied: "I was very young, but I remember my father taking me to the Austrian theaters and seeing some newsreels. The first real movie I saw, that I distinctly remember, was a John Wayne movie."
Schwarzenegger's brother, Meinhard, died in a car crash on May 20, 1971. He was driving drunk and died instantly. Schwarzenegger did not attend his funeral. Meinhard was engaged to Erika Knapp, and they had a three-year-old son named Patrick. Schwarzenegger paid for Patrick's education and helped him to move to the U.S. Gustav died on December 13, 1972, from a stroke. In Pumping Iron, Schwarzenegger claimed that he did not attend his father's funeral because he was training for a bodybuilding contest. Later, he and the film's producer said this story was taken from another bodybuilder to show the extremes that some would go to for their sport and to make Schwarzenegger's image more cold and robotic to create controversy for the film.Interview in Pumping Iron – 25th Anniversary Edition DVD extras Barbara Baker, his first serious girlfriend, recalled that he informed her of his father's death without emotion and that he never spoke of his brother. Over time, he has given at least three versions of why he was absent from his father's funeral.
In an interview with Fortune in 2004, Schwarzenegger told how he suffered what "would now be called child abuse" at the hands of his father: "My hair was pulled. I was hit with belts. So was the kid next door. It was just the way it was. Many of the children I've seen were broken by their parents, which was the German-Austrian mentality. They didn't want to create an individual. It was all about conforming. I was one who did not conform, and whose will could not be broken. Therefore, I became a rebel. Every time I got hit, and every time someone said, 'You can't do this,' I said, 'This is not going to be for much longer, because I'm going to move out of here. I want to be rich. I want to be somebody.'"
Early adulthood
Schwarzenegger served in the Austrian Army in 1965 to fulfill the one year of service required at the time of all 18-year-old Austrian males. During his army service, he won the Junior Mr. Europe contest. He went AWOL during basic training so he could take part in the competition and spent a week in military prison: "Participating in the competition meant so much to me that I didn't carefully think through the consequences." He also competed in a bodybuilding contest in Graz, at the Steirer Hof Hotel, where he placed second. He was voted best built man of Europe, which made him famous. "The Mr. Universe title was my ticket to America – the land of opportunity, where I could become a star and get rich." Schwarzenegger made his first plane trip in 1966, attending the NABBA Mr. Universe competition in London. He would come in second in the Mr. Universe competition, not having the muscle definition of American winner Chester Yorton.
Charles "Wag" Bennett, one of the judges at the 1966 competition, was impressed with Schwarzenegger and he offered to coach him. As Schwarzenegger had little money, Bennett invited him to stay in his crowded family home above one of his two gyms in Forest Gate, London, England. Yorton's leg definition had been judged superior, and Schwarzenegger, under a training program devised by Bennett, concentrated on improving the muscle definition and power in his legs. Staying in the East End of London helped Schwarzenegger improve his rudimentary grasp of the English language.Staff, Arnold Schwarzenegger: Made in Britain, British Film Institute. Retrieved October 3, 2008. "Wag and Dianne Bennett, an East End couple who gave Arnie a home for three years," Also in 1966, Schwarzenegger had the opportunity to meet childhood idol Reg Park, who became his friend and mentor. The training paid off and, in 1967, Schwarzenegger won the title for the first time, becoming the youngest ever Mr. Universe at the age of 20. He would go on to win the title a further three times. Schwarzenegger then flew back to Munich, training for four to six hours daily, attending business school and working in a health club (Rolf Putziger's gym where he worked and trained from 1966–1968), returning in 1968 to London to win his next Mr. Universe title. He frequently told Roger C. Field, his English coach and friend in Munich at that time, "I'm going to become the greatest actor!"
Move to the U.S.
thumb|upright|Schwarzenegger with President Ronald Reagan in 1984
Schwarzenegger, who dreamed of moving to the U.S. since the age of 10, and saw bodybuilding as the avenue through which to do so, realized his dream by moving to the United States in September 1968 at the age of 21, speaking little English. There he trained at Gold's Gym in Venice, Los Angeles, California, under Joe Weider. From 1970 to 1974, one of Schwarzenegger's weight training partners was Ric Drasin, a professional wrestler who designed the original Gold's Gym logo in 1973.Jennings, Randy (October 21, 2003). Ric Drasin: Arnold's lifting partner! The Arnold Fans Website. Retrieved December 16, 2009. Schwarzenegger also became good friends with professional wrestler Superstar Billy Graham. In 1970, at age 23, he captured his first Mr. Olympia title in New York, and would go on to win the title a total of seven times.
The immigration law firm Siskind & Susser has stated that Schwarzenegger may have been an illegal immigrant at some point in the late 1960s or early 1970s because of violations in the terms of his visa. LA Weekly would later say in 2002 that Schwarzenegger is the most famous immigrant in America, who "overcame a thick Austrian accent and transcended the unlikely background of bodybuilding to become the biggest movie star in the world in the 1990s".
In 1977, Schwarzenegger's autobiography/weight-training guide Arnold: The Education of a Bodybuilder was published and became a huge success. The same year, he posed nude for the gay magazine After Dark. After taking English classes at Santa Monica College in California, he earned a BA by correspondence from the University of Wisconsin–Superior in international marketing of fitness and business administration in 1979. He received American citizenship in 1983.
He has said that during this time he ran into a friend who told him that he was teaching Transcendental Meditation (TM), which prompted Schwarzenegger to reveal that he had been struggling with anxiety for the first time in his life: "Even today, I still benefit from [the year of TM] because I don't merge and bring things together and see everything as one big problem."
Bodybuilding career
Schwarzenegger is considered among the most important figures in the history of bodybuilding, and his legacy is commemorated in the Arnold Classic annual bodybuilding competition. He has remained a prominent face in bodybuilding long after his retirement, in part because of his ownership of gyms and fitness magazines. He has presided over numerous contests and awards shows.
For many years, he wrote a monthly column for the bodybuilding magazines Muscle & Fitness and Flex. Shortly after being elected Governor, he was appointed executive editor of both magazines, in a largely symbolic capacity. The magazines agreed to donate $250,000 a year to the Governor's various physical fitness initiatives. When the deal, including the contract that gave Schwarzenegger at least $1 million a year, was made public in 2005, many criticized it as being a conflict of interest since the governor's office made decisions concerning regulation of dietary supplements in California.Megerian, Chris. (March 1, 2013). Schwarzenegger to be executive editor of magazines. Los Angeles Times. Consequently, Schwarzenegger relinquished the executive editor role in 2005. American Media Inc., which owns Muscle & Fitness and Flex, announced in March 2013 that Schwarzenegger had accepted their renewed offer to be executive editor of the magazines.
The magazine MuscleMag International has a monthly two-page article on him, and refers to him as "The King".
One of the first competitions he won was the Junior Mr. Europe contest in 1965. He won Mr. Europe the following year, at age 19. He would go on to compete in, and win, many bodybuilding contests. His bodybuilding victories included five Mr. Universe (4 – NABBA [England], 1 – IFBB [USA]) wins, and seven Mr. Olympia wins, a record which would stand until Lee Haney won his eighth consecutive Mr. Olympia title in 1991.
Schwarzenegger continues to work out. When asked about his personal training during the 2011 Arnold Classic, he said that he still worked out with weights for half an hour every day.
Competition weight:
Off-season weight:
Powerlifting/weightlifting
During Schwarzenegger's early years in bodybuilding, he also competed in several Olympic weightlifting and powerlifting contests. Schwarzenegger won two weightlifting contests in 1964 and 1965, as well as two powerlifting contests in 1966 and 1968.
In 1967, Schwarzenegger won the Munich stone-lifting contest, in which a stone weighing 508 German pounds (254 kg/560 lbs.) is lifted between the legs while standing on two foot rests.
Personal records
Clean and press –
Snatch –
Clean and jerk –
Squat –
Bench press –
Deadlift –
thumb|Schwarzenegger, pictured with 1987 world champion American Karyn Marshall, presenting awards at the USA Weightlifting Hall of Fame in 2011 in Columbus, Ohio
Mr. Olympia
Schwarzenegger's goal was to become the greatest bodybuilder in the world, which meant becoming Mr. Olympia. His first attempt was in 1969, when he lost to three-time champion Sergio Oliva. However, Schwarzenegger came back in 1970 and won the competition, making him the youngest ever Mr. Olympia at the age of 23, a record he still holds to this day.
He continued his winning streak in the 1971–74 competitions. In 1975, Schwarzenegger was once again in top form, and won the title for the sixth consecutive time, beating Franco Columbu. After the 1975 Mr. Olympia contest, Schwarzenegger announced his retirement from professional bodybuilding.
Months before the 1975 Mr. Olympia contest, filmmakers George Butler and Robert Fiore persuaded Schwarzenegger to compete, in order to film his training in the bodybuilding documentary called Pumping Iron. Schwarzenegger had only three months to prepare for the competition, after losing significant weight to appear in the film Stay Hungry with Jeff Bridges. Lou Ferrigno proved not to be a threat, and a lighter-than-usual Schwarzenegger convincingly won the 1975 Mr. Olympia.
Schwarzenegger came out of retirement, however, to compete in the 1980 Mr. Olympia. Schwarzenegger was training for his role in Conan, and he got into such good shape from the running, horseback riding and sword training that he decided he wanted to win the Mr. Olympia contest one last time. He kept this plan a secret, in case a training accident prevented his entry and caused him to lose face. Schwarzenegger had been hired to provide color commentary for network television when he announced at the eleventh hour that, while he was there, "Why not compete?" Schwarzenegger ended up winning the event with only seven weeks of preparation. After being declared Mr. Olympia for a seventh time, Schwarzenegger then officially retired from competition.
Steroid use
Schwarzenegger has admitted to using performance-enhancing anabolic steroids while they were legal, writing in 1977 that "steroids were helpful to me in maintaining muscle size while on a strict diet in preparation for a contest. I did not use them for muscle growth, but rather for muscle maintenance when cutting up." He has called the drugs "tissue building".
In 1999, Schwarzenegger sued Dr. Willi Heepe, a German doctor who publicly predicted his early death on the basis of a link between his steroid use and his later heart problems. As the doctor had never examined him personally, Schwarzenegger collected a US$10,000 libel judgment against him in a German court. In 1999, Schwarzenegger also sued and settled with the Globe, a U.S. tabloid which had made similar predictions about the bodybuilder's future health.
List of competitions
Year – Competition – Location – Result and notes
1965 – Junior Mr. Europe – Germany – 1st
1966 – Best Built Man of Europe – Germany – 1st
1966 – Mr. Europe – Germany – 1st
1966 – International Powerlifting Championship – Germany – 1st
1966 – NABBA Mr. Universe amateur – London – 2nd to Chet Yorton
1967 – NABBA Mr. Universe amateur – London – 1st
1968 – NABBA Mr. Universe professional – London – 1st
1968 – German Powerlifting Championship – Germany – 1st
1968 – IFBB Mr. International – Mexico – 1st
1968 – IFBB Mr. Universe – Florida – 2nd to Frank Zane
1969 – IFBB Mr. Universe amateur – New York – 1st
1969 – NABBA Mr. Universe professional – London – 1st
1969 – Mr. Olympia – New York – 2nd to Sergio Oliva
1970 – NABBA Mr. Universe professional – London – 1st. Defeated his idol Reg Park
1970 – Mr. World – Columbus, Ohio – 1st. Defeated Sergio Oliva for the first time
1970 – Mr. Olympia – New York – 1st
1971 – Mr. Olympia – Paris – 1st
1972 – Mr. Olympia – Essen, Germany – 1st
1973 – Mr. Olympia – New York – 1st
1974 – Mr. Olympia – New York – 1st
1975 – Mr. Olympia – Pretoria, South Africa – 1st. Subject of the documentary Pumping Iron
1980 – Mr. Olympia – Sydney, Australia – 1st
Competitive stats
Height: 6'2" (188 cm)
Contest weight:
Off-season weight:
Arms:
Chest:
Waist:
Thighs:
Calves:
Acting career
Early roles
Schwarzenegger wanted to move from bodybuilding into acting, finally achieving it when he was chosen to play the role of Hercules in 1970's Hercules in New York. Credited under the stage name "Arnold Strong", his accent in the film was so thick that his lines were dubbed after production. His second film appearance was as a deaf-mute mob hitman in The Long Goodbye (1973), which was followed by a much more significant part in the film Stay Hungry (1976), for which he was awarded a Golden Globe for New Male Star of the Year. Schwarzenegger has discussed his early struggles in developing his acting career: "It was very difficult for me in the beginning – I was told by agents and casting people that my body was 'too weird', that I had a funny accent, and that my name was too long. You name it, and they told me I had to change it. Basically, everywhere I turned, I was told that I had no chance."
Schwarzenegger drew attention and boosted his profile in the bodybuilding film Pumping Iron (1977), elements of which were dramatized; in 1991, he purchased the rights to the film, its outtakes, and associated still photography. In 1977, he made guest appearances in single episodes of the ABC sitcom The San Pedro Beach Bums and the ABC police procedural The Streets of San Francisco. Schwarzenegger auditioned for the title role of The Incredible Hulk, but did not win the role because of his height. Later, Lou Ferrigno got the part of Dr. David Banner's alter ego. Schwarzenegger appeared with Kirk Douglas and Ann-Margret in the 1979 comedy The Villain. In 1980, he starred in a biographical film of the 1950s actress Jayne Mansfield as Mansfield's husband, Mickey Hargitay.
Action superstar
Schwarzenegger's breakthrough film was the sword-and-sorcery epic Conan the Barbarian in 1982, which was a box-office hit. This was followed by a sequel, Conan the Destroyer, in 1984, although it was not as successful as its predecessor. In 1983, Schwarzenegger starred in the promotional video, Carnival in Rio. In 1984, he made his first appearance as the eponymous character, and what some would say was his acting career's signature role, in James Cameron's science fiction thriller film The Terminator. Following this, Schwarzenegger made Red Sonja in 1985.
During the 1980s, audiences had an appetite for action films, and both Schwarzenegger and Sylvester Stallone became international stars. Schwarzenegger's roles reflected his sense of humor, separating him from more serious action hero films; in the comedy thriller Last Action Hero, for example, an alternate-universe poster shows Terminator 2: Judgment Day starring Stallone instead of Schwarzenegger. He made a number of successful films, such as Commando (1985), Raw Deal (1986), The Running Man (1987), Predator (1987), and Red Heat (1988).
left|thumb|Footprints and handprints of Arnold Schwarzenegger in front of the Grauman's Chinese Theatre, with his famous catchphrase "I'll be back" written in.
Twins (1988), a comedy with Danny DeVito, also proved successful. Total Recall (1990) netted Schwarzenegger $10 million and 15% of the film's gross. A science fiction script, the film was based on the Philip K. Dick short story "We Can Remember It for You Wholesale". Kindergarten Cop (1990) reunited him with director Ivan Reitman, who directed him in Twins. Schwarzenegger had a brief foray into directing, first with a 1990 episode of the TV series Tales from the Crypt, entitled "The Switch", and then with the 1992 telemovie Christmas in Connecticut. He has not directed since.
Schwarzenegger's commercial peak was his return as the title character in 1991's Terminator 2: Judgment Day, which was the highest-grossing film of 1991. In 1993, the National Association of Theatre Owners named him the "International Star of the Decade". His next film project, the 1993 self-aware action comedy spoof Last Action Hero, was released opposite Jurassic Park, and did not do well at the box office. His next film, the comedy drama True Lies (1994), was a popular spy film, and saw Schwarzenegger reunited with James Cameron.
That same year, the comedy Junior was released, the last of Schwarzenegger's three collaborations with Ivan Reitman and again co-starring Danny DeVito. This film brought him his second Golden Globe nomination, this time for Best Actor – Musical or Comedy. It was followed by the action thriller Eraser (1996), the Christmas comedy Jingle All The Way (1996), and the comic book-based Batman & Robin (1997), in which he played the villain Mr. Freeze. This was his final film before taking time to recuperate from a back injury. Following the critical failure of Batman & Robin, his film career and box office prominence went into decline. He returned with the supernatural thriller End of Days (1999), later followed by the action films The 6th Day (2000) and Collateral Damage (2002), both of which failed to do well at the box office. In 2003, he made his third appearance as the title character in Terminator 3: Rise of the Machines, which went on to earn over $150 million domestically.
thumb|left|Arnold Schwarzenegger's star on the Hollywood Walk of Fame
In tribute to Schwarzenegger in 2002, Forum Stadtpark, a local cultural association, proposed plans to build a 25-meter (82 ft) tall Terminator statue in a park in central Graz. Schwarzenegger reportedly said he was flattered, but thought the money would be better spent on social projects and the Special Olympics.
Retirement
His film appearances after becoming Governor of California included a three-second cameo in The Rundown and the 2004 remake of Around the World in 80 Days. In 2005, he appeared as himself in the film The Kid & I. He voiced Baron von Steuben in the Liberty's Kids episode "Valley Forge". He had been rumored to be appearing in Terminator Salvation as the original T-800; he denied his involvement, but his likeness ultimately did appear briefly, inserted into the movie from stock footage of the first Terminator film. Schwarzenegger also made a cameo appearance in Sylvester Stallone's The Expendables.
Return to acting
In January 2011, just weeks after leaving office in California, Schwarzenegger announced that he was reading several new scripts for future films, one of them being the World War II action drama With Wings as Eagles, written by Randall Wallace, based on a true story. On March 6, 2011, at the Arnold Seminar of the Arnold Classic, Schwarzenegger revealed that he was being considered for several films, including sequels to The Terminator and remakes of Predator and The Running Man, and that he was "packaging" a comic book character. The character was later revealed to be the Governator, star of the comic book and animated series of the same name. Schwarzenegger inspired the character and co-developed it with Stan Lee, who would have produced the series. Schwarzenegger would have voiced the Governator.
On May 20, 2011, Schwarzenegger's entertainment counsel announced that all movie projects currently in development were being halted: "Schwarzenegger is focusing on personal matters and is not willing to commit to any production schedules or timelines". On July 11, 2011, it was announced that Schwarzenegger was considering a comeback film despite his legal problems. He appeared in The Expendables 2 (2012), and starred in The Last Stand (2013), his first leading role in 10 years, and Escape Plan (2013), his first co-starring role alongside Sylvester Stallone. He starred in Sabotage, released in March 2014, and appeared in The Expendables 3, released in August 2014. He starred in the fifth Terminator movie Terminator Genisys in 2015Arnold Schwarzenegger Confirmed for 'Terminator 5'. Screenrant.com (January 22, 2013). Retrieved September 27, 2013. and will reprise his role as Conan the Barbarian in The Legend of Conan,Fleming, Mike (October 25, 2012). "Arnold And 'Conan The Barbarian' Reunited: Universal Reboots Action Franchise With Schwarzenegger." Deadline.com. Retrieved October 30, 2012. later renamed Conan the Conqueror.
In August 2016, his filming of the action comedy Why We're Killing Gunther was temporarily interrupted by bank robbers near the filming location in Surrey, British Columbia. He was also announced to star in and produce a film about the ruins of Sanxingdui, The Guest of Sanxingdui, as an ambassador.
The Celebrity Apprentice
In September 2015, it was announced Schwarzenegger would replace Donald Trump as host of The Celebrity Apprentice. This show, the 15th season of The Apprentice, will air in the 2016–2017 TV season.
Filmography
Selected notable roles:
Hercules in New York as Hercules (1970)
Stay Hungry as Joe Santo (1976)
Pumping Iron as himself (1977)
The Villain as Handsome Stranger (1979)
The Jayne Mansfield Story as Mickey Hargitay (1980)
Conan the Barbarian as Conan (1982)
Conan the Destroyer as Conan (1984)
The Terminator as The Terminator/T-800 Model 101 (1984)
Red Sonja as Kalidor (1985)
Commando as John Matrix (1985)
Raw Deal as Mark Kaminsky, a.k.a. Joseph P. Brenner (1986)
Predator as Major Alan "Dutch" Schaeffer (1987)
The Running Man as Ben Richards (1987)
Red Heat as Captain Ivan Danko (1988)
Twins as Julius Benedict (1988)
Total Recall as Douglas Quaid/Hauser (1990)
Kindergarten Cop as Detective John Kimble (1990)
Terminator 2: Judgment Day as The Terminator/T-800 Model 101 (1991)
Last Action Hero as Jack Slater / Himself (1993)
True Lies as Harry Tasker (1994)
Junior as Dr. Alex Hesse (1994)
Eraser as U.S. Marshal John Kruger (1996)
Jingle All the Way as Howard Langston (1996)
Batman and Robin as Mr. Freeze (1997)
End of Days as Jericho Cane (1999)
The 6th Day as Adam Gibson / Adam Gibson Clone (2000)
Collateral Damage as Gordy Brewer (2002)
Terminator 3: Rise of the Machines as The Terminator/T-850 Model 101 (2003)
Around the World in 80 Days as Prince Hapi (2004)
The Expendables as Trench (2010)
The Expendables 2 as Trench (2012)
The Last Stand as Sheriff Ray Owens (2013)
Escape Plan as Rottmayer (2013)
Sabotage as John 'Breacher' Wharton (2014)
The Expendables 3 as Trench (2014)
Maggie as Wade Vogel (2015)
Terminator Genisys as The Terminator/T-800 Model 101/ The Guardian (2015)
478 as Victor (2016)
Why We're Killing Gunther as Gunther (2017)
Journey to China: The Mystery of Iron Mask (2017)
Blanco as Nathan Brand (2017)
The Expendables 4 as Trench (2018)
Political career
Early politics
thumb|Vice President Dick Cheney meets with Schwarzenegger for the first time at the White House
Schwarzenegger has been a registered Republican for many years. As an actor, he made his political views well known; they contrasted with those of many other prominent Hollywood stars, a community generally considered liberal and Democratic-leaning. At the 2004 Republican National Convention, Schwarzenegger gave a speech in which he explained that he was a Republican because the Democrats of the 1960s sounded too much like Austrian socialists.
In 1985, Schwarzenegger appeared in "Stop the Madness", an anti-drug music video sponsored by the Reagan administration. He first came to wide public notice as a Republican during the 1988 presidential election, accompanying then-Vice President George H. W. Bush at a campaign rally.
Schwarzenegger's first political appointment was as chairman of the President's Council on Physical Fitness and Sports, on which he served from 1990 to 1993. He was nominated by George H. W. Bush, who dubbed him "Conan the Republican". He later served as Chairman for the California Governor's Council on Physical Fitness and Sports under Governor Pete Wilson.
Between 1993 and 1994, Schwarzenegger was a Red Cross ambassador (a ceremonial role fulfilled by celebrities), recording several television/radio public service announcements urging people to donate blood.
In an interview with Talk magazine in late 1999, Schwarzenegger was asked if he thought of running for office. He replied, "I think about it many times. The possibility is there, because I feel it inside." The Hollywood Reporter claimed shortly after that Schwarzenegger sought to end speculation that he might run for governor of California. Following his initial comments, Schwarzenegger said, "I'm in show business – I am in the middle of my career. Why would I go away from that and jump into something else?"
Governor of California
Schwarzenegger announced his candidacy in the 2003 California recall election for Governor of California on the August 6, 2003, episode of The Tonight Show with Jay Leno. Schwarzenegger had the most name recognition in a crowded field of candidates, but he had never held public office and his political views were unknown to most Californians. His candidacy immediately became national and international news, with media outlets dubbing him the "Governator" (referring to The Terminator movies, see above) and "The Running Man" (the name of another one of his films), and calling the recall election "Total Recall" (yet another movie starring Schwarzenegger). Schwarzenegger declined to participate in several debates with other recall replacement candidates, and appeared in only one debate on September 24, 2003.
thumb|200px|left|President George W. Bush meets with Schwarzenegger after his successful election to the California Governorship.
On October 7, 2003, the recall election resulted in Governor Gray Davis being removed from office with 55.4% of the Yes vote in favor of a recall. Schwarzenegger was elected Governor of California under the second question on the ballot with 48.6% of the vote to choose a successor to Davis. Schwarzenegger defeated Democrat Cruz Bustamante, fellow Republican Tom McClintock, and others. His nearest rival, Bustamante, received 31% of the vote. In total, Schwarzenegger won the election by about 1.3 million votes. Under the regulations of the California Constitution, no runoff election was required. Schwarzenegger was the second foreign-born governor of California after Irish-born Governor John G. Downey in 1862.
Schwarzenegger set about what he considered to be his mandate of cleaning up gridlock. Building on a catchphrase from the sketch "Hans and Franz" from Saturday Night Live (which partly parodied his bodybuilding career), Schwarzenegger called the Democratic state politicians "girlie men".
Schwarzenegger's early victories included repealing an unpopular increase in the vehicle registration fee as well as preventing driver's licenses being given out to illegal immigrants, but later he began to feel the backlash when powerful state unions began to oppose his various initiatives. Key among his reckoning with political realities was a special election he called in November 2005, in which four ballot measures he sponsored were defeated. Schwarzenegger accepted personal responsibility for the defeats and vowed to continue to seek consensus for the people of California. He would later comment that "no one could win if the opposition raised 160 million dollars to defeat you". The U.S. Supreme Court later found the public employee unions' use of compulsory fundraising during the campaign had been illegal in Knox v. Service Employees International Union, Local 1000.
Schwarzenegger then went against the advice of fellow Republican strategists and appointed a Democrat, Susan Kennedy, as his Chief of Staff. Schwarzenegger gradually moved towards a more politically moderate position, determined to build a winning legacy with only a short time to go until the next gubernatorial election.
Schwarzenegger ran for re-election against Democrat Phil Angelides, the California State Treasurer, in the 2006 elections, held on November 7, 2006. Despite a poor year nationally for the Republican party, Schwarzenegger won re-election with 56.0% of the vote compared with 38.9% for Angelides, a margin of well over one million votes. In recent years, many commentators have seen Schwarzenegger as moving away from the right and towards the center of the political spectrum. After hearing a speech by Schwarzenegger at the 2006 Martin Luther King Jr. breakfast, San Francisco mayor Gavin Newsom said that, "[H]e's becoming a Democrat [… H]e's running back, not even to the center. I would say center-left".
It was rumored that Schwarzenegger might run for the United States Senate in 2010, as his governorship would be term-limited by that time. This turned out to be false.
thumb|200px|left|With Schwarzenegger and Senator Dianne Feinstein behind him, President George W. Bush comments on wildfires and firefighting efforts in California, October 2007.
Wendy Leigh, who wrote an unofficial biography on Schwarzenegger, claims he plotted his political rise from an early age using the movie business and bodybuilding as building blocks to escape a depressing home. Leigh portrays Schwarzenegger as obsessed with power and quotes him as saying, "I wanted to be part of the small percentage of people who were leaders, not the large mass of followers. I think it is because I saw leaders use 100% of their potential – I was always fascinated by people in control of other people." Schwarzenegger has said that it was never his intention to enter politics, but he says, "I married into a political family. You get together with them and you hear about policy, about reaching out to help people. I was exposed to the idea of being a public servant and Eunice and Sargent Shriver became my heroes." Eunice Kennedy Shriver was the sister of John F. Kennedy and Schwarzenegger's mother-in-law; Sargent Shriver was Eunice's husband and Schwarzenegger's father-in-law. He cannot run for president as he is not a natural born citizen of the United States. In The Simpsons Movie (2007), he is portrayed as the president, and in the Sylvester Stallone movie, Demolition Man (1993, ten years before his first run for political office), it is revealed that a constitutional amendment passed which allowed Schwarzenegger to become president.
Schwarzenegger is a dual Austrian/United States citizen. He has held Austrian citizenship since birth and U.S. citizenship since becoming naturalized in 1983. Being Austrian and thus European, he was able to win the 2007 European Voice campaigner of the year award for taking action against climate change with the California Global Warming Solutions Act of 2006 and plans to introduce an emissions trading scheme with other US states and possibly with the EU.
thumb|200px|Governor Schwarzenegger during his visit to Naval Medical Center in San Diego, July 2010.
Because of his personal wealth from his acting career, Schwarzenegger did not accept his governor's salary of $175,000 per year.
Schwarzenegger's endorsement in the Republican primary of the 2008 U.S. presidential election was highly sought; despite being good friends with candidates Rudy Giuliani and Senator John McCain, Schwarzenegger remained neutral throughout 2007 and early 2008. Giuliani dropped out of the presidential race on January 30, 2008, largely because of a poor showing in Florida, and endorsed McCain. Later that night, Schwarzenegger was in the audience at a Republican debate at the Ronald Reagan Presidential Library in California. The following day, he endorsed McCain, joking, "It's Rudy's fault!" (in reference to his friendships with both candidates and that he could not make up his mind). Schwarzenegger's endorsement was thought to be a boost for Senator McCain's campaign; both spoke about their concerns for the environment and economy.
In its April 2010 report, the progressive ethics watchdog group Citizens for Responsibility and Ethics in Washington named Schwarzenegger one of 11 "worst governors" in the United States because of various ethics issues throughout his term as governor.
Governor Schwarzenegger played a significant role in opposing Proposition 66, a proposed amendment of the Californian Three Strikes Law, in November 2004. This amendment would have required the third felony to be either violent or serious to mandate a 25-years-to-life sentence. In the last week before the ballot, Schwarzenegger launched an intensive campaign against Proposition 66. He stated that "it would release 26,000 dangerous criminals and rapists".
Although he began his tenure as governor with record high approval ratings (as high as 89% in December 2003), he left office with a record low of 23%,Schwarzenegger leaves office with huge state deficit Xinhua English News. January 3, 2011 only one percentage point higher than Gray Davis's rating when he was recalled in October 2003.
Death of Louis Santos
In 2010, Esteban Núñez pleaded guilty to involuntary manslaughter and was sentenced to 16 years in prison for the death of Louis Santos. Núñez was the son of Fabian Núñez, then Speaker of the California State Assembly and a close friend and staunch political ally of then-governor Schwarzenegger.
As a personal favor to "a friend", just hours before he left office, and as one of his last official acts, Schwarzenegger commuted Núñez's sentence by more than half, to seven years.http://www.nytimes.com/2011/01/04/us/04pardon.html?_r=0 Against protocol, Schwarzenegger did not inform Santos' family or the San Diego County prosecutors about the commutation. They learned about it in a call from a reporter.
The Santos family, along with the San Diego district attorney, sued to stop the commutation, claiming that it violated Marsy's Law. In September 2012, Sacramento County superior court judge Lloyd Connelly stated, "Based on the evidentiary records before this court involving this case, there was an abuse of discretion... This was a distasteful commutation. It was repugnant to the bulk of the citizenry of this state." However, Connelly ruled that Schwarzenegger remained within his executive powers as governor. Subsequently, as a direct result of the way the commutation was handled, Governor Jerry Brown signed a bipartisan bill requiring that an offender's victims and their families be given at least 10 days' notice before any commutation. Núñez was released from prison after serving less than six years.http://www.latimes.com/local/lanow/la-me-ln-esteban-nunez-released-from-prison-20160410-story.html
Allegations of sexual misconduct
thumb|Code Pink protesting against Schwarzenegger
During his initial campaign for governor, allegations of sexual and personal misconduct were raised against Schwarzenegger, dubbed "Gropegate". Within the last five days before the election, news reports appeared in the Los Angeles Times recounting allegations of sexual misconduct from several individual women, six of whom eventually came forward with their personal stories.
Three of the women claimed he had grabbed their breasts, while a fourth said he had placed his hand under her skirt on her buttock. A fifth woman claimed Schwarzenegger tried to take off her bathing suit in a hotel elevator, and the last said he pulled her onto his lap and asked her about a sex act.
Schwarzenegger admitted that he has "behaved badly sometimes" and apologized, but also stated that "a lot of [what] you see in the stories is not true". This came after an interview in adult magazine Oui from 1977 surfaced, in which Schwarzenegger discussed attending sexual orgies and using substances such as marijuana. Schwarzenegger is shown smoking a marijuana joint after winning Mr. Olympia in the 1975 documentary film Pumping Iron. In an interview with GQ magazine in October 2007, Schwarzenegger said, "[Marijuana] is not a drug. It's a leaf. My drug was pumping iron, trust me." His spokesperson later said the comment was meant to be a joke.
British television personality Anna Richardson settled a libel lawsuit in August 2006 against Schwarzenegger, his top aide, Sean Walsh, and his publicist, Sheryl Main. A joint statement read: "The parties are content to put this matter behind them and are pleased that this legal dispute has now been settled." Richardson claimed they tried to tarnish her reputation by dismissing her allegations that Schwarzenegger touched her breast during a press event for The 6th Day in London. She claimed Walsh and Main libeled her in a Los Angeles Times article when they contended she encouraged his behavior.
Citizenship
thumb|right|Schwarzenegger in 2004
Schwarzenegger became a naturalized U.S. citizen on September 17, 1983. Shortly before he gained his citizenship, he asked the Austrian authorities for the right to keep his Austrian citizenship, as Austria does not usually allow dual citizenship. His request was granted, and he retained his Austrian citizenship.Leamer, p. 199-200 In 2005, Peter Pilz, a member of the Austrian Parliament from the Austrian Green Party, unsuccessfully advocated for Parliament to revoke Schwarzenegger's Austrian citizenship due to his decision not to prevent the executions of Donald Beardslee and Stanley Williams. Pilz argued that Schwarzenegger caused damage to Austria's reputation in the international community, because Austria abolished the death penalty in 1968. Pilz based his argument on Article 33 of the Austrian Citizenship Act, which states: "A citizen, who is in the public service of a foreign country, shall be deprived of his citizenship, if he heavily damages the reputation or the interests of the Austrian Republic." Pilz claimed that Schwarzenegger's actions in support of the death penalty (prohibited in Austria under Protocol 13 of the European Convention on Human Rights) had damaged Austria's reputation. Schwarzenegger explained his actions by pointing out that his only duty as Governor of California with respect to the death penalty was to correct an error by the justice system by pardon or clemency, if such an error had occurred.
Environmental record
On September 27, 2006, Schwarzenegger signed the Global Warming Solutions Act of 2006, creating the nation's first cap on greenhouse gas emissions. The law set new regulations on the amount of emissions utilities, refineries and manufacturing plants are allowed to release into the atmosphere. Schwarzenegger also signed a second global warming bill that prohibits large utilities and corporations in California from making long-term contracts with suppliers who do not meet the state's greenhouse gas emission standards. The two bills are part of a plan to reduce California's emissions by 25 percent, to 1990 levels, by 2020. In 2005, Schwarzenegger issued an executive order calling for greenhouse gas emissions to be reduced to 80 percent below 1990 levels by 2050.
Schwarzenegger signed another executive order on October 17, 2006, allowing California to work with the Northeast's Regional Greenhouse Gas Initiative. They plan to reduce carbon dioxide emissions by issuing a limited number of carbon credits to each power plant in participating states. Any power plant whose emissions exceed its allotted credits has to purchase additional credits to cover the difference. The plan took effect in 2009. In addition to using his political power to fight global warming, the governor has taken steps at his home to reduce his personal carbon footprint. Schwarzenegger has adapted one of his Hummers to run on hydrogen and another to run on biofuels. He has also installed solar panels to heat his home."The Governator's green agenda" Fortune. March 23, 2007. Retrieved May 15, 2008.
In recognition of his contribution to the direction of the US motor industry, Schwarzenegger was invited to open the 2009 SAE World Congress in Detroit on April 20, 2009."SAE 2009 World Congress Special Opening Ceremonies to Feature Gov. Arnold Schwarzenegger" SAE. March 10, 2009. Retrieved April 6, 2009.
In 2011, Schwarzenegger founded the R20 Regions of Climate Action to develop a sustainable, low carbon economy.
Electoral history
Presidential ambitions
In October 2013, the New York Post reported that Schwarzenegger was exploring a future run for president. The former California governor would face a constitutional hurdle; Article II, Section I, Clause V nominally prevents individuals who are not natural-born citizens of the United States from assuming the office. He has reportedly been lobbying legislators about a possible constitutional change, or filing a legal challenge to the provision. Columbia University law professor Michael Dorf observed that Schwarzenegger's possible lawsuit could ultimately win him the right to run for the office, noting, "The law is very clear, but it's not 100 percent clear that the courts would enforce that law rather than leave it to the political process."
Business career
Schwarzenegger has had a highly successful business career. Following his move to the United States, Schwarzenegger became a "prolific goal setter" and would write his objectives for the year on index cards – such as starting a mail order business or buying a new car – and then succeed in achieving them. By the age of 30, Schwarzenegger was a millionaire, well before his career in Hollywood. His financial independence came from his success as a budding entrepreneur, with a series of profitable business ventures and investments.
Bricklaying business
In 1968, Schwarzenegger and fellow bodybuilder Franco Columbu started a bricklaying business. The business flourished thanks to the pair's marketing savvy and an increased demand following the 1971 San Fernando earthquake. Schwarzenegger and Columbu used profits from their bricklaying venture to start a mail order business, selling bodybuilding and fitness-related equipment and instructional tapes.
Investments
Schwarzenegger rolled profits from the mail order business and his bodybuilding competition winnings into his first real estate investment venture: an apartment building he purchased for $10,000. He would later go on to invest in a number of real estate holding companies.
Schwarzenegger was a founding celebrity investor in the Planet Hollywood chain of international theme restaurants (modeled after the Hard Rock Cafe) along with Bruce Willis, Sylvester Stallone and Demi Moore. Schwarzenegger severed his financial ties with the business in early 2000. Schwarzenegger said the company had not had the success he had hoped for, claiming he wanted to focus his attention on "new US global business ventures" and his movie career.
He also invested in a shopping mall in Columbus, Ohio. He has talked about some of those who have helped him over the years in business: "I couldn't have learned about business without a parade of teachers guiding me... from Milton Friedman to Donald Trump... and now, Les Wexner and Warren Buffett. I even learned a thing or two from Planet Hollywood, such as when to get out! And I did!" He has significant ownership in Dimensional Fund Advisors, an investment firm. Schwarzenegger is also the owner of Arnold's Sports Festival, which he started in 1989 and is held annually in Columbus, Ohio. The festival hosts thousands of international health and fitness professionals and has expanded into a three-day expo. He also owns a movie production company called Oak Productions, Inc. and Fitness Publications, a joint publishing venture with Simon & Schuster.
Restaurant
In 1992, Schwarzenegger and his wife opened a restaurant in Santa Monica called Schatzi On Main. Schatzi literally means "little treasure" and is a colloquial German term for "honey" or "darling". He sold the restaurant in 1998.
Wealth
Schwarzenegger's net worth had been conservatively estimated at $100–$200 million. After he separated from his wife, Maria Shriver, in 2011, his net worth was estimated at approximately $400 million, and even as high as $800 million, based on tax returns he filed in 2006.
Over the years, he invested his bodybuilding and movie earnings in an array of stocks, bonds, privately controlled companies, and real estate holdings worldwide, making an accurate estimate of his net worth difficult, particularly in light of declining real estate values owing to economic recessions in the U.S. and Europe since the late 2000s. In June 1997, Schwarzenegger spent $38 million of his own money on a private Gulfstream jet. Schwarzenegger once said of his fortune, "Money doesn't make you happy. I now have $50 million, but I was just as happy when I had $48 million."
Commercial advertisements
He appears in a series of commercials for the Machine Zone game Mobile Strike as a military commander and spokesman.
Personal life
Early relationships
In 1969, Schwarzenegger met Barbara Outland (later Barbara Outland Baker), an English teacher he lived with until 1974. Schwarzenegger talked about Barbara in his memoir in 1977: "Basically it came down to this: she was a well-balanced woman who wanted an ordinary, solid life, and I was not a well-balanced man, and hated the very idea of ordinary life." Baker has described Schwarzenegger as "[a] joyful personality, totally charismatic, adventurous, and athletic" but claims towards the end of the relationship he became "insufferable – classically conceited – the world revolved around him". Baker published her memoir in 2006, entitled Arnold and Me: In the Shadow of the Austrian Oak. Although Baker, at times, painted an unflattering portrait of her former lover, Schwarzenegger actually contributed to the tell-all book with a foreword, and also met with Baker for three hours. Baker claims, for example, that she only learned of his being unfaithful after they split, and talks of a turbulent and passionate love life. Schwarzenegger has made it clear that their respective recollections of events can differ. The couple first met six to eight months after his arrival in the U.S. – their first date was watching the first Apollo Moon landing on television. They shared an apartment in Santa Monica for three and a half years, and having little money, would visit the beach all day, or have barbecues in the back yard. Although Baker claims that when she first met him, he had "little understanding of polite society" and she found him a turn-off, she says, "He's as much a self-made man as it's possible to be – he never got encouragement from his parents, his family, his brother. He just had this huge determination to prove himself, and that was very attractive … I'll go to my grave knowing Arnold loved me."
Schwarzenegger met his next paramour, Sue Moray, a Beverly Hills hairdresser's assistant, on Venice Beach in July 1977. According to Moray, the couple led an open relationship: "We were faithful when we were both in LA … but when he was out of town, we were free to do whatever we wanted." Schwarzenegger met Maria Shriver at the Robert F. Kennedy Tennis Tournament in August 1977, and went on to have a relationship with both women until August 1978, when Moray (who knew of his relationship with Shriver) issued an ultimatum.
Marriage and family
thumb|right|Schwarzenegger with his wife Maria Shriver at the 2007 Special Olympics in Shanghai, China
thumb|left|200px|Schwarzenegger and his son Patrick at Edwards Air Force Base, California in December 2002
On April 26, 1986, Schwarzenegger married television journalist Maria Shriver, niece of President John F. Kennedy, in Hyannis, Massachusetts. The Rev. John Baptist Riordan performed the ceremony at St. Francis Xavier Catholic Church. They have four children: Katherine Eunice Schwarzenegger (born December 13, 1989); Christina Maria Aurelia Schwarzenegger (born July 23, 1991); Patrick Arnold Shriver Schwarzenegger (born September 18, 1993); and Christopher Sargent Shriver Schwarzenegger (born September 27, 1997); all born in Los Angeles. Schwarzenegger lives in a home in Brentwood. The divorcing couple currently own vacation homes in Sun Valley, Idaho, and Hyannis Port, Massachusetts. They attended St. Monica's Catholic Church. Following their separation, it is reported that Schwarzenegger is dating physical therapist Heather Milligan.
Marital separation
On May 9, 2011, Shriver and Schwarzenegger ended their relationship after 25 years of marriage, with Shriver moving out of the couple's Brentwood mansion. On May 16, 2011, the Los Angeles Times revealed that Schwarzenegger had fathered a son more than fourteen years earlier with an employee in their household, Mildred Patricia 'Patty' Baena. "After leaving the governor's office I told my wife about this event, which occurred over a decade ago," Schwarzenegger said in a statement issued to The Times. In the statement, Schwarzenegger did not mention that he had confessed to his wife only after Shriver had confronted him with the information, which she had done after confirming with the housekeeper what she had suspected about the child.
Baena, who is of Guatemalan origin, was employed by the family for 20 years and retired in January 2011. The pregnant Baena was working in the home while Shriver was pregnant with the youngest of the couple's four children. Baena's son with Schwarzenegger, Joseph, was born on October 2, 1997; Shriver gave birth to Christopher on September 27, 1997. Schwarzenegger says it took seven or eight years before he found out that he had fathered a child with his housekeeper. It wasn't until the boy "started looking like me, that's when I kind of got it. I put things together," the action star and former California governor told 60 Minutes.Arnold Schwarzenegger: The Boy 'Started Looking Like Me' People, October 1, 2012 Schwarzenegger has taken financial responsibility for the child "from the start and continued to provide support." KNX 1070 radio reported that in 2010 he bought a new four-bedroom house, with a pool, for Baena and their son in Bakersfield, north of Los Angeles. Baena separated from her husband, Rogelio, in 1997, a few months after Joseph's birth, and filed for divorce in 2008. Baena's ex-husband says that the child's birth certificate was falsified and that he plans to sue Schwarzenegger for engaging in conspiracy to falsify a public document, a serious crime in California.Ex-husband of Schwarzenegger's lover plans to sue (AFP) May 29, 2011
Schwarzenegger has consulted an attorney, Bob Kaufman. Kaufman has previously handled divorce cases for celebrities such as Jennifer Aniston and Reese Witherspoon. Schwarzenegger will keep the Brentwood home as part of their divorce settlement, and Shriver has purchased a new home nearby so that the children may travel easily between their parents' homes. They will share custody of the two minor children. Schwarzenegger came under fire after the initial petition did not include spousal support and a reimbursement of attorney's fees. However, he claims this was not intentional and that he signed the initial documents without having properly read them. Schwarzenegger has filed amended divorce papers remedying this.
After the scandal, actress Brigitte Nielsen came forward and stated that she too had an affair with Schwarzenegger while he was in a relationship with Shriver, saying, "Maybe I wouldn't have got into it if he said 'I'm going to marry Maria' and this is dead serious, but he didn't, and our affair carried on." When asked in 2014 "Of all the things you are famous for … which are you least proud of?", Schwarzenegger replied "I'm least proud of the mistakes I made that caused my family pain and split us up".
Accidents and injuries
Schwarzenegger was born with a bicuspid aortic valve, an aortic valve with only two leaflets (a normal aortic valve has three leaflets). Schwarzenegger opted in 1997 for a replacement heart valve made of his own transplanted tissue; medical experts predicted he would require heart valve replacement surgery in the following two to eight years as his valve would progressively degrade. Schwarzenegger apparently opted against a mechanical valve, the only permanent solution available at the time of his surgery, because it would have sharply limited his physical activity and capacity to exercise.
On December 9, 2001, he broke six ribs and was hospitalized for four days after a motorcycle crash in Los Angeles.
Schwarzenegger saved a drowning man's life in 2004 while on vacation in Hawaii by swimming out and bringing him back to shore.
On January 8, 2006, while Schwarzenegger was riding his Harley Davidson motorcycle in Los Angeles, with his son Patrick in the sidecar, another driver backed into the street he was riding on, causing him and his son to collide with the car at a low speed. While his son and the other driver were unharmed, the governor sustained a minor injury to his lip, requiring 15 stitches. "No citations were issued", said Officer Jason Lee, a Los Angeles Police Department spokesman. Schwarzenegger did not obtain his motorcycle license until July 3, 2006.
Schwarzenegger tripped over his ski pole and broke his right femur while skiing in Sun Valley, Idaho, with his family on December 23, 2006. On December 26, 2006, he underwent a 90-minute operation in which cables and screws were used to wire the broken bone back together. He was released from the St. John's Health Center on December 30, 2006.
Schwarzenegger's private jet made an emergency landing at Van Nuys Airport on June 19, 2009, after the pilot reported smoke coming from the cockpit, according to a statement released by the governor's press secretary. No one was harmed in the incident.
Height
Schwarzenegger's official height of 6'2" (1.88 m) has been brought into question by several articles. In his bodybuilding days in the late 1960s, he was measured to be 6'1.5" (1.87 m), a height confirmed by his fellow bodybuilders. However, in 1988 both the Daily Mail and Time Out magazine mentioned that Schwarzenegger appeared noticeably shorter.Andrews, N: "True Myths: The life and times of Arnold Schwarzenegger," page 157. Bloomsbury, 2003 Prior to running for Governor, Schwarzenegger's height was once again questioned in an article by the Chicago Reader. As Governor, Schwarzenegger engaged in a light-hearted exchange with Assemblyman Herb Wesson over their heights. At one point, Wesson made an unsuccessful attempt to, in his own words, "settle this once and for all and find out how tall he is" by using a tailor's tape measure on the Governor. Schwarzenegger retaliated by placing a pillow stitched with the words "Need a lift?" on the five-foot-five inch (165 cm) Wesson's chair before a negotiating session in his office. Bob Mulholland also claimed Schwarzenegger was 5'10" (1.78 m) and that he wore risers in his boots. In 1999, Men's Health magazine stated his height was 5'10".
Autobiography
Schwarzenegger's autobiography, Total Recall, was released in October 2012. He devotes one chapter called "The Secret" to his extramarital affair. The majority of his book is about his successes in the three major chapters in his life: bodybuilder, actor, and Governor of California.
Vehicles
Schwarzenegger was the first civilian to purchase a Humvee. He was so enamored of the vehicle that he lobbied the Humvee's manufacturer, AM General, to produce a street-legal, civilian version, which they did in 1992; the first two Hummer H1s they sold were also purchased by Schwarzenegger. In 2010, he had one regular Hummer and three running on non-fossil power sources: one on hydrogen, one on vegetable oil, and one on biodiesel. Schwarzenegger was in the news in 2014 for buying a rare Bugatti Veyron Grand Sport Vitesse. He was spotted and filmed driving the car in 2015; it was painted silver with bright forged aluminium wheels, and its interior is adorned in dark brown leather. In 2017, Schwarzenegger acquired a Mercedes G-Class modified for all-electric drive.
Activism
thumb|Arnold Schwarzenegger in 2003
The Hummers that Schwarzenegger bought in 1992 are so large and heavy that they are classified as large trucks, and U.S. fuel economy regulations do not apply to them. During the gubernatorial recall campaign he announced that he would convert one of his Hummers to burn hydrogen. The conversion was reported to have cost about US$21,000. After the election, he signed an executive order to jump-start the building of hydrogen refueling plants called the California Hydrogen Highway Network, and gained a U.S. Department of Energy grant to help pay for its projected US$91,000,000 cost. California took delivery of the first H2H (Hydrogen Hummer) in October 2004.
Schwarzenegger has been involved with the Special Olympics for many years after they were founded by his ex-mother-in-law, Eunice Kennedy Shriver. In 2007, Schwarzenegger was the official spokesperson for the Special Olympics, which were held in Shanghai, China. Schwarzenegger believes that quality school opportunities should be made available to children who might not normally be able to access them. In 1995, he founded the Inner City Games Foundation (ICG), which provides cultural, educational and community enrichment programming to youth. ICG is active in 15 cities around the country and serves over 250,000 children in over 400 schools countrywide. He has also been involved with After-School All-Stars, and founded the Los Angeles branch in 2002. ASAS is an after-school program provider, educating youth about health, fitness and nutrition.
On February 12, 2010, Schwarzenegger took part in the Vancouver Olympic Torch relay. He handed off the flame to the next runner, Sebastian Coe.
Schwarzenegger is a lifelong supporter and "friend of Israel", and has participated in L.A.'s Pro-Israel rally among other similar events.
In 2012, Schwarzenegger helped to found the Schwarzenegger Institute for State and Global Policy, which is a part of the USC Sol Price School of Public Policy at the University of Southern California. The Institute's mission is to "[advance] post-partisanship, where leaders put people over political parties and work together to find the best ideas and solutions to benefit the people they serve" and to "seek to influence public policy and public debate in finding solutions to the serious challenges we face". Schwarzenegger serves as chairman of the Institute.
At a 2015 security conference, Arnold Schwarzenegger called climate change the issue of our time.
For the 2016 Republican Party presidential primaries, Schwarzenegger endorsed fellow Republican John Kasich. However, he announced in October that he would not vote for the Republican presidential candidate Donald Trump in that year's United States presidential election, the first time he had not voted for the Republican candidate since becoming a citizen in 1983.
Awards and honors
Seven-time Mr. Olympia winner
Four-time Mr. Universe winner
1969 World Amateur Bodybuilding Champion
1977 Golden Globe Award winner
Star on the Hollywood Walk of Fame
International Sports Hall of Fame (class of 2012)Arnold Schwarzenegger - International Sports HOF
WWE Hall of Fame (class of 2015)
Schwarzenegger Institute for State and Global Policy (part of the USC Sol Price School of Public Policy at the University of Southern California) named in his honor.
Arnold's Run ski trail at Sun Valley Resort named in his honor. The trail is categorized as a black diamond, or most difficult, for its terrain.
"A Day for Arnold" on July 30, 2007 in Thal, Austria. For his 60th birthday the mayor sent Schwarzenegger the enameled address sign (Thal 145) of the house where Schwarzenegger was born, declaring "This belongs to him. No one here will ever be assigned that number again".
Books
See also
List of U.S. state governors born outside the United States
Kennedy family tree
References
Further reading
External links
Arnold Schwarzenegger Museum
Arnold Schwarzenegger: Wild Years – slideshow by Life magazine
Governorship
Office of Governor Arnold Schwarzenegger
Complete text and audio of Governor Schwarzenegger's Speech to the United Nations on Global Climate Change AmericanRhetoric.com, September 24, 2007
Complete text, audio, video of Governor Schwarzenegger's 2004 Republican National Convention Address AmericanRhetoric.com
Archive of Correspondence pertaining to Governor Schwarzenegger and same-sex marriage AB 43 Project
Interviews
Interview in Oui magazine, August 1977 at thesmokinggun.com
Excerpts from Time Out (London) interview, 1977 at time.com
Schwarzenegger Interview on The Hour'' with George Stroumboulopoulos
Film
Category:1947 births
Category:20th-century American male actors
Category:20th-century American businesspeople
Category:20th-century American writers
Category:20th-century Austrian male actors
Category:20th-century Austrian writers
Category:21st-century American male actors
Category:21st-century American businesspeople
Category:21st-century American writers
Category:Activists from California
Category:American actor-politicians
Category:American athlete-politicians
Category:American autobiographers
Category:American bodybuilders
Category:American businesspeople in retailing
Category:American Christian Zionists
Category:American education writers
Category:American exercise and fitness writers
Category:American health activists
Category:American instructional writers
Category:American investors
Category:American male film actors
Category:American people of Austrian descent
Category:American philanthropists
Category:American publishers (people)
Category:American real estate businesspeople
Category:American restaurateurs
Category:American Roman Catholics
Category:American stock traders
Category:Austrian autobiographers
Category:Austrian bodybuilders
Category:Austrian powerlifters
Category:Austrian emigrants to the United States
Category:Austrian film directors
Category:Austrian film producers
Category:Austrian health activists
Category:Austrian investors
Category:Austrian male film actors
Category:Austrian philanthropists
Category:Austrian publishers (people)
Category:Austrian real estate businesspeople
Category:Austrian restaurateurs
Category:Austrian Roman Catholics
Category:Austrian soldiers
Category:Businesspeople from Los Angeles
Category:California Republicans
Category:Conservatism in the United States
Category:Film directors from California
Category:Film producers from California
Category:Governors of California
Category:Kennedy family
Category:Laureus World Sports Awards winners
Category:Living people
Category:Male actors from Los Angeles
Category:New Star of the Year (Actor) Golden Globe winners
Category:Sportspeople from Graz
Category:People with acquired American citizenship
Category:Republican Party state governors of the United States
Category:Schwarzenegger family
Category:Shriver family
Category:Sportspeople from Los Angeles
Category:State and local political sex scandals in the United States
Category:University of California regents
Category:University of Wisconsin–Superior alumni
Category:Writers from Los Angeles
Category:WWE Hall of Fame
Category:People from Brentwood, Los Angeles